Nov 29 06:17:16 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 06:17:16 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 06:17:16 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:17:16 localhost kernel: BIOS-provided physical RAM map:
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 06:17:16 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 06:17:16 localhost kernel: NX (Execute Disable) protection: active
Nov 29 06:17:16 localhost kernel: APIC: Static calls initialized
Nov 29 06:17:16 localhost kernel: SMBIOS 2.8 present.
Nov 29 06:17:16 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 06:17:16 localhost kernel: Hypervisor detected: KVM
Nov 29 06:17:16 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 06:17:16 localhost kernel: kvm-clock: using sched offset of 4274571516 cycles
Nov 29 06:17:16 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 06:17:16 localhost kernel: tsc: Detected 2800.000 MHz processor
Nov 29 06:17:16 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 29 06:17:16 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 29 06:17:16 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 06:17:16 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 06:17:16 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 06:17:16 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 06:17:16 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 06:17:16 localhost kernel: Using GB pages for direct mapping
Nov 29 06:17:16 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 06:17:16 localhost kernel: ACPI: Early table checksum verification disabled
Nov 29 06:17:16 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 06:17:16 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:17:16 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:17:16 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:17:16 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 06:17:16 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:17:16 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:17:16 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 06:17:16 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 06:17:16 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 06:17:16 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 06:17:16 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 06:17:16 localhost kernel: No NUMA configuration found
Nov 29 06:17:16 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 06:17:16 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 06:17:16 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 06:17:16 localhost kernel: Zone ranges:
Nov 29 06:17:16 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 06:17:16 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 06:17:16 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:17:16 localhost kernel:   Device   empty
Nov 29 06:17:16 localhost kernel: Movable zone start for each node
Nov 29 06:17:16 localhost kernel: Early memory node ranges
Nov 29 06:17:16 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 06:17:16 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 06:17:16 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:17:16 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 06:17:16 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 06:17:16 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 06:17:16 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 06:17:16 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 06:17:16 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 06:17:16 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 06:17:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 06:17:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 06:17:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 06:17:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 06:17:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 06:17:16 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 06:17:16 localhost kernel: TSC deadline timer available
Nov 29 06:17:16 localhost kernel: CPU topo: Max. logical packages:   8
Nov 29 06:17:16 localhost kernel: CPU topo: Max. logical dies:       8
Nov 29 06:17:16 localhost kernel: CPU topo: Max. dies per package:   1
Nov 29 06:17:16 localhost kernel: CPU topo: Max. threads per core:   1
Nov 29 06:17:16 localhost kernel: CPU topo: Num. cores per package:     1
Nov 29 06:17:16 localhost kernel: CPU topo: Num. threads per package:   1
Nov 29 06:17:16 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 06:17:16 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 06:17:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 06:17:16 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 06:17:16 localhost kernel: Booting paravirtualized kernel on KVM
Nov 29 06:17:16 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 06:17:16 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 06:17:16 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 06:17:16 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 29 06:17:16 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 29 06:17:16 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 06:17:16 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:17:16 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 06:17:16 localhost kernel: random: crng init done
Nov 29 06:17:16 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 06:17:16 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 06:17:16 localhost kernel: Fallback order for Node 0: 0 
Nov 29 06:17:16 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 06:17:16 localhost kernel: Policy zone: Normal
Nov 29 06:17:16 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 06:17:16 localhost kernel: software IO TLB: area num 8.
Nov 29 06:17:16 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 06:17:16 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 06:17:16 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 06:17:16 localhost kernel: Dynamic Preempt: voluntary
Nov 29 06:17:16 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 06:17:16 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 29 06:17:16 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 06:17:16 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 29 06:17:16 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 29 06:17:16 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 29 06:17:16 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 06:17:16 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 06:17:16 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:17:16 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:17:16 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:17:16 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 06:17:16 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 06:17:16 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 06:17:16 localhost kernel: Console: colour VGA+ 80x25
Nov 29 06:17:16 localhost kernel: printk: console [ttyS0] enabled
Nov 29 06:17:16 localhost kernel: ACPI: Core revision 20230331
Nov 29 06:17:16 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 06:17:16 localhost kernel: x2apic enabled
Nov 29 06:17:16 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 06:17:16 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 06:17:16 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 29 06:17:16 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 06:17:16 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 06:17:16 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 06:17:16 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 06:17:16 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 06:17:16 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 06:17:16 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 06:17:16 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 06:17:16 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 06:17:16 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 06:17:16 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 06:17:16 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 06:17:16 localhost kernel: x86/bugs: return thunk changed
Nov 29 06:17:16 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 06:17:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 06:17:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 06:17:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 06:17:16 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 06:17:16 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 06:17:16 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 29 06:17:16 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 29 06:17:16 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 06:17:16 localhost kernel: landlock: Up and running.
Nov 29 06:17:16 localhost kernel: Yama: becoming mindful.
Nov 29 06:17:16 localhost kernel: SELinux:  Initializing.
Nov 29 06:17:16 localhost kernel: LSM support for eBPF active
Nov 29 06:17:16 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:17:16 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:17:16 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 06:17:16 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 06:17:16 localhost kernel: ... version:                0
Nov 29 06:17:16 localhost kernel: ... bit width:              48
Nov 29 06:17:16 localhost kernel: ... generic registers:      6
Nov 29 06:17:16 localhost kernel: ... value mask:             0000ffffffffffff
Nov 29 06:17:16 localhost kernel: ... max period:             00007fffffffffff
Nov 29 06:17:16 localhost kernel: ... fixed-purpose events:   0
Nov 29 06:17:16 localhost kernel: ... event mask:             000000000000003f
Nov 29 06:17:16 localhost kernel: signal: max sigframe size: 1776
Nov 29 06:17:16 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 29 06:17:16 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 29 06:17:16 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 29 06:17:16 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 29 06:17:16 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 06:17:16 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 06:17:16 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 29 06:17:16 localhost kernel: node 0 deferred pages initialised in 41ms
Nov 29 06:17:16 localhost kernel: Memory: 7765680K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 29 06:17:16 localhost kernel: devtmpfs: initialized
Nov 29 06:17:16 localhost kernel: x86/mm: Memory block size: 128MB
Nov 29 06:17:16 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 06:17:16 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 06:17:16 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 06:17:16 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 06:17:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 06:17:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 06:17:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 06:17:16 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 29 06:17:16 localhost kernel: audit: type=2000 audit(1764397033.195:1): state=initialized audit_enabled=0 res=1
Nov 29 06:17:16 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 06:17:16 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 06:17:16 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 06:17:16 localhost kernel: cpuidle: using governor menu
Nov 29 06:17:16 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 06:17:16 localhost kernel: PCI: Using configuration type 1 for base access
Nov 29 06:17:16 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 29 06:17:16 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 06:17:16 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 06:17:16 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 06:17:16 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 06:17:16 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 06:17:16 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:17:16 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 06:17:16 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 29 06:17:16 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 29 06:17:16 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 06:17:16 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 06:17:16 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 06:17:16 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 06:17:16 localhost kernel: ACPI: Interpreter enabled
Nov 29 06:17:16 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 06:17:16 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 06:17:16 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 06:17:16 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 06:17:16 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 06:17:16 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 06:17:16 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [3] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [4] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [5] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [6] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [7] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [8] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [9] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [10] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [11] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [12] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [13] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [14] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [15] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [16] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [17] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [18] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [19] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [20] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [21] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [22] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [23] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [24] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [25] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [26] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [27] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [28] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [29] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [30] registered
Nov 29 06:17:16 localhost kernel: acpiphp: Slot [31] registered
Nov 29 06:17:16 localhost kernel: PCI host bridge to bus 0000:00
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 06:17:16 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 06:17:16 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 06:17:16 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:17:16 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 06:17:16 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 06:17:16 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 06:17:16 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 06:17:16 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 06:17:16 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 06:17:16 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 06:17:16 localhost kernel: iommu: Default domain type: Translated
Nov 29 06:17:16 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 06:17:16 localhost kernel: SCSI subsystem initialized
Nov 29 06:17:16 localhost kernel: ACPI: bus type USB registered
Nov 29 06:17:16 localhost kernel: usbcore: registered new interface driver usbfs
Nov 29 06:17:16 localhost kernel: usbcore: registered new interface driver hub
Nov 29 06:17:16 localhost kernel: usbcore: registered new device driver usb
Nov 29 06:17:16 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 06:17:16 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 06:17:16 localhost kernel: PTP clock support registered
Nov 29 06:17:16 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 29 06:17:16 localhost kernel: NetLabel: Initializing
Nov 29 06:17:16 localhost kernel: NetLabel:  domain hash size = 128
Nov 29 06:17:16 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 06:17:16 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 06:17:16 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 29 06:17:16 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 29 06:17:16 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 29 06:17:16 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 06:17:16 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 06:17:16 localhost kernel: vgaarb: loaded
Nov 29 06:17:16 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 06:17:16 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 06:17:16 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 06:17:16 localhost kernel: pnp: PnP ACPI init
Nov 29 06:17:16 localhost kernel: pnp 00:03: [dma 2]
Nov 29 06:17:16 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 29 06:17:16 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 06:17:16 localhost kernel: NET: Registered PF_INET protocol family
Nov 29 06:17:16 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 06:17:16 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 06:17:16 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 06:17:16 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 06:17:16 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 06:17:16 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 06:17:16 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 06:17:16 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 06:17:16 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 06:17:16 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 06:17:16 localhost kernel: NET: Registered PF_XDP protocol family
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 06:17:16 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 06:17:16 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 06:17:16 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 06:17:16 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73306 usecs
Nov 29 06:17:16 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 29 06:17:16 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 06:17:16 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 06:17:16 localhost kernel: ACPI: bus type thunderbolt registered
Nov 29 06:17:16 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 29 06:17:16 localhost kernel: Initialise system trusted keyrings
Nov 29 06:17:16 localhost kernel: Key type blacklist registered
Nov 29 06:17:16 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 06:17:16 localhost kernel: zbud: loaded
Nov 29 06:17:16 localhost kernel: integrity: Platform Keyring initialized
Nov 29 06:17:16 localhost kernel: integrity: Machine keyring initialized
Nov 29 06:17:16 localhost kernel: Freeing initrd memory: 85868K
Nov 29 06:17:16 localhost kernel: NET: Registered PF_ALG protocol family
Nov 29 06:17:16 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 29 06:17:16 localhost kernel: Key type asymmetric registered
Nov 29 06:17:16 localhost kernel: Asymmetric key parser 'x509' registered
Nov 29 06:17:16 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 06:17:16 localhost kernel: io scheduler mq-deadline registered
Nov 29 06:17:16 localhost kernel: io scheduler kyber registered
Nov 29 06:17:16 localhost kernel: io scheduler bfq registered
Nov 29 06:17:16 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 06:17:16 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 06:17:16 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 06:17:16 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 29 06:17:16 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 06:17:16 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 06:17:16 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 06:17:16 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 06:17:16 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 06:17:16 localhost kernel: Non-volatile memory driver v1.3
Nov 29 06:17:16 localhost kernel: rdac: device handler registered
Nov 29 06:17:16 localhost kernel: hp_sw: device handler registered
Nov 29 06:17:16 localhost kernel: emc: device handler registered
Nov 29 06:17:16 localhost kernel: alua: device handler registered
Nov 29 06:17:16 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 06:17:16 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 06:17:16 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 06:17:16 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 06:17:16 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 06:17:16 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 06:17:16 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 29 06:17:16 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 06:17:16 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 06:17:16 localhost kernel: hub 1-0:1.0: USB hub found
Nov 29 06:17:16 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 29 06:17:16 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 06:17:16 localhost kernel: usbserial: USB Serial support registered for generic
Nov 29 06:17:16 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 06:17:16 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 06:17:16 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 06:17:16 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 06:17:16 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 06:17:16 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 06:17:16 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 06:17:16 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:17:15 UTC (1764397035)
Nov 29 06:17:16 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 06:17:16 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 06:17:16 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 06:17:16 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 06:17:16 localhost kernel: usbcore: registered new interface driver usbhid
Nov 29 06:17:16 localhost kernel: usbhid: USB HID core driver
Nov 29 06:17:16 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 29 06:17:16 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 06:17:16 localhost kernel: Initializing XFRM netlink socket
Nov 29 06:17:16 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 29 06:17:16 localhost kernel: Segment Routing with IPv6
Nov 29 06:17:16 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 29 06:17:16 localhost kernel: mpls_gso: MPLS GSO support
Nov 29 06:17:16 localhost kernel: IPI shorthand broadcast: enabled
Nov 29 06:17:16 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 06:17:16 localhost kernel: AES CTR mode by8 optimization enabled
Nov 29 06:17:16 localhost kernel: sched_clock: Marking stable (3779013234, 197172749)->(4285373540, -309187557)
Nov 29 06:17:16 localhost kernel: registered taskstats version 1
Nov 29 06:17:16 localhost kernel: Loading compiled-in X.509 certificates
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 06:17:16 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:17:16 localhost kernel: page_owner is disabled
Nov 29 06:17:16 localhost kernel: Key type .fscrypt registered
Nov 29 06:17:16 localhost kernel: Key type fscrypt-provisioning registered
Nov 29 06:17:16 localhost kernel: Key type big_key registered
Nov 29 06:17:16 localhost kernel: Key type encrypted registered
Nov 29 06:17:16 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 06:17:16 localhost kernel: Loading compiled-in module X.509 certificates
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:17:16 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 29 06:17:16 localhost kernel: ima: No architecture policies found
Nov 29 06:17:16 localhost kernel: evm: Initialising EVM extended attributes:
Nov 29 06:17:16 localhost kernel: evm: security.selinux
Nov 29 06:17:16 localhost kernel: evm: security.SMACK64 (disabled)
Nov 29 06:17:16 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 06:17:16 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 06:17:16 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 06:17:16 localhost kernel: evm: security.apparmor (disabled)
Nov 29 06:17:16 localhost kernel: evm: security.ima
Nov 29 06:17:16 localhost kernel: evm: security.capability
Nov 29 06:17:16 localhost kernel: evm: HMAC attrs: 0x1
Nov 29 06:17:16 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 06:17:16 localhost kernel: Running certificate verification RSA selftest
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 06:17:16 localhost kernel: Running certificate verification ECDSA selftest
Nov 29 06:17:16 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 06:17:16 localhost kernel: clk: Disabling unused clocks
Nov 29 06:17:16 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 29 06:17:16 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 06:17:16 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 06:17:16 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 06:17:16 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 29 06:17:16 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 06:17:16 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 06:17:16 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 29 06:17:16 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 06:17:16 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 06:17:16 localhost kernel: Run /init as init process
Nov 29 06:17:16 localhost kernel:   with arguments:
Nov 29 06:17:16 localhost kernel:     /init
Nov 29 06:17:16 localhost kernel:   with environment:
Nov 29 06:17:16 localhost kernel:     HOME=/
Nov 29 06:17:16 localhost kernel:     TERM=linux
Nov 29 06:17:16 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 29 06:17:16 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 06:17:16 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 06:17:16 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:17:16 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:17:16 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:17:16 localhost systemd[1]: Running in initrd.
Nov 29 06:17:16 localhost systemd[1]: No hostname configured, using default hostname.
Nov 29 06:17:16 localhost systemd[1]: Hostname set to <localhost>.
Nov 29 06:17:16 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 29 06:17:16 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 29 06:17:16 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:17:16 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:17:16 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 29 06:17:16 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:17:16 localhost systemd[1]: Reached target Path Units.
Nov 29 06:17:16 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:17:16 localhost systemd[1]: Reached target Swaps.
Nov 29 06:17:16 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:17:16 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:17:16 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 29 06:17:16 localhost systemd[1]: Listening on Journal Socket.
Nov 29 06:17:16 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:17:16 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:17:16 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:17:16 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:17:16 localhost systemd[1]: Starting Journal Service...
Nov 29 06:17:16 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:17:16 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:17:16 localhost systemd[1]: Starting Create System Users...
Nov 29 06:17:16 localhost systemd[1]: Starting Setup Virtual Console...
Nov 29 06:17:16 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:17:16 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:17:16 localhost systemd[1]: Finished Create System Users.
Nov 29 06:17:16 localhost systemd-journald[310]: Journal started
Nov 29 06:17:16 localhost systemd-journald[310]: Runtime Journal (/run/log/journal/841b890998384df3bf7cbb9b0c2a4d0c) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:17:16 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Nov 29 06:17:16 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Nov 29 06:17:16 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 06:17:16 localhost systemd[1]: Started Journal Service.
Nov 29 06:17:16 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:17:16 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:17:16 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:17:16 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:17:16 localhost systemd[1]: Finished Setup Virtual Console.
Nov 29 06:17:16 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 06:17:16 localhost systemd[1]: Starting dracut cmdline hook...
Nov 29 06:17:16 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 06:17:16 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:17:16 localhost systemd[1]: Finished dracut cmdline hook.
Nov 29 06:17:16 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 29 06:17:16 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 06:17:16 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 29 06:17:16 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 06:17:16 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 29 06:17:16 localhost kernel: RPC: Registered udp transport module.
Nov 29 06:17:16 localhost kernel: RPC: Registered tcp transport module.
Nov 29 06:17:16 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 06:17:16 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 06:17:16 localhost rpc.statd[446]: Version 2.5.4 starting
Nov 29 06:17:16 localhost rpc.statd[446]: Initializing NSM state
Nov 29 06:17:16 localhost rpc.idmapd[451]: Setting log level to 0
Nov 29 06:17:17 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 29 06:17:17 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:17:17 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:17:17 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:17:17 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 29 06:17:17 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 29 06:17:17 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:17:17 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 29 06:17:17 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:17:17 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:17:17 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:17:17 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:17:17 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 29 06:17:17 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:17:17 localhost systemd[1]: Reached target Network.
Nov 29 06:17:17 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:17:17 localhost systemd[1]: Starting dracut initqueue hook...
Nov 29 06:17:17 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 29 06:17:17 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:17:17 localhost systemd[1]: Reached target Basic System.
Nov 29 06:17:17 localhost kernel: libata version 3.00 loaded.
Nov 29 06:17:17 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 29 06:17:17 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 06:17:17 localhost kernel: scsi host0: ata_piix
Nov 29 06:17:17 localhost kernel: scsi host1: ata_piix
Nov 29 06:17:17 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 06:17:17 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 06:17:17 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:17:17 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 06:17:17 localhost kernel:  vda: vda1
Nov 29 06:17:17 localhost kernel: ata1: found unknown device (class 0)
Nov 29 06:17:17 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 06:17:17 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 06:17:17 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:17:17 localhost systemd[1]: Reached target Initrd Root Device.
Nov 29 06:17:17 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 06:17:17 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 06:17:17 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 06:17:17 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 29 06:17:17 localhost systemd[1]: Finished dracut initqueue hook.
Nov 29 06:17:17 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:17:17 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 06:17:17 localhost systemd[1]: Reached target Remote File Systems.
Nov 29 06:17:17 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 29 06:17:17 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 29 06:17:17 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 06:17:17 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 06:17:17 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:17:17 localhost systemd[1]: Mounting /sysroot...
Nov 29 06:17:18 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 06:17:18 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 06:17:18 localhost kernel: XFS (vda1): Ending clean mount
Nov 29 06:17:18 localhost systemd[1]: Mounted /sysroot.
Nov 29 06:17:18 localhost systemd[1]: Reached target Initrd Root File System.
Nov 29 06:17:18 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 06:17:18 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 06:17:18 localhost systemd[1]: Reached target Initrd File Systems.
Nov 29 06:17:18 localhost systemd[1]: Reached target Initrd Default Target.
Nov 29 06:17:18 localhost systemd[1]: Starting dracut mount hook...
Nov 29 06:17:18 localhost systemd[1]: Finished dracut mount hook.
Nov 29 06:17:18 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 06:17:18 localhost rpc.idmapd[451]: exiting on signal 15
Nov 29 06:17:18 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 06:17:18 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 06:17:18 localhost systemd[1]: Stopped target Network.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Timer Units.
Nov 29 06:17:18 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 06:17:18 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Basic System.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Path Units.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Remote File Systems.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Slice Units.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Socket Units.
Nov 29 06:17:18 localhost systemd[1]: Stopped target System Initialization.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Local File Systems.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Swaps.
Nov 29 06:17:18 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut mount hook.
Nov 29 06:17:18 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 29 06:17:18 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 06:17:18 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 06:17:18 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 29 06:17:18 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 29 06:17:18 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 06:17:18 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 06:17:18 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 06:17:18 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 06:17:18 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 29 06:17:18 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 06:17:18 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 06:17:18 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Closed udev Control Socket.
Nov 29 06:17:18 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Closed udev Kernel Socket.
Nov 29 06:17:18 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 29 06:17:18 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 29 06:17:18 localhost systemd[1]: Starting Cleanup udev Database...
Nov 29 06:17:18 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 06:17:18 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 06:17:18 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Stopped Create System Users.
Nov 29 06:17:18 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 06:17:18 localhost systemd[1]: Finished Cleanup udev Database.
Nov 29 06:17:18 localhost systemd[1]: Reached target Switch Root.
Nov 29 06:17:18 localhost systemd[1]: Starting Switch Root...
Nov 29 06:17:18 localhost systemd[1]: Switching root.
Nov 29 06:17:18 localhost systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Nov 29 06:17:18 localhost systemd-journald[310]: Journal stopped
Nov 29 06:17:19 localhost kernel: audit: type=1404 audit(1764397038.839:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability open_perms=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:17:19 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:17:19 localhost kernel: audit: type=1403 audit(1764397038.973:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 06:17:19 localhost systemd[1]: Successfully loaded SELinux policy in 137.734ms.
Nov 29 06:17:19 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.988ms.
Nov 29 06:17:19 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:17:19 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:17:19 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:17:19 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:17:19 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Stopped Switch Root.
Nov 29 06:17:19 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 06:17:19 localhost systemd[1]: Created slice Slice /system/getty.
Nov 29 06:17:19 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 29 06:17:19 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 29 06:17:19 localhost systemd[1]: Created slice User and Session Slice.
Nov 29 06:17:19 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:17:19 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 29 06:17:19 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 06:17:19 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:17:19 localhost systemd[1]: Stopped target Switch Root.
Nov 29 06:17:19 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 29 06:17:19 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 29 06:17:19 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 29 06:17:19 localhost systemd[1]: Reached target Path Units.
Nov 29 06:17:19 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 29 06:17:19 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:17:19 localhost systemd[1]: Reached target Swaps.
Nov 29 06:17:19 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 29 06:17:19 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 29 06:17:19 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 29 06:17:19 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 29 06:17:19 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 29 06:17:19 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:17:19 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:17:19 localhost systemd[1]: Mounting Huge Pages File System...
Nov 29 06:17:19 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 29 06:17:19 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 29 06:17:19 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 29 06:17:19 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:17:19 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:17:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:17:19 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 29 06:17:19 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 29 06:17:19 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 29 06:17:19 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 06:17:19 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 29 06:17:19 localhost systemd[1]: Stopped Journal Service.
Nov 29 06:17:19 localhost systemd[1]: Starting Journal Service...
Nov 29 06:17:19 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:17:19 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 29 06:17:19 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:17:19 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 29 06:17:19 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 06:17:19 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:17:19 localhost kernel: fuse: init (API version 7.37)
Nov 29 06:17:19 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:17:19 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:19 localhost systemd-journald[681]: Journal started
Nov 29 06:17:19 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:17:19 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 29 06:17:19 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Started Journal Service.
Nov 29 06:17:19 localhost systemd[1]: Mounted Huge Pages File System.
Nov 29 06:17:19 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 06:17:19 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 29 06:17:19 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 29 06:17:19 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:17:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:17:19 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 06:17:19 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 29 06:17:19 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 06:17:19 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 06:17:19 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 06:17:19 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:17:19 localhost kernel: ACPI: bus type drm_connector registered
Nov 29 06:17:19 localhost systemd[1]: Mounting FUSE Control File System...
Nov 29 06:17:19 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:17:19 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 29 06:17:19 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 06:17:19 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 06:17:19 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 06:17:19 localhost systemd[1]: Starting Create System Users...
Nov 29 06:17:19 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 06:17:19 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 29 06:17:19 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:17:19 localhost systemd-journald[681]: Received client request to flush runtime journal.
Nov 29 06:17:19 localhost systemd[1]: Mounted FUSE Control File System.
Nov 29 06:17:19 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 06:17:19 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 06:17:19 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:17:19 localhost systemd[1]: Finished Create System Users.
Nov 29 06:17:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:17:19 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:17:19 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:17:19 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 06:17:19 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:17:19 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 06:17:19 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 06:17:19 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 06:17:19 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 06:17:19 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 06:17:19 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 06:17:19 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:17:19 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Nov 29 06:17:19 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 06:17:19 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:17:19 localhost systemd[1]: Starting Security Auditing Service...
Nov 29 06:17:19 localhost systemd[1]: Starting RPC Bind...
Nov 29 06:17:19 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 06:17:19 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 06:17:19 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 06:17:19 localhost systemd[1]: Started RPC Bind.
Nov 29 06:17:19 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 06:17:19 localhost augenrules[710]: /sbin/augenrules: No change
Nov 29 06:17:19 localhost augenrules[725]: No rules
Nov 29 06:17:19 localhost augenrules[725]: enabled 1
Nov 29 06:17:19 localhost augenrules[725]: failure 1
Nov 29 06:17:19 localhost augenrules[725]: pid 705
Nov 29 06:17:19 localhost augenrules[725]: rate_limit 0
Nov 29 06:17:19 localhost augenrules[725]: backlog_limit 8192
Nov 29 06:17:19 localhost augenrules[725]: lost 0
Nov 29 06:17:19 localhost augenrules[725]: backlog 0
Nov 29 06:17:19 localhost augenrules[725]: backlog_wait_time 60000
Nov 29 06:17:19 localhost augenrules[725]: backlog_wait_time_actual 0
Nov 29 06:17:19 localhost systemd[1]: Started Security Auditing Service.
Nov 29 06:17:19 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 06:17:20 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 06:17:20 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 06:17:20 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 29 06:17:20 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:17:20 localhost systemd[1]: Starting Update is Completed...
Nov 29 06:17:20 localhost systemd[1]: Finished Update is Completed.
Nov 29 06:17:20 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:17:20 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:17:20 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:17:20 localhost systemd[1]: Started dnf makecache --timer.
Nov 29 06:17:20 localhost systemd[1]: Started Daily rotation of log files.
Nov 29 06:17:20 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 06:17:20 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:17:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:17:20 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 06:17:20 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:17:20 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 29 06:17:20 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:17:20 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:17:20 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:17:20 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:17:20 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 06:17:20 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 29 06:17:20 localhost systemd[1]: Reached target Basic System.
Nov 29 06:17:20 localhost systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:17:20 localhost dbus-broker-lau[745]: Ready
Nov 29 06:17:20 localhost systemd[1]: Starting NTP client/server...
Nov 29 06:17:20 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 06:17:20 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 06:17:20 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 06:17:20 localhost systemd[1]: Started irqbalance daemon.
Nov 29 06:17:20 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 06:17:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:17:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:17:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:17:20 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 29 06:17:20 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 06:17:20 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 29 06:17:20 localhost systemd[1]: Starting User Login Management...
Nov 29 06:17:20 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 06:17:20 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 06:17:20 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 06:17:20 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 06:17:20 localhost chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 06:17:20 localhost chronyd[798]: Loaded 0 symmetric keys
Nov 29 06:17:20 localhost chronyd[798]: Using right/UTC timezone to obtain leap second data
Nov 29 06:17:20 localhost chronyd[798]: Loaded seccomp filter (level 2)
Nov 29 06:17:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 06:17:20 localhost systemd[1]: Started NTP client/server.
Nov 29 06:17:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 06:17:20 localhost systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 06:17:20 localhost systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 06:17:20 localhost systemd-logind[787]: New seat seat0.
Nov 29 06:17:20 localhost systemd[1]: Started User Login Management.
Nov 29 06:17:20 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 06:17:20 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 06:17:20 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 06:17:20 localhost kernel: kvm_amd: TSC scaling supported
Nov 29 06:17:20 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 29 06:17:20 localhost kernel: kvm_amd: Nested Paging enabled
Nov 29 06:17:20 localhost kernel: kvm_amd: LBR virtualization supported
Nov 29 06:17:20 localhost kernel: Console: switching to colour dummy device 80x25
Nov 29 06:17:20 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 06:17:20 localhost kernel: [drm] features: -context_init
Nov 29 06:17:20 localhost kernel: [drm] number of scanouts: 1
Nov 29 06:17:20 localhost kernel: [drm] number of cap sets: 0
Nov 29 06:17:20 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 06:17:20 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 06:17:20 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 29 06:17:20 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 06:17:20 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Nov 29 06:17:20 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 06:17:21 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:17:20 +0000. Up 9.21 seconds.
Nov 29 06:17:21 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 29 06:17:21 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 29 06:17:21 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp97c1m710.mount: Deactivated successfully.
Nov 29 06:17:21 localhost systemd[1]: Starting Hostname Service...
Nov 29 06:17:21 localhost systemd[1]: Started Hostname Service.
Nov 29 06:17:21 np0005539565.novalocal systemd-hostnamed[856]: Hostname set to <np0005539565.novalocal> (static)
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Reached target Preparation for Network.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5736] NetworkManager (version 1.54.1-1.el9) is starting... (boot:7d4aa993-1c07-4b6d-9a71-a6bd2dda9a8d)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5739] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5801] manager[0x5621e57bd080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5854] hostname: hostname: using hostnamed
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5855] hostname: static hostname changed from (none) to "np0005539565.novalocal"
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5859] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5951] manager[0x5621e57bd080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.5952] manager[0x5621e57bd080]: rfkill: WWAN hardware radio set enabled
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6142] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6142] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6143] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6143] manager: Networking is enabled by state file
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6146] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6157] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6179] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6192] dhcp: init: Using DHCP client 'internal'
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6195] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6208] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6215] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6223] device (lo): Activation: starting connection 'lo' (45524fb0-33f8-4a9e-bdee-77d73a355010)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6232] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6234] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6260] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6263] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6265] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6267] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6268] device (eth0): carrier: link connected
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6271] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6277] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6281] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6284] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6286] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6288] manager: NetworkManager state is now CONNECTING
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6288] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6294] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6310] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Started Network Manager.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Reached target Network.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6453] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6456] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:17:21 np0005539565.novalocal NetworkManager[860]: <info>  [1764397041.6463] device (lo): Activation: successful, device activated.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Reached target NFS client services.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: Reached target Remote File Systems.
Nov 29 06:17:21 np0005539565.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3138] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3150] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3181] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3251] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3252] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3255] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3257] device (eth0): Activation: successful, device activated.
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3262] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:17:25 np0005539565.novalocal NetworkManager[860]: <info>  [1764397045.3264] manager: startup complete
Nov 29 06:17:25 np0005539565.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:17:25 np0005539565.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:17:25 +0000. Up 13.89 seconds.
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.144         | 255.255.255.0 | global | fa:16:3e:97:ff:c7 |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe97:ffc7/64 |       .       |  link  | fa:16:3e:97:ff:c7 |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 06:17:25 np0005539565.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: new group: name=cloud-user, GID=1001
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: add 'cloud-user' to group 'adm'
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: add 'cloud-user' to group 'systemd-journal'
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: add 'cloud-user' to shadow group 'adm'
Nov 29 06:17:26 np0005539565.novalocal useradd[991]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Generating public/private rsa key pair.
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: SHA256:Qry9iG7bkEOvfR6YI/FDoQe0ALElDQrv3uM4g4/e78o root@np0005539565.novalocal
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +---[RSA 3072]----+
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |*=o .            |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |o=.o o           |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |o . o +          |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | .   + +         |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |  . + = S        |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | . o O = .       |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | .. O O o        |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |..=+.O o..       |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |oooEO+oo.        |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: SHA256:3C00YCxaTmy/TEMTquYEBnNvWlC89nysfugOjq/W7hM root@np0005539565.novalocal
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +---[ECDSA 256]---+
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |o oo.. .+.       |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | + o. *o+.       |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |  o +B.+ .o      |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | . =+...+o o     |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |  ..+o +Soo .    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |   +E o =  .     |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |   .o. +         |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |  .oo.o .        |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | .o===+.         |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: SHA256:f0jJLQI4wDtff/Eq9u7HLkYv9gCwnaKIp81KqhfSOUI root@np0005539565.novalocal
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +--[ED25519 256]--+
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | ..              |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |  .. .           |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |   .o o          |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: | Eo  ..= o.o     |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |.. + .o.S =o.    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |o.=... ..=oo.    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |.+oo.    o++.    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |o=.     o *o+    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: |*oo    . B+*o    |
Nov 29 06:17:27 np0005539565.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Reached target Network is Online.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting System Logging Service...
Nov 29 06:17:27 np0005539565.novalocal sm-notify[1007]: Version 2.5.4 starting
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Permit User Sessions...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Finished Permit User Sessions.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started Command Scheduler.
Nov 29 06:17:27 np0005539565.novalocal sshd[1009]: Server listening on 0.0.0.0 port 22.
Nov 29 06:17:27 np0005539565.novalocal sshd[1009]: Server listening on :: port 22.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started Getty on tty1.
Nov 29 06:17:27 np0005539565.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Nov 29 06:17:27 np0005539565.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 29 06:17:27 np0005539565.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 25% if used.)
Nov 29 06:17:27 np0005539565.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Reached target Login Prompts.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 29 06:17:27 np0005539565.novalocal rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Nov 29 06:17:27 np0005539565.novalocal rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Started System Logging Service.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Reached target Multi-User System.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1033]: Connection reset by 38.102.83.114 port 47210 [preauth]
Nov 29 06:17:27 np0005539565.novalocal rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1048]: Unable to negotiate with 38.102.83.114 port 47226: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1056]: Connection reset by 38.102.83.114 port 47230 [preauth]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1066]: Unable to negotiate with 38.102.83.114 port 47242: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1074]: Unable to negotiate with 38.102.83.114 port 47250: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 29 06:17:27 np0005539565.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Nov 29 06:17:27 np0005539565.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1092]: Unable to negotiate with 38.102.83.114 port 47286: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1107]: Unable to negotiate with 38.102.83.114 port 47298: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1080]: Connection closed by 38.102.83.114 port 47262 [preauth]
Nov 29 06:17:27 np0005539565.novalocal cloud-init[1146]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:17:27 +0000. Up 15.72 seconds.
Nov 29 06:17:27 np0005539565.novalocal sshd-session[1084]: Connection closed by 38.102.83.114 port 47272 [preauth]
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 06:17:27 np0005539565.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 06:17:27 np0005539565.novalocal dracut[1288]: dracut-057-102.git20250818.el9
Nov 29 06:17:27 np0005539565.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:17:27 +0000. Up 16.11 seconds.
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1306]: #############################################################
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1307]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1309]: 256 SHA256:3C00YCxaTmy/TEMTquYEBnNvWlC89nysfugOjq/W7hM root@np0005539565.novalocal (ECDSA)
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1313]: 256 SHA256:f0jJLQI4wDtff/Eq9u7HLkYv9gCwnaKIp81KqhfSOUI root@np0005539565.novalocal (ED25519)
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1319]: 3072 SHA256:Qry9iG7bkEOvfR6YI/FDoQe0ALElDQrv3uM4g4/e78o root@np0005539565.novalocal (RSA)
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1323]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1325]: #############################################################
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 06:17:28 np0005539565.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:17:28 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 16.33 seconds
Nov 29 06:17:28 np0005539565.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 06:17:28 np0005539565.novalocal systemd[1]: Reached target Cloud-init target.
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:17:28 np0005539565.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: memstrack is not available
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: memstrack is not available
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 06:17:29 np0005539565.novalocal dracut[1290]: *** Including module: systemd ***
Nov 29 06:17:30 np0005539565.novalocal dracut[1290]: *** Including module: fips ***
Nov 29 06:17:30 np0005539565.novalocal chronyd[798]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Nov 29 06:17:30 np0005539565.novalocal chronyd[798]: System clock wrong by 1.525819 seconds
Nov 29 06:17:31 np0005539565.novalocal chronyd[798]: System clock was stepped by 1.525819 seconds
Nov 29 06:17:31 np0005539565.novalocal chronyd[798]: System clock TAI offset set to 37 seconds
Nov 29 06:17:32 np0005539565.novalocal dracut[1290]: *** Including module: systemd-initrd ***
Nov 29 06:17:32 np0005539565.novalocal dracut[1290]: *** Including module: i18n ***
Nov 29 06:17:32 np0005539565.novalocal dracut[1290]: *** Including module: drm ***
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 35 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 33 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 31 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 28 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 34 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 32 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 30 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 06:17:32 np0005539565.novalocal irqbalance[781]: IRQ 29 affinity is now unmanaged
Nov 29 06:17:32 np0005539565.novalocal dracut[1290]: *** Including module: prefixdevname ***
Nov 29 06:17:32 np0005539565.novalocal dracut[1290]: *** Including module: kernel-modules ***
Nov 29 06:17:32 np0005539565.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: kernel-modules-extra ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: qemu ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: fstab-sys ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: rootfs-block ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: terminfo ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: udev-rules ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: Skipping udev rule: 91-permissions.rules
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: virtiofs ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: dracut-systemd ***
Nov 29 06:17:33 np0005539565.novalocal dracut[1290]: *** Including module: usrmount ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: base ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: fs-lib ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: kdumpbase ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:   microcode_ctl module: mangling fw_dir
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: openssl ***
Nov 29 06:17:34 np0005539565.novalocal dracut[1290]: *** Including module: shutdown ***
Nov 29 06:17:35 np0005539565.novalocal dracut[1290]: *** Including module: squash ***
Nov 29 06:17:35 np0005539565.novalocal dracut[1290]: *** Including modules done ***
Nov 29 06:17:35 np0005539565.novalocal dracut[1290]: *** Installing kernel module dependencies ***
Nov 29 06:17:35 np0005539565.novalocal dracut[1290]: *** Installing kernel module dependencies done ***
Nov 29 06:17:35 np0005539565.novalocal dracut[1290]: *** Resolving executable dependencies ***
Nov 29 06:17:36 np0005539565.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: *** Resolving executable dependencies done ***
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: *** Generating early-microcode cpio image ***
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: *** Store current command line parameters ***
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: Stored kernel commandline:
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: No dracut internal kernel commandline stored in the initramfs
Nov 29 06:17:37 np0005539565.novalocal dracut[1290]: *** Install squash loader ***
Nov 29 06:17:38 np0005539565.novalocal dracut[1290]: *** Squashing the files inside the initramfs ***
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: *** Squashing the files inside the initramfs done ***
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: *** Hardlinking files ***
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Mode:           real
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Files:          50
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Linked:         0 files
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Compared:       0 xattrs
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Compared:       0 files
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Saved:          0 B
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: Duration:       0.001160 seconds
Nov 29 06:17:39 np0005539565.novalocal dracut[1290]: *** Hardlinking files done ***
Nov 29 06:17:40 np0005539565.novalocal dracut[1290]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 06:17:40 np0005539565.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Nov 29 06:17:40 np0005539565.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Nov 29 06:17:40 np0005539565.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 29 06:17:40 np0005539565.novalocal systemd[1]: Startup finished in 4.220s (kernel) + 2.845s (initrd) + 20.574s (userspace) = 27.641s.
Nov 29 06:17:44 np0005539565.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 46830 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 29 06:17:44 np0005539565.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 29 06:17:44 np0005539565.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 06:17:44 np0005539565.novalocal systemd-logind[787]: New session 1 of user zuul.
Nov 29 06:17:44 np0005539565.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 06:17:44 np0005539565.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 29 06:17:44 np0005539565.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Queued start job for default target Main User Target.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Created slice User Application Slice.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Reached target Paths.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Reached target Timers.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Reached target Sockets.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Reached target Basic System.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Reached target Main User Target.
Nov 29 06:17:45 np0005539565.novalocal systemd[4302]: Startup finished in 107ms.
Nov 29 06:17:45 np0005539565.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 29 06:17:45 np0005539565.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 29 06:17:45 np0005539565.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:17:45 np0005539565.novalocal python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:17:51 np0005539565.novalocal python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:17:53 np0005539565.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:17:57 np0005539565.novalocal python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:17:59 np0005539565.novalocal python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 06:18:01 np0005539565.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO009q8XvhgDCH/CYntn/Nj7apUjGycgerKyxcYKwqlrsQqtgZ+4b1AwoiDJ6ACRb/89P698Zu8SgdnR/v9pn0LFMXEa2g1lWeFaQovDGpqBz4mYtyZIbvWOJAPw3VQm6HJnXakvw8LrVDql95W2i6anqAeBFXq/hs4EAkNzhNR4pua8lJHwAgkexNQ+7fdWwTNsd+E5A23VTA0NzgPyGjZyo5PcuqueNFdk/JaekH4GB/BVWyh0KIH6JnPu98++RaPl1C8BRj9wWE/zvooiZsXPQCOfW1oql3StPekqBwJti2jRygs685e4eHPE+tO1VzwfPTyXZfQAe9dOPlZsWdnKtIw5H/2tajn7DELzA77VUbsuA1U+jNJ9sE0PwaWj6JsBqDB9tBbb31S7B12ZvrS250Qc0Q/c4Qv/WdSE87jti5CrwLfsjPX2DOo37gqMfu2EB90zV1L+h9vMlmkg3g8rOzpQK5jspXBfUIO2Pq0Nyyj9IORN7HLSKyZmK+teE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:01 np0005539565.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:02 np0005539565.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:02 np0005539565.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397081.906025-253-25689854358807/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa follow=False checksum=d281ebb5f24e5d8783693a36170923a3c25cbd23 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:03 np0005539565.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:03 np0005539565.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397082.9058464-308-260126645446679/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa.pub follow=False checksum=96e05a798ef30e23c5626e997638db7097fc90b9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:04 np0005539565.novalocal python3[4974]: ansible-ping Invoked with data=pong
Nov 29 06:18:06 np0005539565.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:18:08 np0005539565.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 06:18:09 np0005539565.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:09 np0005539565.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:09 np0005539565.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:10 np0005539565.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:10 np0005539565.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:10 np0005539565.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:12 np0005539565.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getfnirytbchfgqagqvwabrowzfyivvp ; /usr/bin/python3'
Nov 29 06:18:12 np0005539565.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:12 np0005539565.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:12 np0005539565.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:13 np0005539565.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsjxxlhmqpazgnfoceilflvhxslumghp ; /usr/bin/python3'
Nov 29 06:18:13 np0005539565.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:13 np0005539565.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:13 np0005539565.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:13 np0005539565.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoebbnpzsfugrikedthbncevxcjekhhl ; /usr/bin/python3'
Nov 29 06:18:13 np0005539565.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:13 np0005539565.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397092.7806308-34-78024703525908/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:13 np0005539565.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:14 np0005539565.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:14 np0005539565.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:15 np0005539565.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:15 np0005539565.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:15 np0005539565.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:15 np0005539565.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:16 np0005539565.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:16 np0005539565.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:16 np0005539565.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:17 np0005539565.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:17 np0005539565.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:17 np0005539565.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:18 np0005539565.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:18 np0005539565.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:18 np0005539565.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:18 np0005539565.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:19 np0005539565.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:19 np0005539565.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:19 np0005539565.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:20 np0005539565.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:20 np0005539565.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:20 np0005539565.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:21 np0005539565.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:21 np0005539565.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:21 np0005539565.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:22 np0005539565.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:18:24 np0005539565.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojhjtdedhlslvipyrvghdmvwzciotdb ; /usr/bin/python3'
Nov 29 06:18:24 np0005539565.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:24 np0005539565.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 06:18:24 np0005539565.novalocal systemd[1]: Starting Time & Date Service...
Nov 29 06:18:24 np0005539565.novalocal systemd[1]: Started Time & Date Service.
Nov 29 06:18:24 np0005539565.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Nov 29 06:18:24 np0005539565.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 np0005539565.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvjxffnkkukscieifsbepwkrrygnhkm ; /usr/bin/python3'
Nov 29 06:18:26 np0005539565.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:26 np0005539565.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:26 np0005539565.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 np0005539565.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:27 np0005539565.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397106.427033-255-210584390094441/source _original_basename=tmplkztt5y9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:27 np0005539565.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:27 np0005539565.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397107.351618-305-205537382018776/source _original_basename=tmpqdz1y_wf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:28 np0005539565.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoaarmkozcgpnjecwljcsdvpuasygarc ; /usr/bin/python3'
Nov 29 06:18:28 np0005539565.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:28 np0005539565.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:28 np0005539565.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:29 np0005539565.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltjucpznlmlbusifxiesqxyhlrarjiu ; /usr/bin/python3'
Nov 29 06:18:29 np0005539565.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:29 np0005539565.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397108.5111008-385-272003815597344/source _original_basename=tmp0l4aqwzg follow=False checksum=d3787dbc1d919dd7098cc7939d07e9b9a9d1522d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:29 np0005539565.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:29 np0005539565.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:30 np0005539565.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:30 np0005539565.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcxgkebyjancakndyicgekzcnlzummk ; /usr/bin/python3'
Nov 29 06:18:30 np0005539565.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:30 np0005539565.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:18:30 np0005539565.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:30 np0005539565.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dliwebhfozcoehmhswlcjbqzkngniprf ; /usr/bin/python3'
Nov 29 06:18:30 np0005539565.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:30 np0005539565.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397110.2804787-454-141795246795060/source _original_basename=tmpzaya1skk follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:30 np0005539565.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:31 np0005539565.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiunxvkjldjpwckwgkgdxssshdkwtjku ; /usr/bin/python3'
Nov 29 06:18:31 np0005539565.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:31 np0005539565.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-beaa-304e-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:31 np0005539565.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:32 np0005539565.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-beaa-304e-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 06:18:33 np0005539565.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:38 np0005539565.novalocal chronyd[798]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Nov 29 06:18:43 np0005539565.novalocal sshd-session[6918]: Invalid user support from 78.128.112.74 port 36398
Nov 29 06:18:43 np0005539565.novalocal sshd-session[6918]: Connection closed by invalid user support 78.128.112.74 port 36398 [preauth]
Nov 29 06:18:51 np0005539565.novalocal sudo[6943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfeivgcidukoogqchqnwnsoqsehmsrd ; /usr/bin/python3'
Nov 29 06:18:51 np0005539565.novalocal sudo[6943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:51 np0005539565.novalocal python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:18:51 np0005539565.novalocal sudo[6943]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:54 np0005539565.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 06:19:52 np0005539565.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 46830:11: disconnected by user
Nov 29 06:19:52 np0005539565.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 46830
Nov 29 06:19:52 np0005539565.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:19:52 np0005539565.novalocal systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Nov 29 06:19:57 np0005539565.novalocal systemd[4302]: Starting Mark boot as successful...
Nov 29 06:19:57 np0005539565.novalocal systemd[4302]: Finished Mark boot as successful.
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 06:20:27 np0005539565.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 06:20:27 np0005539565.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4218] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:20:27 np0005539565.novalocal systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4533] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4569] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4574] device (eth1): carrier: link connected
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4577] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4584] policy: auto-activating connection 'Wired connection 1' (266654df-4486-3314-93a9-b69e59309012)
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4588] device (eth1): Activation: starting connection 'Wired connection 1' (266654df-4486-3314-93a9-b69e59309012)
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4589] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4592] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4596] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:20:27 np0005539565.novalocal NetworkManager[860]: <info>  [1764397227.4601] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:20:28 np0005539565.novalocal sshd-session[6953]: Accepted publickey for zuul from 38.102.83.114 port 42416 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:20:28 np0005539565.novalocal systemd-logind[787]: New session 3 of user zuul.
Nov 29 06:20:28 np0005539565.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 29 06:20:28 np0005539565.novalocal sshd-session[6953]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:20:28 np0005539565.novalocal python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-957e-49b2-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:20:38 np0005539565.novalocal sudo[7058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zilgkhfmkqojpnjxhcxxannaxlqnfwwr ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:20:38 np0005539565.novalocal sudo[7058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:38 np0005539565.novalocal python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:20:38 np0005539565.novalocal sudo[7058]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:38 np0005539565.novalocal sudo[7131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vddcemetnuidmealujxrkvgpoupjtgjo ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:20:38 np0005539565.novalocal sudo[7131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:38 np0005539565.novalocal python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397238.251819-206-50303488201345/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=4e074cdfafd31d6c8286d8ee867b60c0cf0cc9cd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:20:39 np0005539565.novalocal sudo[7131]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:39 np0005539565.novalocal sudo[7181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfkkeuwlepqyldafuorgthfpclysrtka ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:20:39 np0005539565.novalocal sudo[7181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:39 np0005539565.novalocal python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Stopping Network Manager...
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6222] caught SIGTERM, shutting down normally.
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6247] dhcp4 (eth0): canceled DHCP transaction
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6248] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6248] dhcp4 (eth0): state changed no lease
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6252] manager: NetworkManager state is now CONNECTING
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6344] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.6345] dhcp4 (eth1): state changed no lease
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:20:39 np0005539565.novalocal NetworkManager[860]: <info>  [1764397239.9160] exiting (success)
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Stopped Network Manager.
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: NetworkManager.service: Consumed 1.592s CPU time, 10.1M memory peak.
Nov 29 06:20:39 np0005539565.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397239.9999] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7d4aa993-1c07-4b6d-9a71-a6bd2dda9a8d)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.0000] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.0086] manager[0x558264d29070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:20:40 np0005539565.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:20:40 np0005539565.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1211] hostname: hostname: using hostnamed
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1211] hostname: static hostname changed from (none) to "np0005539565.novalocal"
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1219] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1225] manager[0x558264d29070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1225] manager[0x558264d29070]: rfkill: WWAN hardware radio set enabled
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1257] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1258] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1258] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1259] manager: Networking is enabled by state file
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1262] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1268] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1322] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1344] dhcp: init: Using DHCP client 'internal'
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1350] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1359] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1369] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1381] device (lo): Activation: starting connection 'lo' (45524fb0-33f8-4a9e-bdee-77d73a355010)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1391] device (eth0): carrier: link connected
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1399] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1406] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1407] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1417] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1428] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1437] device (eth1): carrier: link connected
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1445] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1452] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (266654df-4486-3314-93a9-b69e59309012) (indicated)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1453] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1461] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1472] device (eth1): Activation: starting connection 'Wired connection 1' (266654df-4486-3314-93a9-b69e59309012)
Nov 29 06:20:40 np0005539565.novalocal systemd[1]: Started Network Manager.
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1480] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1486] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1490] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1494] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1497] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1502] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1506] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1510] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1515] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1525] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1529] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1542] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1547] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1571] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1579] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1588] device (lo): Activation: successful, device activated.
Nov 29 06:20:40 np0005539565.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1599] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.1611] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:20:40 np0005539565.novalocal sudo[7181]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.3983] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.4728] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.4732] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.4739] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.4747] device (eth0): Activation: successful, device activated.
Nov 29 06:20:40 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397240.4758] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:20:40 np0005539565.novalocal python3[7249]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-957e-49b2-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:20:50 np0005539565.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:21:10 np0005539565.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.2969] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:21:25 np0005539565.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:21:25 np0005539565.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3386] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3390] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3405] device (eth1): Activation: successful, device activated.
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3419] manager: startup complete
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3421] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <warn>  [1764397285.3432] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3445] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3564] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3564] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3564] dhcp4 (eth1): state changed no lease
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3581] policy: auto-activating connection 'ci-private-network' (76219567-d3ca-5abd-b302-35afcca51805)
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3587] device (eth1): Activation: starting connection 'ci-private-network' (76219567-d3ca-5abd-b302-35afcca51805)
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3588] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3590] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3599] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.3608] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.6976] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.6981] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:21:25 np0005539565.novalocal NetworkManager[7200]: <info>  [1764397285.6995] device (eth1): Activation: successful, device activated.
Nov 29 06:21:35 np0005539565.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:21:40 np0005539565.novalocal sshd-session[6956]: Received disconnect from 38.102.83.114 port 42416:11: disconnected by user
Nov 29 06:21:40 np0005539565.novalocal sshd-session[6956]: Disconnected from user zuul 38.102.83.114 port 42416
Nov 29 06:21:40 np0005539565.novalocal sshd-session[6953]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:21:40 np0005539565.novalocal systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Nov 29 06:21:40 np0005539565.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 06:21:40 np0005539565.novalocal systemd[1]: session-3.scope: Consumed 1.802s CPU time.
Nov 29 06:21:40 np0005539565.novalocal systemd-logind[787]: Removed session 3.
Nov 29 06:21:57 np0005539565.novalocal sshd-session[7296]: Accepted publickey for zuul from 38.102.83.114 port 40794 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:21:57 np0005539565.novalocal systemd-logind[787]: New session 4 of user zuul.
Nov 29 06:21:57 np0005539565.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 29 06:21:57 np0005539565.novalocal sshd-session[7296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:21:57 np0005539565.novalocal sudo[7375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkcarrxgqvhhrbvemwozgqcnnwzzdnq ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:21:57 np0005539565.novalocal sudo[7375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:21:58 np0005539565.novalocal python3[7377]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:21:58 np0005539565.novalocal sudo[7375]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:58 np0005539565.novalocal sudo[7448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohfijcukelwjfjqajxghwryotjzyfaxg ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:21:58 np0005539565.novalocal sudo[7448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:21:58 np0005539565.novalocal python3[7450]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397317.7225409-373-194225482441264/source _original_basename=tmpgqb3omwy follow=False checksum=c97a8eb9e2d79dc37ef10c85c5553d4edb376b92 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:21:58 np0005539565.novalocal sudo[7448]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:00 np0005539565.novalocal sshd-session[7299]: Connection closed by 38.102.83.114 port 40794
Nov 29 06:22:00 np0005539565.novalocal sshd-session[7296]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:22:00 np0005539565.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 06:22:00 np0005539565.novalocal systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Nov 29 06:22:00 np0005539565.novalocal systemd-logind[787]: Removed session 4.
Nov 29 06:22:57 np0005539565.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Nov 29 06:22:57 np0005539565.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 06:22:57 np0005539565.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 06:24:00 np0005539565.novalocal chronyd[798]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 29 06:29:00 np0005539565.novalocal sshd-session[7482]: Accepted publickey for zuul from 38.102.83.114 port 46868 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:29:00 np0005539565.novalocal systemd-logind[787]: New session 5 of user zuul.
Nov 29 06:29:00 np0005539565.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 29 06:29:00 np0005539565.novalocal sshd-session[7482]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:29:00 np0005539565.novalocal sudo[7509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitamvnrhenlnkldxubvufmwtjdrsqwv ; /usr/bin/python3'
Nov 29 06:29:00 np0005539565.novalocal sudo[7509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:00 np0005539565.novalocal python3[7511]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000ca8-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:00 np0005539565.novalocal sudo[7509]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:01 np0005539565.novalocal sudo[7538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndsckgdmbsokfszhavfeaiilskwhfvf ; /usr/bin/python3'
Nov 29 06:29:01 np0005539565.novalocal sudo[7538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:01 np0005539565.novalocal python3[7540]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:01 np0005539565.novalocal sudo[7538]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:01 np0005539565.novalocal sudo[7564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmchdjpahkilgebzursqugcxvtfpmvbl ; /usr/bin/python3'
Nov 29 06:29:01 np0005539565.novalocal sudo[7564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:01 np0005539565.novalocal python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:01 np0005539565.novalocal sudo[7564]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:01 np0005539565.novalocal sudo[7590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiyammiuwplrrludwsurzwmnklwxjae ; /usr/bin/python3'
Nov 29 06:29:01 np0005539565.novalocal sudo[7590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:01 np0005539565.novalocal python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:01 np0005539565.novalocal sudo[7590]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:02 np0005539565.novalocal sudo[7616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govxrhmzsoghtmtyjgbgnoeqfrjoewfo ; /usr/bin/python3'
Nov 29 06:29:02 np0005539565.novalocal sudo[7616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:02 np0005539565.novalocal python3[7618]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:02 np0005539565.novalocal sudo[7616]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:02 np0005539565.novalocal sudo[7642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgqdskffxtyopbesqdqjylerqdsktwf ; /usr/bin/python3'
Nov 29 06:29:02 np0005539565.novalocal sudo[7642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:02 np0005539565.novalocal python3[7644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:02 np0005539565.novalocal sudo[7642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:03 np0005539565.novalocal sudo[7720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyrcflqjknzyecpuiicuhxrjhninbfv ; /usr/bin/python3'
Nov 29 06:29:03 np0005539565.novalocal sudo[7720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:03 np0005539565.novalocal python3[7722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:29:03 np0005539565.novalocal sudo[7720]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:03 np0005539565.novalocal sudo[7793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbxvoarfifkxsgtvlhpsumwyrgkocpc ; /usr/bin/python3'
Nov 29 06:29:03 np0005539565.novalocal sudo[7793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:04 np0005539565.novalocal python3[7795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397743.4167497-371-9596252235664/source _original_basename=tmpm2ooef55 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:04 np0005539565.novalocal sudo[7793]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:05 np0005539565.novalocal sudo[7843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjfhepbgpnmwdukfbfjaoogubhppbdxt ; /usr/bin/python3'
Nov 29 06:29:05 np0005539565.novalocal sudo[7843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:05 np0005539565.novalocal python3[7845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:29:05 np0005539565.novalocal systemd[1]: Reloading.
Nov 29 06:29:05 np0005539565.novalocal systemd-rc-local-generator[7863]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:29:05 np0005539565.novalocal sudo[7843]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:43 np0005539565.novalocal sudo[7899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiasrihgqkudejjesvdwsjktdeblzkva ; /usr/bin/python3'
Nov 29 06:29:43 np0005539565.novalocal sudo[7899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:44 np0005539565.novalocal python3[7901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 06:29:44 np0005539565.novalocal sudo[7899]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:46 np0005539565.novalocal sudo[7925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlayuaxqdipninlkanoqtpcfblhhptx ; /usr/bin/python3'
Nov 29 06:29:46 np0005539565.novalocal sudo[7925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:46 np0005539565.novalocal python3[7927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:46 np0005539565.novalocal sudo[7925]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:46 np0005539565.novalocal sudo[7953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjhcelndajrwnpnjwdtleaiyruuyhmoj ; /usr/bin/python3'
Nov 29 06:29:46 np0005539565.novalocal sudo[7953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:47 np0005539565.novalocal python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:47 np0005539565.novalocal sudo[7953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:47 np0005539565.novalocal sudo[7981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sydgjvkmwvoitbdnxhfawyfehlgrooiu ; /usr/bin/python3'
Nov 29 06:29:47 np0005539565.novalocal sudo[7981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:47 np0005539565.novalocal python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:47 np0005539565.novalocal sudo[7981]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:47 np0005539565.novalocal sudo[8009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yruknrgsyrvqwhmpsewjklhajqwodwyb ; /usr/bin/python3'
Nov 29 06:29:47 np0005539565.novalocal sudo[8009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:47 np0005539565.novalocal python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:47 np0005539565.novalocal sudo[8009]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:48 np0005539565.novalocal python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000caf-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:48 np0005539565.novalocal python3[8068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:29:51 np0005539565.novalocal sshd-session[7485]: Connection closed by 38.102.83.114 port 46868
Nov 29 06:29:51 np0005539565.novalocal sshd-session[7482]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:29:51 np0005539565.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 06:29:51 np0005539565.novalocal systemd[1]: session-5.scope: Consumed 4.422s CPU time.
Nov 29 06:29:51 np0005539565.novalocal systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Nov 29 06:29:51 np0005539565.novalocal systemd-logind[787]: Removed session 5.
Nov 29 06:29:53 np0005539565.novalocal sshd-session[8072]: Accepted publickey for zuul from 38.102.83.114 port 48196 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:29:53 np0005539565.novalocal systemd-logind[787]: New session 6 of user zuul.
Nov 29 06:29:53 np0005539565.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 29 06:29:53 np0005539565.novalocal sshd-session[8072]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:29:53 np0005539565.novalocal sudo[8099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsyvpbvjhcsbjillcpwrelppfazszab ; /usr/bin/python3'
Nov 29 06:29:53 np0005539565.novalocal sudo[8099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:53 np0005539565.novalocal python3[8101]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:30:17 np0005539565.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:30:27 np0005539565.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:30:37 np0005539565.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:30:38 np0005539565.novalocal setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 06:30:38 np0005539565.novalocal setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:30:52 np0005539565.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:31:11 np0005539565.novalocal dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:31:11 np0005539565.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:31:11 np0005539565.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:31:11 np0005539565.novalocal systemd[1]: Reloading.
Nov 29 06:31:11 np0005539565.novalocal systemd-rc-local-generator[8925]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:31:12 np0005539565.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:31:13 np0005539565.novalocal sudo[8099]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:14 np0005539565.novalocal python3[10691]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-59ff-55d2-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:31:15 np0005539565.novalocal kernel: evm: overlay not supported
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Nov 29 06:31:15 np0005539565.novalocal dbus-broker-launch[11888]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 06:31:15 np0005539565.novalocal dbus-broker-launch[11888]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: Started D-Bus User Message Bus.
Nov 29 06:31:15 np0005539565.novalocal dbus-broker-lau[11888]: Ready
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: Created slice Slice /user.
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: podman-11741.scope: unit configures an IP firewall, but not running as root.
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: Started podman-11741.scope.
Nov 29 06:31:15 np0005539565.novalocal systemd[4302]: Started podman-pause-d4c4499e.scope.
Nov 29 06:31:16 np0005539565.novalocal sudo[12659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxljbzrvrxajdrvzndvjskqraaduypcn ; /usr/bin/python3'
Nov 29 06:31:16 np0005539565.novalocal sudo[12659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:16 np0005539565.novalocal python3[12684]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.39:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.39:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:16 np0005539565.novalocal python3[12684]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 06:31:16 np0005539565.novalocal sudo[12659]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:16 np0005539565.novalocal sshd-session[8075]: Connection closed by 38.102.83.114 port 48196
Nov 29 06:31:16 np0005539565.novalocal sshd-session[8072]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:31:16 np0005539565.novalocal systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Nov 29 06:31:16 np0005539565.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 06:31:16 np0005539565.novalocal systemd[1]: session-6.scope: Consumed 1min 9.164s CPU time.
Nov 29 06:31:16 np0005539565.novalocal systemd-logind[787]: Removed session 6.
Nov 29 06:31:35 np0005539565.novalocal sshd-session[22502]: Unable to negotiate with 38.102.83.151 port 43290: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 06:31:35 np0005539565.novalocal sshd-session[22501]: Unable to negotiate with 38.102.83.151 port 43316: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 06:31:35 np0005539565.novalocal sshd-session[22508]: Connection closed by 38.102.83.151 port 43272 [preauth]
Nov 29 06:31:35 np0005539565.novalocal sshd-session[22504]: Connection closed by 38.102.83.151 port 43280 [preauth]
Nov 29 06:31:35 np0005539565.novalocal sshd-session[22505]: Unable to negotiate with 38.102.83.151 port 43304: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 06:31:40 np0005539565.novalocal sshd-session[24604]: Accepted publickey for zuul from 38.102.83.114 port 34542 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:31:40 np0005539565.novalocal systemd-logind[787]: New session 7 of user zuul.
Nov 29 06:31:40 np0005539565.novalocal systemd[1]: Started Session 7 of User zuul.
Nov 29 06:31:40 np0005539565.novalocal sshd-session[24604]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:31:41 np0005539565.novalocal python3[24735]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:31:41 np0005539565.novalocal sudo[24951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigpzkkeqyuqlgfzibobobgglqfozrra ; /usr/bin/python3'
Nov 29 06:31:41 np0005539565.novalocal sudo[24951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:41 np0005539565.novalocal python3[24963]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:31:41 np0005539565.novalocal sudo[24951]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:42 np0005539565.novalocal sudo[25398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgoojswsqfwxctbyohtsdkpnvbxbmhfc ; /usr/bin/python3'
Nov 29 06:31:42 np0005539565.novalocal sudo[25398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:42 np0005539565.novalocal python3[25409]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539565.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 06:31:42 np0005539565.novalocal useradd[25491]: new group: name=cloud-admin, GID=1002
Nov 29 06:31:42 np0005539565.novalocal useradd[25491]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 29 06:31:42 np0005539565.novalocal sudo[25398]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:42 np0005539565.novalocal sudo[25642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwcixawjispofuqfuiepbyudlsfsfov ; /usr/bin/python3'
Nov 29 06:31:42 np0005539565.novalocal sudo[25642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:42 np0005539565.novalocal python3[25651]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:31:42 np0005539565.novalocal sudo[25642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:43 np0005539565.novalocal sudo[25945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkumkaqxefxixjqnejlqwsgnqlwoxbv ; /usr/bin/python3'
Nov 29 06:31:43 np0005539565.novalocal sudo[25945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:43 np0005539565.novalocal python3[25955]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:31:43 np0005539565.novalocal sudo[25945]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:43 np0005539565.novalocal sudo[26258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhaciafzxrclramjktoqrzjdzhucxtz ; /usr/bin/python3'
Nov 29 06:31:43 np0005539565.novalocal sudo[26258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:43 np0005539565.novalocal python3[26266]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397903.0374665-170-226713968270140/source _original_basename=tmpxm3pltfc follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:43 np0005539565.novalocal sudo[26258]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:44 np0005539565.novalocal sudo[26662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gccxzmpxvwxhdzhxongyottkwgmcjvve ; /usr/bin/python3'
Nov 29 06:31:44 np0005539565.novalocal sudo[26662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:44 np0005539565.novalocal python3[26672]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Nov 29 06:31:44 np0005539565.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:31:44 np0005539565.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:31:44 np0005539565.novalocal systemd-hostnamed[26791]: Changed pretty hostname to 'compute-2'
Nov 29 06:31:44 compute-2 systemd-hostnamed[26791]: Hostname set to <compute-2> (static)
Nov 29 06:31:44 compute-2 NetworkManager[7200]: <info>  [1764397904.8012] hostname: static hostname changed from "np0005539565.novalocal" to "compute-2"
Nov 29 06:31:44 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:31:44 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:31:44 compute-2 sudo[26662]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:45 compute-2 sshd-session[24666]: Connection closed by 38.102.83.114 port 34542
Nov 29 06:31:45 compute-2 sshd-session[24604]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:31:45 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 06:31:45 compute-2 systemd[1]: session-7.scope: Consumed 2.136s CPU time.
Nov 29 06:31:45 compute-2 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Nov 29 06:31:45 compute-2 systemd-logind[787]: Removed session 7.
Nov 29 06:31:54 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:31:54 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:31:54 compute-2 systemd[1]: man-db-cache-update.service: Consumed 49.795s CPU time.
Nov 29 06:31:54 compute-2 systemd[1]: run-r86585fea04714828b16300b15fd100b7.service: Deactivated successfully.
Nov 29 06:31:54 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:32:14 compute-2 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 06:32:14 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:32:14 compute-2 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 06:32:14 compute-2 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 06:32:14 compute-2 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 06:35:48 compute-2 sshd-session[29928]: Accepted publickey for zuul from 38.102.83.151 port 49710 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 06:35:48 compute-2 systemd-logind[787]: New session 8 of user zuul.
Nov 29 06:35:48 compute-2 systemd[1]: Started Session 8 of User zuul.
Nov 29 06:35:48 compute-2 sshd-session[29928]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:35:49 compute-2 python3[30004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:35:51 compute-2 sudo[30118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmyrcfddtwneumelcgzvhylwwvftdtrq ; /usr/bin/python3'
Nov 29 06:35:51 compute-2 sudo[30118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:51 compute-2 python3[30120]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:51 compute-2 sudo[30118]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:51 compute-2 sudo[30191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwurkdiucnffomhzpclqifgbyksdnqi ; /usr/bin/python3'
Nov 29 06:35:51 compute-2 sudo[30191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:51 compute-2 python3[30193]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:51 compute-2 sudo[30191]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:51 compute-2 sudo[30217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsbewvhyejclhpzrjlxnxkggzpbjrtxs ; /usr/bin/python3'
Nov 29 06:35:51 compute-2 sudo[30217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:52 compute-2 python3[30219]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:52 compute-2 sudo[30217]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:52 compute-2 sudo[30290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxqnzsfcxtirbcvzbbufuhgsffrefsmc ; /usr/bin/python3'
Nov 29 06:35:52 compute-2 sudo[30290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:52 compute-2 python3[30292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:52 compute-2 sudo[30290]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:52 compute-2 sudo[30316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alcuhufzwgolnzfctnzvjpvroebazndq ; /usr/bin/python3'
Nov 29 06:35:52 compute-2 sudo[30316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:52 compute-2 python3[30318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:52 compute-2 sudo[30316]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:52 compute-2 sudo[30389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrlmwzxlyrqmvkqwwbbxlbpxuturogtx ; /usr/bin/python3'
Nov 29 06:35:52 compute-2 sudo[30389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:53 compute-2 python3[30391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:53 compute-2 sudo[30389]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:53 compute-2 sudo[30415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnlybfbqruufwqhfviktyshxgfqlxdqz ; /usr/bin/python3'
Nov 29 06:35:53 compute-2 sudo[30415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:53 compute-2 python3[30417]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:53 compute-2 sudo[30415]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:53 compute-2 sudo[30488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbttzsvkneywbzmgahfhdmelechfryyu ; /usr/bin/python3'
Nov 29 06:35:53 compute-2 sudo[30488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:53 compute-2 python3[30490]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:53 compute-2 sudo[30488]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:53 compute-2 sudo[30514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enioqfakpvdiuwvubryfktsnllkniwee ; /usr/bin/python3'
Nov 29 06:35:53 compute-2 sudo[30514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:53 compute-2 python3[30516]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:53 compute-2 sudo[30514]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:53 compute-2 sudo[30587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uslotafodgsaxzywxeazgibyafjgkzfr ; /usr/bin/python3'
Nov 29 06:35:53 compute-2 sudo[30587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:54 compute-2 python3[30589]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:54 compute-2 sudo[30587]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:54 compute-2 sudo[30613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinwabtwlfhcntabwwxvwuijcfsnmdqv ; /usr/bin/python3'
Nov 29 06:35:54 compute-2 sudo[30613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:54 compute-2 python3[30615]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:54 compute-2 sudo[30613]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:54 compute-2 sudo[30686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmjsjiloapgxfldbcscflcwwuqgsanp ; /usr/bin/python3'
Nov 29 06:35:54 compute-2 sudo[30686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:54 compute-2 python3[30688]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:54 compute-2 sudo[30686]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:54 compute-2 sudo[30712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkubaiosswjoucqdhvimivpjmpbqvesh ; /usr/bin/python3'
Nov 29 06:35:54 compute-2 sudo[30712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:54 compute-2 python3[30714]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:35:54 compute-2 sudo[30712]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:55 compute-2 sudo[30785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivanoofjdrohadenjzjqhyrhsyzzilyl ; /usr/bin/python3'
Nov 29 06:35:55 compute-2 sudo[30785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:35:55 compute-2 python3[30787]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1747353-34063-187323843788305/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:35:55 compute-2 sudo[30785]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:07 compute-2 sshd-session[30813]: Received disconnect from 114.66.38.28 port 46560:11:  [preauth]
Nov 29 06:36:07 compute-2 sshd-session[30813]: Disconnected from authenticating user root 114.66.38.28 port 46560 [preauth]
Nov 29 06:36:09 compute-2 python3[30838]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:09 compute-2 sshd-session[29931]: Received disconnect from 38.102.83.151 port 49710:11: disconnected by user
Nov 29 06:41:09 compute-2 sshd-session[29931]: Disconnected from user zuul 38.102.83.151 port 49710
Nov 29 06:41:09 compute-2 sshd-session[29928]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:41:09 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 06:41:09 compute-2 systemd[1]: session-8.scope: Consumed 4.717s CPU time.
Nov 29 06:41:09 compute-2 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Nov 29 06:41:09 compute-2 systemd-logind[787]: Removed session 8.
Nov 29 06:52:43 compute-2 sshd-session[30850]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 06:52:43 compute-2 sshd-session[30850]: Connection reset by 45.140.17.97 port 32401
Nov 29 06:57:49 compute-2 sshd-session[30855]: Accepted publickey for zuul from 192.168.122.30 port 45614 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 06:57:49 compute-2 systemd-logind[787]: New session 9 of user zuul.
Nov 29 06:57:49 compute-2 systemd[1]: Started Session 9 of User zuul.
Nov 29 06:57:49 compute-2 sshd-session[30855]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:57:50 compute-2 python3.9[31008]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:57:51 compute-2 sudo[31187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvmdhlycvdakbaxbqwvsmyrimcroicgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399471.0321782-64-278902420259460/AnsiballZ_command.py'
Nov 29 06:57:51 compute-2 sudo[31187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:57:51 compute-2 python3.9[31189]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:58:17 compute-2 sudo[31187]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:18 compute-2 sshd-session[30858]: Connection closed by 192.168.122.30 port 45614
Nov 29 06:58:18 compute-2 sshd-session[30855]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:58:18 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 06:58:18 compute-2 systemd[1]: session-9.scope: Consumed 8.564s CPU time.
Nov 29 06:58:18 compute-2 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Nov 29 06:58:18 compute-2 systemd-logind[787]: Removed session 9.
Nov 29 06:58:43 compute-2 sshd-session[31248]: Accepted publickey for zuul from 192.168.122.30 port 53332 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 06:58:43 compute-2 systemd-logind[787]: New session 10 of user zuul.
Nov 29 06:58:43 compute-2 systemd[1]: Started Session 10 of User zuul.
Nov 29 06:58:43 compute-2 sshd-session[31248]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:58:44 compute-2 python3.9[31401]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 06:58:45 compute-2 python3.9[31575]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:58:46 compute-2 sudo[31725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouddzndmkdrjmqbrmeuilocwhmrgxvup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399525.8772433-100-217629672249183/AnsiballZ_command.py'
Nov 29 06:58:46 compute-2 sudo[31725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:46 compute-2 python3.9[31727]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:58:46 compute-2 sudo[31725]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:48 compute-2 sudo[31878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-redqcbbwowodxofqknaiafcokxanshbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399528.558577-136-83989088861817/AnsiballZ_stat.py'
Nov 29 06:58:48 compute-2 sudo[31878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:49 compute-2 python3.9[31880]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:58:49 compute-2 sudo[31878]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:50 compute-2 sudo[32030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucjdbrgqzmdxdaolncxdajbxcjatmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399529.84669-161-99197853000076/AnsiballZ_file.py'
Nov 29 06:58:50 compute-2 sudo[32030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:50 compute-2 python3.9[32032]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:58:50 compute-2 sudo[32030]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:51 compute-2 sudo[32182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skykxbdpfxsstvggnuwqlzgxbnvjstrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399530.7239208-185-123489201157412/AnsiballZ_stat.py'
Nov 29 06:58:51 compute-2 sudo[32182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:51 compute-2 python3.9[32184]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:58:51 compute-2 sudo[32182]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:51 compute-2 sudo[32305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlusaeoxlenutzpdlyjnhuszsuqnikny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399530.7239208-185-123489201157412/AnsiballZ_copy.py'
Nov 29 06:58:51 compute-2 sudo[32305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:51 compute-2 python3.9[32307]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399530.7239208-185-123489201157412/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:58:52 compute-2 sudo[32305]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:52 compute-2 sudo[32457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdswgxpmyqmrlylguulojkbeydouvfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399532.195207-230-70171973815299/AnsiballZ_setup.py'
Nov 29 06:58:52 compute-2 sudo[32457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:52 compute-2 python3.9[32459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:58:53 compute-2 sudo[32457]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:53 compute-2 sudo[32613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouquehqcaknhqavdzfsrlqmprnztwgid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399533.2299387-254-25989885766280/AnsiballZ_file.py'
Nov 29 06:58:53 compute-2 sudo[32613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:53 compute-2 python3.9[32615]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:58:53 compute-2 sudo[32613]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:54 compute-2 sudo[32765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdtnxoywisfzbeazmbizbkiudtcghyjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399533.933518-281-40107327390317/AnsiballZ_file.py'
Nov 29 06:58:54 compute-2 sudo[32765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:54 compute-2 python3.9[32767]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:58:54 compute-2 sudo[32765]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:55 compute-2 python3.9[32917]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:58:59 compute-2 python3.9[33170]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:59:00 compute-2 python3.9[33320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:59:02 compute-2 python3.9[33474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:59:03 compute-2 sudo[33630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqeahhvcuwdwnlazmpfosvkistfsiskd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399542.9862535-424-148191886624574/AnsiballZ_setup.py'
Nov 29 06:59:03 compute-2 sudo[33630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:59:03 compute-2 python3.9[33632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:59:03 compute-2 sudo[33630]: pam_unix(sudo:session): session closed for user root
Nov 29 06:59:04 compute-2 sudo[33714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhckiihtchugjmmublgmvwowppeyioyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399542.9862535-424-148191886624574/AnsiballZ_dnf.py'
Nov 29 06:59:04 compute-2 sudo[33714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:59:04 compute-2 python3.9[33716]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:59:56 compute-2 systemd[1]: Reloading.
Nov 29 06:59:56 compute-2 systemd-rc-local-generator[33911]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:59:56 compute-2 systemd[1]: Starting dnf makecache...
Nov 29 06:59:56 compute-2 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 06:59:56 compute-2 dnf[33926]: Failed determining last makecache time.
Nov 29 06:59:56 compute-2 dnf[33926]: delorean-openstack-barbican-42b4c41831408a8e323 116 kB/s | 3.0 kB     00:00
Nov 29 06:59:56 compute-2 dnf[33926]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 137 kB/s | 3.0 kB     00:00
Nov 29 06:59:56 compute-2 systemd[1]: Reloading.
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-cinder-1c00d6490d88e436f26ef 147 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-stevedore-c4acc5639fd2329372142 154 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 systemd-rc-local-generator[33961]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-cloudkitty-tests-tempest-2c80f8 128 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-os-net-config-9758ab42364673d01bc5014e 134 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 153 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-designate-tests-tempest-347fdbc 155 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-glance-1fd12c29b339f30fe823e 168 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 172 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-manila-3c01b7181572c95dac462 142 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-whitebox-neutron-tests-tempest- 154 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-octavia-ba397f07a7331190208c 170 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-watcher-c014f81a8647287f6dcc 161 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-tcib-1124124ec06aadbac34f0d340b 169 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 systemd[1]: Reloading.
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 156 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-swift-dc98a8463506ac520c469a 156 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-python-tempestconf-8515371b7cceebd4282 131 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 systemd-rc-local-generator[34015]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:59:57 compute-2 dnf[33926]: delorean-openstack-heat-ui-013accbfd179753bc3f0 146 kB/s | 3.0 kB     00:00
Nov 29 06:59:57 compute-2 dnf[33926]: CentOS Stream 9 - BaseOS                         73 kB/s | 7.3 kB     00:00
Nov 29 06:59:57 compute-2 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 06:59:57 compute-2 dnf[33926]: CentOS Stream 9 - AppStream                      66 kB/s | 7.4 kB     00:00
Nov 29 06:59:57 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 06:59:57 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 06:59:57 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 06:59:58 compute-2 dnf[33926]: CentOS Stream 9 - CRB                            32 kB/s | 7.2 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: CentOS Stream 9 - Extras packages                85 kB/s | 8.3 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: dlrn-antelope-testing                           154 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: dlrn-antelope-build-deps                        156 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: centos9-rabbitmq                                110 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: centos9-storage                                 124 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: centos9-opstools                                121 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: NFV SIG OpenvSwitch                             116 kB/s | 3.0 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: repo-setup-centos-appstream                     159 kB/s | 4.4 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: repo-setup-centos-baseos                        149 kB/s | 3.9 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: repo-setup-centos-highavailability              175 kB/s | 3.9 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: repo-setup-centos-powertools                    173 kB/s | 4.3 kB     00:00
Nov 29 06:59:58 compute-2 dnf[33926]: Extra Packages for Enterprise Linux 9 - x86_64  267 kB/s |  33 kB     00:00
Nov 29 06:59:59 compute-2 dnf[33926]: Metadata cache created.
Nov 29 06:59:59 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 06:59:59 compute-2 systemd[1]: Finished dnf makecache.
Nov 29 06:59:59 compute-2 systemd[1]: dnf-makecache.service: Consumed 2.112s CPU time.
Nov 29 07:01:01 compute-2 CROND[34245]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 07:01:01 compute-2 run-parts[34248]: (/etc/cron.hourly) starting 0anacron
Nov 29 07:01:01 compute-2 anacron[34256]: Anacron started on 2025-11-29
Nov 29 07:01:01 compute-2 anacron[34256]: Will run job `cron.daily' in 34 min.
Nov 29 07:01:01 compute-2 anacron[34256]: Will run job `cron.weekly' in 54 min.
Nov 29 07:01:01 compute-2 anacron[34256]: Will run job `cron.monthly' in 74 min.
Nov 29 07:01:01 compute-2 anacron[34256]: Jobs will be executed sequentially
Nov 29 07:01:01 compute-2 run-parts[34258]: (/etc/cron.hourly) finished 0anacron
Nov 29 07:01:01 compute-2 CROND[34244]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 07:01:16 compute-2 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:01:16 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:01:17 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 07:01:17 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:01:17 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:01:17 compute-2 systemd[1]: Reloading.
Nov 29 07:01:17 compute-2 systemd-rc-local-generator[34399]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:01:17 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:01:18 compute-2 sudo[33714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:18 compute-2 sudo[35310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrwewluoddhzdrjrhzarzvaachelluif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399678.605596-461-58449657796378/AnsiballZ_command.py'
Nov 29 07:01:18 compute-2 sudo[35310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:19 compute-2 python3.9[35312]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:01:20 compute-2 sudo[35310]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:21 compute-2 sudo[35591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycchlbqwdkspmesamreydbhostvcayao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399680.6193314-485-217340050271148/AnsiballZ_selinux.py'
Nov 29 07:01:21 compute-2 sudo[35591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:21 compute-2 python3.9[35593]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:01:21 compute-2 sudo[35591]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:21 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:01:21 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:01:21 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.592s CPU time.
Nov 29 07:01:21 compute-2 systemd[1]: run-r22311da154b34de4a7e17690ab2908bf.service: Deactivated successfully.
Nov 29 07:01:22 compute-2 sudo[35745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzumxgngkmwgrmipuewzyezdzqtykmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399682.1256943-518-58429508779742/AnsiballZ_command.py'
Nov 29 07:01:22 compute-2 sudo[35745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:22 compute-2 python3.9[35747]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:01:25 compute-2 sudo[35745]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:26 compute-2 sudo[35898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkndswwmgugltfhvmnlokkqpbzzbpwta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399685.6994023-541-2285379975485/AnsiballZ_file.py'
Nov 29 07:01:26 compute-2 sudo[35898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:30 compute-2 python3.9[35900]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:30 compute-2 sudo[35898]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:31 compute-2 sudo[36050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmeyvqchvmwfcbobhwqpmyjabszopjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399691.083404-566-84358814567612/AnsiballZ_mount.py'
Nov 29 07:01:31 compute-2 sudo[36050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:32 compute-2 python3.9[36052]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:01:32 compute-2 sudo[36050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:38 compute-2 sudo[36203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfjkobahlxrerqrrddvgalzomluekyfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399698.0752223-650-55860052831497/AnsiballZ_file.py'
Nov 29 07:01:38 compute-2 sudo[36203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:39 compute-2 python3.9[36205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:01:39 compute-2 sudo[36203]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:39 compute-2 sudo[36355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlqigmdjzoxqrpnerkcsdqgmpxfbxig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399699.5544362-674-187650394459978/AnsiballZ_stat.py'
Nov 29 07:01:39 compute-2 sudo[36355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:45 compute-2 python3.9[36357]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:01:45 compute-2 sudo[36355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:46 compute-2 sudo[36478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjggioulwaloofrzdkdeuvkcyjxhryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399699.5544362-674-187650394459978/AnsiballZ_copy.py'
Nov 29 07:01:46 compute-2 sudo[36478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:46 compute-2 python3.9[36480]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399699.5544362-674-187650394459978/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:46 compute-2 sudo[36478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:47 compute-2 sudo[36630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzpyjglqeqomcuqosqlmwpdfwnwlvdry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399707.1041873-745-241485587257393/AnsiballZ_stat.py'
Nov 29 07:01:47 compute-2 sudo[36630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:47 compute-2 python3.9[36632]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:01:47 compute-2 sudo[36630]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:48 compute-2 sudo[36782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzptaxvmgpodwrgystdfcxmxcsqpvgmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399707.883567-769-58394208208877/AnsiballZ_command.py'
Nov 29 07:01:48 compute-2 sudo[36782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:48 compute-2 python3.9[36784]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:01:48 compute-2 sudo[36782]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:48 compute-2 sudo[36935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjplfgdmarhtsstvnfmiunaxqqdxuznj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399708.655175-793-96306094375522/AnsiballZ_file.py'
Nov 29 07:01:48 compute-2 sudo[36935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:49 compute-2 python3.9[36937]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:49 compute-2 sudo[36935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:50 compute-2 sudo[37087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombbkhbqigsactmrangiijqtprcorbpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399709.6667883-827-33846066350467/AnsiballZ_getent.py'
Nov 29 07:01:50 compute-2 sudo[37087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:50 compute-2 python3.9[37089]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:01:50 compute-2 sudo[37087]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:50 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:01:50 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:01:51 compute-2 sudo[37241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxkuiichpghafeoqcrefscrtxuvglxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399710.724982-851-205201146662117/AnsiballZ_group.py'
Nov 29 07:01:51 compute-2 sudo[37241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:51 compute-2 python3.9[37243]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:01:51 compute-2 groupadd[37244]: group added to /etc/group: name=qemu, GID=107
Nov 29 07:01:51 compute-2 groupadd[37244]: group added to /etc/gshadow: name=qemu
Nov 29 07:01:51 compute-2 groupadd[37244]: new group: name=qemu, GID=107
Nov 29 07:01:51 compute-2 sudo[37241]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:52 compute-2 sudo[37399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjbpyvptrfnwgmjsgasxqpdzxperdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399712.005136-875-240494600094927/AnsiballZ_user.py'
Nov 29 07:01:52 compute-2 sudo[37399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:52 compute-2 python3.9[37401]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:01:52 compute-2 useradd[37403]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:01:52 compute-2 sudo[37399]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:53 compute-2 sudo[37559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eowxzkkghanifjwyypgpmypdpkcfmxwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399713.4463274-900-273118791314063/AnsiballZ_getent.py'
Nov 29 07:01:53 compute-2 sudo[37559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:53 compute-2 python3.9[37561]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:01:53 compute-2 sudo[37559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:54 compute-2 sudo[37712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqmwjyzqkkqqydlqbqqrtdcdpuidqpsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399714.2561505-923-217887908839515/AnsiballZ_group.py'
Nov 29 07:01:54 compute-2 sudo[37712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:54 compute-2 python3.9[37714]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:01:54 compute-2 groupadd[37715]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 29 07:01:54 compute-2 groupadd[37715]: group added to /etc/gshadow: name=hugetlbfs
Nov 29 07:01:54 compute-2 groupadd[37715]: new group: name=hugetlbfs, GID=42477
Nov 29 07:01:54 compute-2 sudo[37712]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:55 compute-2 sudo[37870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvjmzimzooirmsllgvmbiscwkjdnzcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399715.1779375-949-176488219069644/AnsiballZ_file.py'
Nov 29 07:01:55 compute-2 sudo[37870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:55 compute-2 python3.9[37872]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:01:55 compute-2 sudo[37870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:56 compute-2 sudo[38022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifijcaaljzucwjvlfvnnjbvzngmsksvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399716.3535388-983-27746327943299/AnsiballZ_dnf.py'
Nov 29 07:01:56 compute-2 sudo[38022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:56 compute-2 python3.9[38024]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:01:59 compute-2 sudo[38022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:00 compute-2 sudo[38175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isrjnlsyaxlrspfecykgpfzkfjywadig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399720.1883826-1007-120216415069444/AnsiballZ_file.py'
Nov 29 07:02:00 compute-2 sudo[38175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:00 compute-2 python3.9[38177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:02:00 compute-2 sudo[38175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:01 compute-2 sudo[38327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpugusfjiuizxgphxtlxiuovewihttnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399720.9113088-1030-230941113409813/AnsiballZ_stat.py'
Nov 29 07:02:01 compute-2 sudo[38327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:01 compute-2 python3.9[38329]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:02:01 compute-2 sudo[38327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:01 compute-2 sudo[38450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiuioswhvvmngddovjsudafqlrzxhswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399720.9113088-1030-230941113409813/AnsiballZ_copy.py'
Nov 29 07:02:01 compute-2 sudo[38450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:01 compute-2 python3.9[38452]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399720.9113088-1030-230941113409813/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:02:01 compute-2 sudo[38450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:02 compute-2 sudo[38602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnqosywswqmsrbdtogqdtoqhuvavhkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399722.2629774-1076-239702799723896/AnsiballZ_systemd.py'
Nov 29 07:02:02 compute-2 sudo[38602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:03 compute-2 python3.9[38604]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:02:03 compute-2 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:02:03 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 07:02:03 compute-2 kernel: Bridge firewalling registered
Nov 29 07:02:03 compute-2 systemd-modules-load[38608]: Inserted module 'br_netfilter'
Nov 29 07:02:03 compute-2 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:02:03 compute-2 sudo[38602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:05 compute-2 sudo[38761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwjhbjbylisyjgdzyqutknqevzjgjtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399724.758278-1100-104557682762269/AnsiballZ_stat.py'
Nov 29 07:02:05 compute-2 sudo[38761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:05 compute-2 python3.9[38763]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:02:05 compute-2 sudo[38761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:05 compute-2 sudo[38884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoxoiynglgrikrvbalqimjjwiktfolip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399724.758278-1100-104557682762269/AnsiballZ_copy.py'
Nov 29 07:02:05 compute-2 sudo[38884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:06 compute-2 python3.9[38886]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399724.758278-1100-104557682762269/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:02:06 compute-2 sudo[38884]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:07 compute-2 sudo[39036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oexyofbgslmogvoaprdxdqtawqdiptqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399726.9786081-1154-30317754568131/AnsiballZ_dnf.py'
Nov 29 07:02:07 compute-2 sudo[39036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:07 compute-2 python3.9[39038]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:02:11 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 07:02:11 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 07:02:11 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:02:11 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:02:11 compute-2 systemd[1]: Reloading.
Nov 29 07:02:11 compute-2 systemd-rc-local-generator[39095]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:12 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:02:12 compute-2 sudo[39036]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:13 compute-2 python3.9[40527]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:02:14 compute-2 python3.9[41533]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:02:15 compute-2 python3.9[42590]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:02:16 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:02:16 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:02:16 compute-2 systemd[1]: man-db-cache-update.service: Consumed 5.779s CPU time.
Nov 29 07:02:16 compute-2 systemd[1]: run-r8278f553f9b84317b6ea63b521218a84.service: Deactivated successfully.
Nov 29 07:02:16 compute-2 sudo[43264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snykflrkqfioaprkvhojaksqcowsatvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399736.1484241-1270-53573510511187/AnsiballZ_command.py'
Nov 29 07:02:16 compute-2 sudo[43264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:16 compute-2 python3.9[43266]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:02:16 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:02:17 compute-2 systemd[1]: Starting Authorization Manager...
Nov 29 07:02:17 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:02:17 compute-2 polkitd[43483]: Started polkitd version 0.117
Nov 29 07:02:17 compute-2 polkitd[43483]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:02:17 compute-2 polkitd[43483]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:02:17 compute-2 polkitd[43483]: Finished loading, compiling and executing 2 rules
Nov 29 07:02:17 compute-2 systemd[1]: Started Authorization Manager.
Nov 29 07:02:17 compute-2 polkitd[43483]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 29 07:02:17 compute-2 sudo[43264]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:18 compute-2 sudo[43651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcizygjtkmwjnmqgreozxagsegtojyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399737.9828258-1298-92085831531135/AnsiballZ_systemd.py'
Nov 29 07:02:18 compute-2 sudo[43651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:18 compute-2 python3.9[43653]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:02:18 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:02:18 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:02:18 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:02:18 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:02:19 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:02:19 compute-2 sudo[43651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:19 compute-2 python3.9[43815]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:02:23 compute-2 sudo[43965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvrwkpovkbnqdnmekpxbdwgshjkrdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399743.1652894-1468-100637897514065/AnsiballZ_systemd.py'
Nov 29 07:02:23 compute-2 sudo[43965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:23 compute-2 python3.9[43967]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:02:23 compute-2 systemd[1]: Reloading.
Nov 29 07:02:23 compute-2 systemd-rc-local-generator[43993]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:24 compute-2 sudo[43965]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:24 compute-2 sudo[44153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umleaollxwrvxgbzvzqpfldrlpfatkur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399744.222459-1468-10601206794026/AnsiballZ_systemd.py'
Nov 29 07:02:24 compute-2 sudo[44153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:24 compute-2 python3.9[44155]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:02:24 compute-2 systemd[1]: Reloading.
Nov 29 07:02:24 compute-2 systemd-rc-local-generator[44183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:25 compute-2 sudo[44153]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:25 compute-2 sudo[44341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crzcsdsjaiptdkoaexzcdaikwomrhyqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399745.655517-1517-119174162108376/AnsiballZ_command.py'
Nov 29 07:02:25 compute-2 sudo[44341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:26 compute-2 python3.9[44343]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:02:26 compute-2 sudo[44341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:26 compute-2 sudo[44494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygbqbbnjsrxwhsqlwqcfkkjsaetbzht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399746.4438558-1541-67907905504739/AnsiballZ_command.py'
Nov 29 07:02:26 compute-2 sudo[44494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:26 compute-2 python3.9[44496]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:02:26 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 07:02:27 compute-2 sudo[44494]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:27 compute-2 sudo[44647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqsmyrhdasetrvpcwdfcetngtjxiqkny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399747.2342422-1565-71894172592958/AnsiballZ_command.py'
Nov 29 07:02:27 compute-2 sudo[44647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:27 compute-2 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:02:29 compute-2 sudo[44647]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:29 compute-2 sudo[44809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplgjwehnknlrdqqlykacjvswusijynr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399749.5355256-1589-4206573752479/AnsiballZ_command.py'
Nov 29 07:02:29 compute-2 sudo[44809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:30 compute-2 python3.9[44811]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:02:30 compute-2 sudo[44809]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:30 compute-2 sudo[44962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izykiouyijoekazbclseammqdtlkkztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399750.3027549-1613-57812870302041/AnsiballZ_systemd.py'
Nov 29 07:02:30 compute-2 sudo[44962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:30 compute-2 python3.9[44964]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:02:30 compute-2 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 07:02:30 compute-2 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 07:02:31 compute-2 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 07:02:31 compute-2 systemd[1]: Starting Apply Kernel Variables...
Nov 29 07:02:31 compute-2 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 07:02:31 compute-2 systemd[1]: Finished Apply Kernel Variables.
Nov 29 07:02:31 compute-2 sudo[44962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:31 compute-2 sshd-session[31251]: Connection closed by 192.168.122.30 port 53332
Nov 29 07:02:31 compute-2 sshd-session[31248]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:02:31 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 07:02:31 compute-2 systemd[1]: session-10.scope: Consumed 2min 26.809s CPU time.
Nov 29 07:02:31 compute-2 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Nov 29 07:02:31 compute-2 systemd-logind[787]: Removed session 10.
Nov 29 07:02:36 compute-2 sshd-session[44995]: Accepted publickey for zuul from 192.168.122.30 port 44768 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:02:36 compute-2 systemd-logind[787]: New session 11 of user zuul.
Nov 29 07:02:36 compute-2 systemd[1]: Started Session 11 of User zuul.
Nov 29 07:02:36 compute-2 sshd-session[44995]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:02:37 compute-2 python3.9[45148]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:02:39 compute-2 sudo[45302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brgrvaeewmpvepxryrecloapypbnvzyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399759.2188222-76-278172973466060/AnsiballZ_getent.py'
Nov 29 07:02:39 compute-2 sudo[45302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:39 compute-2 python3.9[45304]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:02:39 compute-2 sudo[45302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:40 compute-2 sudo[45455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsygjtzojbxeinctrvfajmcdajsbsdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399760.1790493-99-71443597459139/AnsiballZ_group.py'
Nov 29 07:02:40 compute-2 sudo[45455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:40 compute-2 python3.9[45457]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:02:40 compute-2 groupadd[45458]: group added to /etc/group: name=openvswitch, GID=42476
Nov 29 07:02:40 compute-2 groupadd[45458]: group added to /etc/gshadow: name=openvswitch
Nov 29 07:02:40 compute-2 groupadd[45458]: new group: name=openvswitch, GID=42476
Nov 29 07:02:40 compute-2 sudo[45455]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:41 compute-2 sudo[45613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjbtlbfdvykjsicfdkvcemgovbobyrzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399761.279352-123-128004712375174/AnsiballZ_user.py'
Nov 29 07:02:41 compute-2 sudo[45613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:42 compute-2 python3.9[45615]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:02:42 compute-2 useradd[45617]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:02:42 compute-2 useradd[45617]: add 'openvswitch' to group 'hugetlbfs'
Nov 29 07:02:42 compute-2 useradd[45617]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 29 07:02:42 compute-2 sudo[45613]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:42 compute-2 sudo[45773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypfnpidvxmtgatcoubjqihhcldxugcmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399762.6554863-153-69761694937883/AnsiballZ_setup.py'
Nov 29 07:02:42 compute-2 sudo[45773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:43 compute-2 python3.9[45775]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:02:43 compute-2 sudo[45773]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:43 compute-2 sudo[45857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwkipxxdhmwlfrimerteegobpdfxslnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399762.6554863-153-69761694937883/AnsiballZ_dnf.py'
Nov 29 07:02:43 compute-2 sudo[45857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:44 compute-2 python3.9[45859]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:02:46 compute-2 sudo[45857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:02:48 compute-2 sudo[46022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfcszdzdijeantkzqamgvdozpazfyjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399767.7026649-195-278498139716742/AnsiballZ_dnf.py'
Nov 29 07:02:48 compute-2 sudo[46022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:02:48 compute-2 python3.9[46024]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:03:08 compute-2 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:03:08 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:03:08 compute-2 groupadd[46047]: group added to /etc/group: name=unbound, GID=993
Nov 29 07:03:08 compute-2 groupadd[46047]: group added to /etc/gshadow: name=unbound
Nov 29 07:03:08 compute-2 groupadd[46047]: new group: name=unbound, GID=993
Nov 29 07:03:09 compute-2 useradd[46054]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 29 07:03:09 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 07:03:09 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 07:03:10 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:03:10 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:03:10 compute-2 systemd[1]: Reloading.
Nov 29 07:03:10 compute-2 systemd-rc-local-generator[46549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:03:10 compute-2 systemd-sysv-generator[46555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:03:10 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:03:12 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:03:12 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:03:12 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.021s CPU time.
Nov 29 07:03:12 compute-2 systemd[1]: run-r8fc835f25a0c43b8904c232b6c319bba.service: Deactivated successfully.
Nov 29 07:03:12 compute-2 sudo[46022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:13 compute-2 sudo[47121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipeewjnqgigichugohjvxuwhbfaswrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399792.686701-219-193544847993159/AnsiballZ_systemd.py'
Nov 29 07:03:13 compute-2 sudo[47121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:13 compute-2 python3.9[47123]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:03:13 compute-2 systemd[1]: Reloading.
Nov 29 07:03:13 compute-2 systemd-sysv-generator[47157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:03:13 compute-2 systemd-rc-local-generator[47153]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:03:13 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 07:03:13 compute-2 chown[47165]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 07:03:14 compute-2 ovs-ctl[47170]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 07:03:14 compute-2 ovs-ctl[47170]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 07:03:14 compute-2 ovs-ctl[47170]: Starting ovsdb-server [  OK  ]
Nov 29 07:03:14 compute-2 ovs-vsctl[47219]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 07:03:14 compute-2 ovs-vsctl[47235]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c8abfd39-a629-4854-b6ed-e2d68f35f5fb\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 07:03:14 compute-2 ovs-ctl[47170]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 07:03:14 compute-2 ovs-ctl[47170]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:03:14 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 07:03:14 compute-2 ovs-vsctl[47244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 29 07:03:14 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 07:03:14 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 07:03:14 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 07:03:14 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 07:03:14 compute-2 ovs-ctl[47288]: Inserting openvswitch module [  OK  ]
Nov 29 07:03:14 compute-2 ovs-ctl[47257]: Starting ovs-vswitchd [  OK  ]
Nov 29 07:03:14 compute-2 ovs-ctl[47257]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:03:14 compute-2 ovs-vsctl[47306]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 29 07:03:14 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 07:03:14 compute-2 systemd[1]: Starting Open vSwitch...
Nov 29 07:03:14 compute-2 systemd[1]: Finished Open vSwitch.
Nov 29 07:03:14 compute-2 sudo[47121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:15 compute-2 python3.9[47457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:03:16 compute-2 sudo[47607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyysbkopgburzjrpjueftqzbejxnmcmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399795.8654096-273-239626616430659/AnsiballZ_sefcontext.py'
Nov 29 07:03:16 compute-2 sudo[47607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:16 compute-2 python3.9[47609]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:03:17 compute-2 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:03:17 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:03:18 compute-2 sudo[47607]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:20 compute-2 python3.9[47764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:03:21 compute-2 sudo[47920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbriqwrpsmykqqdtvmkvyyhkyfdnwyqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399800.7551925-327-61348865294319/AnsiballZ_dnf.py'
Nov 29 07:03:21 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 07:03:21 compute-2 sudo[47920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:21 compute-2 python3.9[47922]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:03:22 compute-2 sudo[47920]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:23 compute-2 sudo[48073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axpdulcjtzncfkspebnpjfscoewjhczx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399802.915435-352-252204517099879/AnsiballZ_command.py'
Nov 29 07:03:23 compute-2 sudo[48073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:23 compute-2 python3.9[48075]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:03:24 compute-2 sudo[48073]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:25 compute-2 sudo[48360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcmyigjvelphzvqwjdppvnxkbbszpir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399804.6152382-375-92980975182825/AnsiballZ_file.py'
Nov 29 07:03:25 compute-2 sudo[48360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:25 compute-2 python3.9[48362]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:03:25 compute-2 sudo[48360]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:26 compute-2 python3.9[48512]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:03:26 compute-2 sudo[48664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dviksybrbvdtillvgykklckytidbhllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399806.343505-423-146865800698264/AnsiballZ_dnf.py'
Nov 29 07:03:26 compute-2 sudo[48664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:26 compute-2 python3.9[48666]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:03:30 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:03:30 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:03:30 compute-2 systemd[1]: Reloading.
Nov 29 07:03:30 compute-2 systemd-rc-local-generator[48704]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:03:30 compute-2 systemd-sysv-generator[48710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:03:30 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:03:30 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:03:30 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:03:30 compute-2 systemd[1]: run-r2c1f0eed4b8e4ea18ac030e908eb30e4.service: Deactivated successfully.
Nov 29 07:03:30 compute-2 sudo[48664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:32 compute-2 sudo[48982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzokvhwgolshckocaxzyokxdcsfxegkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399811.8801384-447-101328233145034/AnsiballZ_systemd.py'
Nov 29 07:03:32 compute-2 sudo[48982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:32 compute-2 python3.9[48984]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:03:32 compute-2 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 07:03:32 compute-2 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 07:03:32 compute-2 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 07:03:32 compute-2 systemd[1]: Stopping Network Manager...
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4835] caught SIGTERM, shutting down normally.
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4855] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4855] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4855] dhcp4 (eth0): state changed no lease
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4858] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:03:32 compute-2 NetworkManager[7200]: <info>  [1764399812.4943] exiting (success)
Nov 29 07:03:32 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:03:32 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:03:32 compute-2 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 07:03:32 compute-2 systemd[1]: Stopped Network Manager.
Nov 29 07:03:32 compute-2 systemd[1]: NetworkManager.service: Consumed 18.404s CPU time, 4.3M memory peak, read 0B from disk, written 31.5K to disk.
Nov 29 07:03:32 compute-2 systemd[1]: Starting Network Manager...
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.5632] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7d4aa993-1c07-4b6d-9a71-a6bd2dda9a8d)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.5635] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.5707] manager[0x560d77407090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 07:03:32 compute-2 systemd[1]: Starting Hostname Service...
Nov 29 07:03:32 compute-2 systemd[1]: Started Hostname Service.
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6714] hostname: hostname: using hostnamed
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6714] hostname: static hostname changed from (none) to "compute-2"
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6721] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6728] manager[0x560d77407090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6728] manager[0x560d77407090]: rfkill: WWAN hardware radio set enabled
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6756] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6769] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6770] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6771] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6771] manager: Networking is enabled by state file
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6774] settings: Loaded settings plugin: keyfile (internal)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6781] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6811] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6824] dhcp: init: Using DHCP client 'internal'
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6827] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6834] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6840] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6849] device (lo): Activation: starting connection 'lo' (45524fb0-33f8-4a9e-bdee-77d73a355010)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6858] device (eth0): carrier: link connected
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6864] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6869] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6870] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6878] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6886] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6893] device (eth1): carrier: link connected
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6898] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6904] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (76219567-d3ca-5abd-b302-35afcca51805) (indicated)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6905] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6911] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6919] device (eth1): Activation: starting connection 'ci-private-network' (76219567-d3ca-5abd-b302-35afcca51805)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6928] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 07:03:32 compute-2 systemd[1]: Started Network Manager.
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6944] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6948] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6951] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6953] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6956] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6958] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6960] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6965] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6971] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6975] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6984] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.6998] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7021] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7027] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 07:03:32 compute-2 systemd[1]: Starting Network Manager Wait Online...
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7109] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7115] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7117] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7119] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7124] device (lo): Activation: successful, device activated.
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7130] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7133] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7138] device (eth1): Activation: successful, device activated.
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7202] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7205] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7209] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7218] device (eth0): Activation: successful, device activated.
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7222] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 07:03:32 compute-2 NetworkManager[48993]: <info>  [1764399812.7226] manager: startup complete
Nov 29 07:03:32 compute-2 sudo[48982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:32 compute-2 systemd[1]: Finished Network Manager Wait Online.
Nov 29 07:03:33 compute-2 sudo[49208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbimeuudinyljfmwvqjxjwetkowjsgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399813.2919674-471-88660334196837/AnsiballZ_dnf.py'
Nov 29 07:03:33 compute-2 sudo[49208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:33 compute-2 python3.9[49210]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:03:38 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:03:38 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:03:38 compute-2 systemd[1]: Reloading.
Nov 29 07:03:38 compute-2 systemd-rc-local-generator[49261]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:03:38 compute-2 systemd-sysv-generator[49265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:03:38 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:03:39 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:03:39 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:03:39 compute-2 systemd[1]: run-r0ccca85708a54cc08506622c51740c9b.service: Deactivated successfully.
Nov 29 07:03:40 compute-2 sudo[49208]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:42 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:03:54 compute-2 sudo[49665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzdwasfodlvmjlkdttocycovvyoucfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399834.4410086-508-204979159076685/AnsiballZ_stat.py'
Nov 29 07:03:54 compute-2 sudo[49665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:54 compute-2 python3.9[49667]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:03:54 compute-2 sudo[49665]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:55 compute-2 sudo[49817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pduuotqublelrphvikadmvlzgblytjhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399835.2363944-534-53743235609250/AnsiballZ_ini_file.py'
Nov 29 07:03:55 compute-2 sudo[49817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:55 compute-2 python3.9[49819]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:03:55 compute-2 sudo[49817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:56 compute-2 sudo[49971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrssxbvhbtzrlntkvkqfzlxebgmmixyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399836.2748835-564-100151087441108/AnsiballZ_ini_file.py'
Nov 29 07:03:56 compute-2 sudo[49971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:56 compute-2 python3.9[49973]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:03:56 compute-2 sudo[49971]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:57 compute-2 sudo[50123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxyahwiatbhycuwphwtwemibtwedvmla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399836.9647858-564-71299510003473/AnsiballZ_ini_file.py'
Nov 29 07:03:57 compute-2 sudo[50123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:57 compute-2 python3.9[50125]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:03:57 compute-2 sudo[50123]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:58 compute-2 sudo[50275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whhfjjmeeueksxrmpzhqneymozlkoccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399837.685671-609-162614105538385/AnsiballZ_ini_file.py'
Nov 29 07:03:58 compute-2 sudo[50275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:58 compute-2 python3.9[50277]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:03:58 compute-2 sudo[50275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:58 compute-2 sudo[50427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopgsmypzcsupujtemrtipnyavwbirmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399838.3390362-609-265425127322608/AnsiballZ_ini_file.py'
Nov 29 07:03:58 compute-2 sudo[50427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:58 compute-2 python3.9[50429]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:03:58 compute-2 sudo[50427]: pam_unix(sudo:session): session closed for user root
Nov 29 07:03:59 compute-2 sudo[50579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-murljbpqwstsbgwlvfswvwexnzwgvnja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399839.1649163-654-2038282471900/AnsiballZ_stat.py'
Nov 29 07:03:59 compute-2 sudo[50579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:03:59 compute-2 python3.9[50581]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:03:59 compute-2 sudo[50579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:00 compute-2 sudo[50702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbwfpqrcftcnnbhnrfwojeeypdmstmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399839.1649163-654-2038282471900/AnsiballZ_copy.py'
Nov 29 07:04:00 compute-2 sudo[50702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:00 compute-2 python3.9[50704]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399839.1649163-654-2038282471900/.source _original_basename=.qc_3jbkl follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:00 compute-2 sudo[50702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:00 compute-2 sudo[50854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvvxaxdvozpooziozvcxaaqtybjeiydt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399840.6147792-699-187943296966109/AnsiballZ_file.py'
Nov 29 07:04:00 compute-2 sudo[50854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:01 compute-2 python3.9[50856]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:01 compute-2 sudo[50854]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:01 compute-2 sudo[51006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gucwkvirhfbonpaqthvieqnvxltwwndr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399841.2977457-723-228093956285847/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 29 07:04:01 compute-2 sudo[51006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:01 compute-2 python3.9[51008]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 07:04:01 compute-2 sudo[51006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:02 compute-2 sudo[51158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giffnimttfcxabkzrwckslmfzmofrgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399842.2260845-751-188294417728082/AnsiballZ_file.py'
Nov 29 07:04:02 compute-2 sudo[51158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:02 compute-2 python3.9[51160]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:02 compute-2 sudo[51158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:02 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 07:04:03 compute-2 sudo[51312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljshijhhgodsgqldiuiznzdesrtaeix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399843.1761026-780-62487524054529/AnsiballZ_stat.py'
Nov 29 07:04:03 compute-2 sudo[51312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:03 compute-2 sudo[51312]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:04 compute-2 sudo[51435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbrjezthqeswajlrowpxfjwbejsvbstf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399843.1761026-780-62487524054529/AnsiballZ_copy.py'
Nov 29 07:04:04 compute-2 sudo[51435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:04 compute-2 sudo[51435]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:04 compute-2 sudo[51587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrqyhtzsyvwlzzjoymqpzdnruuzuabu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399844.594557-825-74807584820891/AnsiballZ_slurp.py'
Nov 29 07:04:04 compute-2 sudo[51587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:05 compute-2 python3.9[51589]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 07:04:05 compute-2 sudo[51587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:06 compute-2 sudo[51762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgfvjdqmqwybylvmfhrbinhjxrzashea ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399845.5224857-853-182665534857929/async_wrapper.py j754297154475 300 /home/zuul/.ansible/tmp/ansible-tmp-1764399845.5224857-853-182665534857929/AnsiballZ_edpm_os_net_config.py _'
Nov 29 07:04:06 compute-2 sudo[51762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:06 compute-2 ansible-async_wrapper.py[51764]: Invoked with j754297154475 300 /home/zuul/.ansible/tmp/ansible-tmp-1764399845.5224857-853-182665534857929/AnsiballZ_edpm_os_net_config.py _
Nov 29 07:04:06 compute-2 ansible-async_wrapper.py[51767]: Starting module and watcher
Nov 29 07:04:06 compute-2 ansible-async_wrapper.py[51767]: Start watching 51768 (300)
Nov 29 07:04:06 compute-2 ansible-async_wrapper.py[51768]: Start module (51768)
Nov 29 07:04:06 compute-2 ansible-async_wrapper.py[51764]: Return async_wrapper task started.
Nov 29 07:04:06 compute-2 sudo[51762]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:06 compute-2 python3.9[51769]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 07:04:07 compute-2 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 07:04:07 compute-2 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 07:04:07 compute-2 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 07:04:07 compute-2 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 07:04:07 compute-2 kernel: cfg80211: failed to load regulatory.db
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4137] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4156] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4709] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4710] audit: op="connection-add" uuid="c1ea1ee1-7339-47a9-be24-8e54bda5c022" name="br-ex-br" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4726] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4727] audit: op="connection-add" uuid="f76aa1b6-18ae-4484-9bff-25405eecc7e1" name="br-ex-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4738] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4739] audit: op="connection-add" uuid="cb7cff5c-25e9-4ace-b25c-385d9de64365" name="eth1-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4755] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4756] audit: op="connection-add" uuid="5a59bc20-fb50-4a4e-b99d-4b3ecd2aa80a" name="vlan20-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4768] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4770] audit: op="connection-add" uuid="a58082af-ee16-4910-b413-e451df816771" name="vlan21-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4780] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4782] audit: op="connection-add" uuid="4671b770-1507-456f-a14e-9852f9b909cf" name="vlan22-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4793] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4794] audit: op="connection-add" uuid="9562884b-fb07-4a9b-9e28-138afeefb351" name="vlan23-port" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4817] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4833] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4835] audit: op="connection-add" uuid="308f7fd5-4f56-41df-8794-2fa0ce5433d0" name="br-ex-if" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4879] audit: op="connection-update" uuid="76219567-d3ca-5abd-b302-35afcca51805" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.routing-rules,ipv6.routes,ipv6.dns,ipv4.method,ipv4.addresses,ipv4.routing-rules,ipv4.routes,ipv4.dns,ipv4.never-default,connection.timestamp,connection.port-type,connection.slave-type,connection.master,connection.controller,ovs-external-ids.data,ovs-interface.type" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4895] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4896] audit: op="connection-add" uuid="79f0a4c9-beda-421a-82c7-5a5d5c7623e1" name="vlan20-if" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4914] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4916] audit: op="connection-add" uuid="c688e1f2-d7b6-43dd-9e9b-a18bb55b7129" name="vlan21-if" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4932] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4934] audit: op="connection-add" uuid="4484f9e4-6b47-47f4-a7cf-de708f127442" name="vlan22-if" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4950] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4952] audit: op="connection-add" uuid="44002dd5-efca-42df-8a96-2e2f6af7a4ac" name="vlan23-if" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4968] audit: op="connection-delete" uuid="266654df-4486-3314-93a9-b69e59309012" name="Wired connection 1" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.4982] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5143] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5147] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c1ea1ee1-7339-47a9-be24-8e54bda5c022)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5147] audit: op="connection-activate" uuid="c1ea1ee1-7339-47a9-be24-8e54bda5c022" name="br-ex-br" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5149] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5155] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5158] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f76aa1b6-18ae-4484-9bff-25405eecc7e1)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5160] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5165] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5168] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (cb7cff5c-25e9-4ace-b25c-385d9de64365)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5170] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5176] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5180] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (5a59bc20-fb50-4a4e-b99d-4b3ecd2aa80a)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5182] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5188] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5192] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (a58082af-ee16-4910-b413-e451df816771)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5193] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5199] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5202] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4671b770-1507-456f-a14e-9852f9b909cf)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5204] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5210] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5214] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9562884b-fb07-4a9b-9e28-138afeefb351)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5215] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5217] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5218] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5225] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5229] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5233] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (308f7fd5-4f56-41df-8794-2fa0ce5433d0)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5234] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5237] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5238] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5239] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5240] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5251] device (eth1): disconnecting for new activation request.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5251] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5254] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5255] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5257] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5259] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5264] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5267] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (79f0a4c9-beda-421a-82c7-5a5d5c7623e1)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5268] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5271] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5272] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5275] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5277] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5281] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5285] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c688e1f2-d7b6-43dd-9e9b-a18bb55b7129)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5286] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5288] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5289] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5291] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5293] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5297] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5301] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (4484f9e4-6b47-47f4-a7cf-de708f127442)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5301] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5304] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5305] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5307] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5309] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5313] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5317] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (44002dd5-efca-42df-8a96-2e2f6af7a4ac)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5317] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5320] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5321] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5322] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5324] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5336] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5337] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5340] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5341] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5347] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5352] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5355] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5368] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5371] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5378] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 kernel: ovs-system: entered promiscuous mode
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5383] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5387] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5390] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 kernel: Timeout policy base is empty
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5394] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5399] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5402] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5404] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5409] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5415] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5419] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5421] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 systemd-udevd[51775]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5426] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5431] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5431] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5431] dhcp4 (eth0): state changed no lease
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5433] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5445] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5449] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51770 uid=0 result="fail" reason="Device is not activated"
Nov 29 07:04:08 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5492] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5501] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5528] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5546] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5564] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 07:04:08 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5616] device (eth1): disconnecting for new activation request.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5617] audit: op="connection-activate" uuid="76219567-d3ca-5abd-b302-35afcca51805" name="ci-private-network" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5621] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5727] device (eth1): Activation: starting connection 'ci-private-network' (76219567-d3ca-5abd-b302-35afcca51805)
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5734] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5749] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5755] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5764] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5770] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51770 uid=0 result="success"
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5780] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5782] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5784] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5786] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5788] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5790] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5794] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5804] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5810] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5815] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5821] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5826] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5831] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5837] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5843] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5848] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5854] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5859] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5865] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5872] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5877] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5915] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5925] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.5934] device (eth1): Activation: successful, device activated.
Nov 29 07:04:08 compute-2 kernel: br-ex: entered promiscuous mode
Nov 29 07:04:08 compute-2 kernel: vlan22: entered promiscuous mode
Nov 29 07:04:08 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 07:04:08 compute-2 systemd-udevd[51774]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6277] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6288] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 kernel: vlan21: entered promiscuous mode
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6342] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6346] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6352] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6377] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:04:08 compute-2 kernel: vlan20: entered promiscuous mode
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6395] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 kernel: vlan23: entered promiscuous mode
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6480] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6486] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6491] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6497] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6511] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6521] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6538] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6548] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6550] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6556] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6591] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6592] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6594] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6601] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6614] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6637] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6638] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:04:08 compute-2 NetworkManager[48993]: <info>  [1764399848.6644] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:04:09 compute-2 NetworkManager[48993]: <info>  [1764399849.7795] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51770 uid=0 result="success"
Nov 29 07:04:09 compute-2 sudo[52125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvqipgzqpktqesgabduwvtfoxstsgon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399849.5113008-853-90266635678273/AnsiballZ_async_status.py'
Nov 29 07:04:09 compute-2 sudo[52125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:09 compute-2 NetworkManager[48993]: <info>  [1764399849.9686] checkpoint[0x560d773dc950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 07:04:09 compute-2 NetworkManager[48993]: <info>  [1764399849.9690] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 python3.9[52127]: ansible-ansible.legacy.async_status Invoked with jid=j754297154475.51764 mode=status _async_dir=/root/.ansible_async
Nov 29 07:04:10 compute-2 sudo[52125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.3106] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.3119] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.5573] audit: op="networking-control" arg="global-dns-configuration" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.5594] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.5620] audit: op="networking-control" arg="global-dns-configuration" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.5644] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.7078] checkpoint[0x560d773dca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 07:04:10 compute-2 NetworkManager[48993]: <info>  [1764399850.7082] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51770 uid=0 result="success"
Nov 29 07:04:10 compute-2 ansible-async_wrapper.py[51768]: Module complete (51768)
Nov 29 07:04:11 compute-2 ansible-async_wrapper.py[51767]: Done in kid B.
Nov 29 07:04:13 compute-2 sudo[52231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxvqhifknlmzkphxykcabkwtylihegb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399849.5113008-853-90266635678273/AnsiballZ_async_status.py'
Nov 29 07:04:13 compute-2 sudo[52231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:13 compute-2 python3.9[52233]: ansible-ansible.legacy.async_status Invoked with jid=j754297154475.51764 mode=status _async_dir=/root/.ansible_async
Nov 29 07:04:13 compute-2 sudo[52231]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:13 compute-2 sudo[52331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttbugelgxwdofjtgrudpaabyazajrrhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399849.5113008-853-90266635678273/AnsiballZ_async_status.py'
Nov 29 07:04:13 compute-2 sudo[52331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:14 compute-2 python3.9[52333]: ansible-ansible.legacy.async_status Invoked with jid=j754297154475.51764 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:04:14 compute-2 sudo[52331]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:14 compute-2 sudo[52483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddkbdrurniknkatrvfzsphuliyzqqsuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399854.49863-933-79322238535662/AnsiballZ_stat.py'
Nov 29 07:04:14 compute-2 sudo[52483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:15 compute-2 python3.9[52485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:15 compute-2 sudo[52483]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:15 compute-2 sudo[52606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ighzvjezuoablqfkuftucgccmlpgzqrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399854.49863-933-79322238535662/AnsiballZ_copy.py'
Nov 29 07:04:15 compute-2 sudo[52606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:15 compute-2 python3.9[52608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399854.49863-933-79322238535662/.source.returncode _original_basename=.5h0_ol53 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:15 compute-2 sudo[52606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:16 compute-2 sudo[52758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agoiiwasrghoxjhyeihhfodweklrjeux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399856.0667882-981-235461682118393/AnsiballZ_stat.py'
Nov 29 07:04:16 compute-2 sudo[52758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:16 compute-2 python3.9[52760]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:16 compute-2 sudo[52758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:16 compute-2 sudo[52882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emihuvsaeoseqeyiakuuqhjpmsxqlqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399856.0667882-981-235461682118393/AnsiballZ_copy.py'
Nov 29 07:04:16 compute-2 sudo[52882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:17 compute-2 python3.9[52884]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399856.0667882-981-235461682118393/.source.cfg _original_basename=.3v73nimq follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:17 compute-2 sudo[52882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:17 compute-2 sudo[53034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeeeidsyjhjjnsoppohfacvwhfdflbbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399857.382804-1026-237472829621659/AnsiballZ_systemd.py'
Nov 29 07:04:17 compute-2 sudo[53034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:18 compute-2 python3.9[53036]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:04:18 compute-2 systemd[1]: Reloading Network Manager...
Nov 29 07:04:18 compute-2 NetworkManager[48993]: <info>  [1764399858.1073] audit: op="reload" arg="0" pid=53040 uid=0 result="success"
Nov 29 07:04:18 compute-2 NetworkManager[48993]: <info>  [1764399858.1080] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 07:04:18 compute-2 systemd[1]: Reloaded Network Manager.
Nov 29 07:04:18 compute-2 sudo[53034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:18 compute-2 sshd-session[44998]: Connection closed by 192.168.122.30 port 44768
Nov 29 07:04:18 compute-2 sshd-session[44995]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:04:18 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 07:04:18 compute-2 systemd[1]: session-11.scope: Consumed 1min 834ms CPU time.
Nov 29 07:04:18 compute-2 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Nov 29 07:04:18 compute-2 systemd-logind[787]: Removed session 11.
Nov 29 07:04:23 compute-2 sshd-session[53071]: Accepted publickey for zuul from 192.168.122.30 port 45074 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:04:23 compute-2 systemd-logind[787]: New session 12 of user zuul.
Nov 29 07:04:23 compute-2 systemd[1]: Started Session 12 of User zuul.
Nov 29 07:04:23 compute-2 sshd-session[53071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:04:24 compute-2 python3.9[53224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:04:25 compute-2 python3.9[53378]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:04:27 compute-2 python3.9[53572]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:04:27 compute-2 sshd-session[53074]: Connection closed by 192.168.122.30 port 45074
Nov 29 07:04:27 compute-2 sshd-session[53071]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:04:27 compute-2 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Nov 29 07:04:27 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 07:04:27 compute-2 systemd[1]: session-12.scope: Consumed 2.694s CPU time.
Nov 29 07:04:27 compute-2 systemd-logind[787]: Removed session 12.
Nov 29 07:04:28 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:04:33 compute-2 sshd-session[53601]: Accepted publickey for zuul from 192.168.122.30 port 39768 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:04:33 compute-2 systemd-logind[787]: New session 13 of user zuul.
Nov 29 07:04:33 compute-2 systemd[1]: Started Session 13 of User zuul.
Nov 29 07:04:33 compute-2 sshd-session[53601]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:04:35 compute-2 python3.9[53754]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:04:35 compute-2 python3.9[53908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:04:36 compute-2 sudo[54063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnrnehperplycthoqyhquutjqfutzqfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399876.5204468-87-75482260426004/AnsiballZ_setup.py'
Nov 29 07:04:36 compute-2 sudo[54063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:37 compute-2 python3.9[54065]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:04:37 compute-2 sudo[54063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:37 compute-2 sudo[54147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawbxukvausjkixcejzczdsehwqtrvvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399876.5204468-87-75482260426004/AnsiballZ_dnf.py'
Nov 29 07:04:37 compute-2 sudo[54147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:37 compute-2 python3.9[54149]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:04:39 compute-2 sudo[54147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:40 compute-2 sudo[54301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlvdlxzvrpvoidvgfrkuodsyjfovnfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399879.8483841-123-198407757086057/AnsiballZ_setup.py'
Nov 29 07:04:40 compute-2 sudo[54301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:40 compute-2 python3.9[54303]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:04:40 compute-2 sudo[54301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:41 compute-2 sudo[54496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzokkhpqvtxaaawfowbgwvmgbyhlmcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399881.2208304-156-86717885230424/AnsiballZ_file.py'
Nov 29 07:04:41 compute-2 sudo[54496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:41 compute-2 python3.9[54498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:41 compute-2 sudo[54496]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:42 compute-2 sudo[54648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbnmncshuifrjxbqubsblzdpduacqbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399882.062197-180-198612080367256/AnsiballZ_command.py'
Nov 29 07:04:42 compute-2 sudo[54648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:42 compute-2 python3.9[54650]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:04:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat3316141015-merged.mount: Deactivated successfully.
Nov 29 07:04:43 compute-2 podman[54651]: 2025-11-29 07:04:43.811054173 +0000 UTC m=+1.041578509 system refresh
Nov 29 07:04:43 compute-2 sudo[54648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:44 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:04:44 compute-2 sudo[54811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvfyntxtfrciimdjcirugdnqyumysynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399884.0982769-204-32963423143470/AnsiballZ_stat.py'
Nov 29 07:04:44 compute-2 sudo[54811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:44 compute-2 python3.9[54813]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:44 compute-2 sudo[54811]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:45 compute-2 sudo[54934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-torzlpemvvrdeqdiuzcjjhzxwkpgjlfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399884.0982769-204-32963423143470/AnsiballZ_copy.py'
Nov 29 07:04:45 compute-2 sudo[54934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:45 compute-2 python3.9[54936]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399884.0982769-204-32963423143470/.source.json follow=False _original_basename=podman_network_config.j2 checksum=fed561860ee0d9460928e3759916c26cc8613706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:45 compute-2 sudo[54934]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:46 compute-2 sudo[55086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khygyangckokeebywcsjghqpcwowqyfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399885.798758-249-177227111019444/AnsiballZ_stat.py'
Nov 29 07:04:46 compute-2 sudo[55086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:46 compute-2 python3.9[55088]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:46 compute-2 sudo[55086]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:46 compute-2 sudo[55209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgymzswdanjmxnmkduquasrxhwsbyidm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399885.798758-249-177227111019444/AnsiballZ_copy.py'
Nov 29 07:04:46 compute-2 sudo[55209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:46 compute-2 python3.9[55211]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399885.798758-249-177227111019444/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f27f86218e398aa50b444b0bf8b9e443f3d2c120 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:47 compute-2 sudo[55209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:47 compute-2 sudo[55361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgwbpwedbeeirglsdvtjhxuqjgcngyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399887.318519-297-238305439936043/AnsiballZ_ini_file.py'
Nov 29 07:04:47 compute-2 sudo[55361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:48 compute-2 python3.9[55363]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:48 compute-2 sudo[55361]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:48 compute-2 sudo[55513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gosonsnwylhzfuynxhrxwzgnbwtygktw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399888.217392-297-229854457206661/AnsiballZ_ini_file.py'
Nov 29 07:04:48 compute-2 sudo[55513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:48 compute-2 python3.9[55515]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:48 compute-2 sudo[55513]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:49 compute-2 sudo[55665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlagjyciqezwzhdufxxiggljymhvqjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399888.9907174-297-278154294371517/AnsiballZ_ini_file.py'
Nov 29 07:04:49 compute-2 sudo[55665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:49 compute-2 python3.9[55667]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:49 compute-2 sudo[55665]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:50 compute-2 sudo[55817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhzwjcozvoageeodwtmmqzlfodbmqjti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399890.0918405-297-162051433952550/AnsiballZ_ini_file.py'
Nov 29 07:04:50 compute-2 sudo[55817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:50 compute-2 python3.9[55819]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:50 compute-2 sudo[55817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:51 compute-2 sudo[55969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehpgclfbaaotygtvannfudxmlgdlwzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399891.0843694-390-139990996917381/AnsiballZ_dnf.py'
Nov 29 07:04:51 compute-2 sudo[55969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:51 compute-2 python3.9[55971]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:04:52 compute-2 sudo[55969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:54 compute-2 sudo[56122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npsnweinqhmbxkoqguicjjsqsqbpsirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399894.0361884-423-126625976738900/AnsiballZ_setup.py'
Nov 29 07:04:54 compute-2 sudo[56122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:54 compute-2 python3.9[56124]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:04:54 compute-2 sudo[56122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:55 compute-2 sudo[56276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoswpoiutlevclkgtvpzturicsytckpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399895.1569006-447-17154435579385/AnsiballZ_stat.py'
Nov 29 07:04:55 compute-2 sudo[56276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:55 compute-2 python3.9[56278]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:04:55 compute-2 sudo[56276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:56 compute-2 sudo[56428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlpjdvfwlbuxvtlbtbfkevjppcaxunu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399895.978885-474-183705261770019/AnsiballZ_stat.py'
Nov 29 07:04:56 compute-2 sudo[56428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:56 compute-2 python3.9[56430]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:04:56 compute-2 sudo[56428]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:57 compute-2 sudo[56580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsqlupptttrplqparcgyosvjsjshwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399896.7910335-504-163194007634091/AnsiballZ_command.py'
Nov 29 07:04:57 compute-2 sudo[56580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:57 compute-2 python3.9[56582]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:04:57 compute-2 sudo[56580]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:58 compute-2 sudo[56733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evuoexmswsoyjoknayvunqbovvwzhhul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399897.627053-535-107609733446872/AnsiballZ_service_facts.py'
Nov 29 07:04:58 compute-2 sudo[56733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:58 compute-2 python3.9[56735]: ansible-service_facts Invoked
Nov 29 07:04:58 compute-2 network[56752]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:04:58 compute-2 network[56753]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:04:58 compute-2 network[56754]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:05:01 compute-2 sudo[56733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:02 compute-2 sudo[57037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbzffrinatlkwsyekqhtmupmtzeftan ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764399902.0887-580-96715622443635/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764399902.0887-580-96715622443635/args'
Nov 29 07:05:02 compute-2 sudo[57037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:02 compute-2 sudo[57037]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:03 compute-2 sudo[57204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjhvpbklpifkttvwkdgifqkafpamppno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399902.8858578-612-182170260236511/AnsiballZ_dnf.py'
Nov 29 07:05:03 compute-2 sudo[57204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:03 compute-2 python3.9[57206]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:05:05 compute-2 sudo[57204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:06 compute-2 sudo[57357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvmkcfbhmrtetddzwbyldszodcansad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399905.68268-652-47553860377132/AnsiballZ_package_facts.py'
Nov 29 07:05:06 compute-2 sudo[57357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:06 compute-2 python3.9[57359]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:05:06 compute-2 sudo[57357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:08 compute-2 sudo[57509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agratzvcmdrjqpyvsibqnzbitaubjnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399907.660174-683-32933592692182/AnsiballZ_stat.py'
Nov 29 07:05:08 compute-2 sudo[57509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:08 compute-2 python3.9[57511]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:08 compute-2 sudo[57509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:08 compute-2 sudo[57634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjmzyiygomrcoirjncfldkgovbqejdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399907.660174-683-32933592692182/AnsiballZ_copy.py'
Nov 29 07:05:08 compute-2 sudo[57634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:09 compute-2 python3.9[57636]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399907.660174-683-32933592692182/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:09 compute-2 sudo[57634]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:09 compute-2 sudo[57788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnagrtrcasjwnocgrorqgwyzcglvdlvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399909.3671157-728-111129835848864/AnsiballZ_stat.py'
Nov 29 07:05:09 compute-2 sudo[57788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:09 compute-2 python3.9[57790]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:09 compute-2 sudo[57788]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:10 compute-2 sudo[57913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaizwlmerzwsqggrajwjvxwgprqhlnlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399909.3671157-728-111129835848864/AnsiballZ_copy.py'
Nov 29 07:05:10 compute-2 sudo[57913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:10 compute-2 python3.9[57915]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399909.3671157-728-111129835848864/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:10 compute-2 sudo[57913]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:12 compute-2 sudo[58067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzonnozsxzhwewoyaakzwxcdmefkirhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399911.817535-791-134795784403577/AnsiballZ_lineinfile.py'
Nov 29 07:05:12 compute-2 sudo[58067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:12 compute-2 python3.9[58069]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:12 compute-2 sudo[58067]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:14 compute-2 sudo[58221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdmwygsyxcdxgltervzwxaylfgdpfjuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399913.6772816-835-30329098351017/AnsiballZ_setup.py'
Nov 29 07:05:14 compute-2 sudo[58221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:14 compute-2 python3.9[58223]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:05:14 compute-2 sudo[58221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:15 compute-2 sudo[58305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfdgqxwfzxxwxdqhvswoirbjvrxxrgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399913.6772816-835-30329098351017/AnsiballZ_systemd.py'
Nov 29 07:05:15 compute-2 sudo[58305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:15 compute-2 python3.9[58307]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:15 compute-2 sudo[58305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:16 compute-2 sudo[58459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufemrbwyxgehdqatcewxqawkhphppnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399916.4256358-883-188928959579980/AnsiballZ_setup.py'
Nov 29 07:05:16 compute-2 sudo[58459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:17 compute-2 python3.9[58461]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:05:17 compute-2 sudo[58459]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:17 compute-2 sudo[58543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpespwujunsjtbeendnmknjcgdfeqydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399916.4256358-883-188928959579980/AnsiballZ_systemd.py'
Nov 29 07:05:17 compute-2 sudo[58543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:17 compute-2 python3.9[58545]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:05:17 compute-2 chronyd[798]: chronyd exiting
Nov 29 07:05:17 compute-2 systemd[1]: Stopping NTP client/server...
Nov 29 07:05:17 compute-2 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 07:05:17 compute-2 systemd[1]: Stopped NTP client/server.
Nov 29 07:05:17 compute-2 systemd[1]: Starting NTP client/server...
Nov 29 07:05:18 compute-2 chronyd[58554]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 07:05:18 compute-2 chronyd[58554]: Frequency -31.764 +/- 0.121 ppm read from /var/lib/chrony/drift
Nov 29 07:05:18 compute-2 chronyd[58554]: Loaded seccomp filter (level 2)
Nov 29 07:05:18 compute-2 systemd[1]: Started NTP client/server.
Nov 29 07:05:18 compute-2 sudo[58543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:18 compute-2 sshd-session[53604]: Connection closed by 192.168.122.30 port 39768
Nov 29 07:05:18 compute-2 sshd-session[53601]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:05:19 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 07:05:19 compute-2 systemd[1]: session-13.scope: Consumed 27.438s CPU time.
Nov 29 07:05:19 compute-2 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Nov 29 07:05:19 compute-2 systemd-logind[787]: Removed session 13.
Nov 29 07:05:24 compute-2 sshd-session[58580]: Accepted publickey for zuul from 192.168.122.30 port 54918 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:05:24 compute-2 systemd-logind[787]: New session 14 of user zuul.
Nov 29 07:05:24 compute-2 systemd[1]: Started Session 14 of User zuul.
Nov 29 07:05:24 compute-2 sshd-session[58580]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:05:25 compute-2 sudo[58733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckmkctcokjjdeaaudqboywzkglvmbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399924.5527089-33-72011578749547/AnsiballZ_file.py'
Nov 29 07:05:25 compute-2 sudo[58733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:25 compute-2 python3.9[58735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:25 compute-2 sudo[58733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:25 compute-2 sudo[58885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsbmaxwkeobfcodweymidqnpoocnvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399925.4722698-70-257267909824566/AnsiballZ_stat.py'
Nov 29 07:05:25 compute-2 sudo[58885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:26 compute-2 python3.9[58887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:26 compute-2 sudo[58885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:26 compute-2 sudo[59008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlygbsfuqmdntihcakyrccjospmrewaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399925.4722698-70-257267909824566/AnsiballZ_copy.py'
Nov 29 07:05:26 compute-2 sudo[59008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:26 compute-2 python3.9[59010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399925.4722698-70-257267909824566/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:26 compute-2 sudo[59008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:27 compute-2 sshd-session[58583]: Connection closed by 192.168.122.30 port 54918
Nov 29 07:05:27 compute-2 sshd-session[58580]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:05:27 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 07:05:27 compute-2 systemd[1]: session-14.scope: Consumed 1.621s CPU time.
Nov 29 07:05:27 compute-2 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Nov 29 07:05:27 compute-2 systemd-logind[787]: Removed session 14.
Nov 29 07:05:32 compute-2 sshd-session[59036]: Accepted publickey for zuul from 192.168.122.30 port 35658 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:05:32 compute-2 systemd-logind[787]: New session 15 of user zuul.
Nov 29 07:05:32 compute-2 systemd[1]: Started Session 15 of User zuul.
Nov 29 07:05:32 compute-2 sshd-session[59036]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:05:33 compute-2 python3.9[59189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:05:34 compute-2 sudo[59343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkebumhnxekjhokwbtsoksagnhzfjvms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399933.801154-66-143750852986761/AnsiballZ_file.py'
Nov 29 07:05:34 compute-2 sudo[59343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:34 compute-2 python3.9[59345]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:34 compute-2 sudo[59343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:35 compute-2 sudo[59518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avkodokuxwgxcjnxxrtqlheouwyeavpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399934.840128-90-173474245810859/AnsiballZ_stat.py'
Nov 29 07:05:35 compute-2 sudo[59518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:35 compute-2 python3.9[59520]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:35 compute-2 sudo[59518]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:36 compute-2 sudo[59641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbkqaopvnmouwlmsftkvktddqrglbiwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399934.840128-90-173474245810859/AnsiballZ_copy.py'
Nov 29 07:05:36 compute-2 sudo[59641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:36 compute-2 python3.9[59643]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764399934.840128-90-173474245810859/.source.json _original_basename=.3s4wztpk follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:36 compute-2 sudo[59641]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:37 compute-2 sudo[59793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhdokxjorauvkilrcgsquynvnqnspvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399937.358746-160-263650151426191/AnsiballZ_stat.py'
Nov 29 07:05:37 compute-2 sudo[59793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:37 compute-2 python3.9[59795]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:37 compute-2 sudo[59793]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:38 compute-2 sudo[59916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtsdewcislkjsbogearzpnhahhrrkdpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399937.358746-160-263650151426191/AnsiballZ_copy.py'
Nov 29 07:05:38 compute-2 sudo[59916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:38 compute-2 python3.9[59918]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399937.358746-160-263650151426191/.source _original_basename=.yzcxaer6 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:38 compute-2 sudo[59916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:39 compute-2 sudo[60068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwmmnajcqoskrqiittoadashtdeiolgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399938.8473625-207-52028905252545/AnsiballZ_file.py'
Nov 29 07:05:39 compute-2 sudo[60068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:39 compute-2 python3.9[60070]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:05:39 compute-2 sudo[60068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:39 compute-2 sudo[60220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wseatlnmqwzheguhthgagayeyxfjxhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399939.5710742-231-148962451432768/AnsiballZ_stat.py'
Nov 29 07:05:39 compute-2 sudo[60220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:40 compute-2 python3.9[60222]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:40 compute-2 sudo[60220]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:40 compute-2 sudo[60343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skuamlqzoofilgbrfljylvokxcrsbbbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399939.5710742-231-148962451432768/AnsiballZ_copy.py'
Nov 29 07:05:40 compute-2 sudo[60343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:40 compute-2 python3.9[60345]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399939.5710742-231-148962451432768/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:05:40 compute-2 sudo[60343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:41 compute-2 sudo[60495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihyrgkkfsmepqksdmqlfueyldykiyhwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399940.836831-231-224728148235723/AnsiballZ_stat.py'
Nov 29 07:05:41 compute-2 sudo[60495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:41 compute-2 python3.9[60497]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:41 compute-2 sudo[60495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:41 compute-2 sudo[60618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnoquuhmifdybyvzwovblodbqyxresai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399940.836831-231-224728148235723/AnsiballZ_copy.py'
Nov 29 07:05:41 compute-2 sudo[60618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:41 compute-2 python3.9[60620]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399940.836831-231-224728148235723/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:05:41 compute-2 sudo[60618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:42 compute-2 sudo[60770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugizbpkdutvwhkxmrvdegyocxdfqjsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399942.1359556-319-198123665014031/AnsiballZ_file.py'
Nov 29 07:05:42 compute-2 sudo[60770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:42 compute-2 python3.9[60772]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:42 compute-2 sudo[60770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:43 compute-2 sudo[60922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrbndtyoytkngubmxvogrukopbjvzqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399942.8602471-343-137494616163451/AnsiballZ_stat.py'
Nov 29 07:05:43 compute-2 sudo[60922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:43 compute-2 python3.9[60924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:43 compute-2 sudo[60922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:43 compute-2 sudo[61045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwaaqxwsovytpcqmqvdahssxvgexxdey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399942.8602471-343-137494616163451/AnsiballZ_copy.py'
Nov 29 07:05:43 compute-2 sudo[61045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:43 compute-2 python3.9[61047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399942.8602471-343-137494616163451/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:43 compute-2 sudo[61045]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:44 compute-2 sudo[61197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgnslggnwortynipclcnndaydeztmazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399944.1932023-388-82668987683455/AnsiballZ_stat.py'
Nov 29 07:05:44 compute-2 sudo[61197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:44 compute-2 python3.9[61199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:44 compute-2 sudo[61197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:45 compute-2 sudo[61320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifydnuowqsbukqlwpznbrfttqitanah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399944.1932023-388-82668987683455/AnsiballZ_copy.py'
Nov 29 07:05:45 compute-2 sudo[61320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:45 compute-2 python3.9[61322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399944.1932023-388-82668987683455/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:45 compute-2 sudo[61320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:46 compute-2 sudo[61472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfhjoaexjrneobhpmfvgujoxlwesjbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399945.5022285-433-163271444465723/AnsiballZ_systemd.py'
Nov 29 07:05:46 compute-2 sudo[61472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:46 compute-2 python3.9[61474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:46 compute-2 systemd[1]: Reloading.
Nov 29 07:05:46 compute-2 systemd-sysv-generator[61497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:05:46 compute-2 systemd-rc-local-generator[61494]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:46 compute-2 systemd[1]: Reloading.
Nov 29 07:05:46 compute-2 systemd-sysv-generator[61544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:05:46 compute-2 systemd-rc-local-generator[61540]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:47 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 07:05:47 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 07:05:47 compute-2 sudo[61472]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:48 compute-2 sudo[61699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqlrzhggaengfpwicngtkzmegrsinmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399947.831352-457-182053074204683/AnsiballZ_stat.py'
Nov 29 07:05:48 compute-2 sudo[61699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:48 compute-2 python3.9[61701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:48 compute-2 sudo[61699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:48 compute-2 sudo[61822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylgkzndnkumnhizjngmuxrxtbhpbwkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399947.831352-457-182053074204683/AnsiballZ_copy.py'
Nov 29 07:05:48 compute-2 sudo[61822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:48 compute-2 python3.9[61824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399947.831352-457-182053074204683/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:48 compute-2 sudo[61822]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:49 compute-2 sudo[61974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkgtzndvykuscibeubmawdkpxnladgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399949.2912712-502-137041615907864/AnsiballZ_stat.py'
Nov 29 07:05:49 compute-2 sudo[61974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:49 compute-2 python3.9[61976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:05:49 compute-2 sudo[61974]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:50 compute-2 sudo[62097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksiogwvodrgerqcbtldbebfzkchbief ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399949.2912712-502-137041615907864/AnsiballZ_copy.py'
Nov 29 07:05:50 compute-2 sudo[62097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:50 compute-2 python3.9[62099]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399949.2912712-502-137041615907864/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:05:50 compute-2 sudo[62097]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:50 compute-2 sudo[62249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzpjdvrypbjmjjrhgpztjfylletlcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399950.5278425-547-179059869456376/AnsiballZ_systemd.py'
Nov 29 07:05:50 compute-2 sudo[62249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:51 compute-2 python3.9[62251]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:51 compute-2 systemd[1]: Reloading.
Nov 29 07:05:51 compute-2 systemd-rc-local-generator[62279]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:51 compute-2 systemd-sysv-generator[62282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:05:51 compute-2 systemd[1]: Reloading.
Nov 29 07:05:51 compute-2 systemd-sysv-generator[62319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:05:51 compute-2 systemd-rc-local-generator[62311]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:51 compute-2 systemd[1]: Starting Create netns directory...
Nov 29 07:05:51 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:05:51 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:05:51 compute-2 systemd[1]: Finished Create netns directory.
Nov 29 07:05:51 compute-2 sudo[62249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:52 compute-2 python3.9[62477]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:05:52 compute-2 network[62494]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:05:52 compute-2 network[62495]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:05:52 compute-2 network[62496]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:05:57 compute-2 sudo[62756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tofpmfshfjgtlldurkhocnhaxihafrkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399957.0293505-594-127737257204999/AnsiballZ_systemd.py'
Nov 29 07:05:57 compute-2 sudo[62756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:57 compute-2 python3.9[62758]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:57 compute-2 systemd[1]: Reloading.
Nov 29 07:05:57 compute-2 systemd-rc-local-generator[62787]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:57 compute-2 systemd-sysv-generator[62791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:05:57 compute-2 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 07:05:58 compute-2 iptables.init[62798]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 07:05:58 compute-2 iptables.init[62798]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 07:05:58 compute-2 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 07:05:58 compute-2 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 07:05:58 compute-2 sudo[62756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:58 compute-2 sudo[62994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-radbncshppucqnccsufebjtdqinzxfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399958.5175612-594-254275200440637/AnsiballZ_systemd.py'
Nov 29 07:05:58 compute-2 sudo[62994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:59 compute-2 python3.9[62996]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:59 compute-2 sudo[62994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:59 compute-2 sudo[63148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uajbvgwavvjsrpczpnlvbedqhivgwoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399959.6091166-642-229851586058902/AnsiballZ_systemd.py'
Nov 29 07:05:59 compute-2 sudo[63148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:00 compute-2 python3.9[63150]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:06:00 compute-2 systemd[1]: Reloading.
Nov 29 07:06:00 compute-2 systemd-rc-local-generator[63180]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:06:00 compute-2 systemd-sysv-generator[63184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:06:00 compute-2 systemd[1]: Starting Netfilter Tables...
Nov 29 07:06:00 compute-2 systemd[1]: Finished Netfilter Tables.
Nov 29 07:06:00 compute-2 sudo[63148]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:01 compute-2 sudo[63340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbrwjllhbuwesikradfrfgvcghqxsdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399960.94409-666-184357243477352/AnsiballZ_command.py'
Nov 29 07:06:01 compute-2 sudo[63340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:01 compute-2 python3.9[63342]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:01 compute-2 sudo[63340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:02 compute-2 sudo[63493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoeyfzcctgpeapecwdhashqjktlnvgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399962.2965634-709-5148119218898/AnsiballZ_stat.py'
Nov 29 07:06:02 compute-2 sudo[63493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:02 compute-2 python3.9[63495]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:02 compute-2 sudo[63493]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:03 compute-2 sudo[63618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmzoomjpxvxfpkwvnnjwmlmptbaegwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399962.2965634-709-5148119218898/AnsiballZ_copy.py'
Nov 29 07:06:03 compute-2 sudo[63618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:03 compute-2 python3.9[63620]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399962.2965634-709-5148119218898/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:03 compute-2 sudo[63618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:04 compute-2 sudo[63771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodmgwwyemmbhmjsdpmtcmkeyhjlwoxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399963.6854382-754-80450739506337/AnsiballZ_systemd.py'
Nov 29 07:06:04 compute-2 sudo[63771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:04 compute-2 python3.9[63773]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:06:04 compute-2 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 07:06:04 compute-2 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 07:06:04 compute-2 sshd[1009]: Received SIGHUP; restarting.
Nov 29 07:06:04 compute-2 sshd[1009]: Server listening on 0.0.0.0 port 22.
Nov 29 07:06:04 compute-2 sshd[1009]: Server listening on :: port 22.
Nov 29 07:06:04 compute-2 sudo[63771]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:04 compute-2 sudo[63927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-derosnramymrswmvddqzzxvbfykwsvqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399964.6128154-777-95988456718896/AnsiballZ_file.py'
Nov 29 07:06:04 compute-2 sudo[63927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:05 compute-2 python3.9[63929]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:05 compute-2 sudo[63927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:05 compute-2 sudo[64079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgabojpdqilyqtufctkuhkamtczmuij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399965.381646-801-97604734154884/AnsiballZ_stat.py'
Nov 29 07:06:05 compute-2 sudo[64079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:05 compute-2 python3.9[64081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:06 compute-2 sudo[64079]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:06 compute-2 sudo[64202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgzronuvyaklqhpyxusqmlztdktjbgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399965.381646-801-97604734154884/AnsiballZ_copy.py'
Nov 29 07:06:06 compute-2 sudo[64202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:06 compute-2 python3.9[64204]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399965.381646-801-97604734154884/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:06 compute-2 sudo[64202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:07 compute-2 sudo[64354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdycwhdpcnvfzeofhrsotuzllwsrmyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399967.154315-856-19280889880000/AnsiballZ_timezone.py'
Nov 29 07:06:07 compute-2 sudo[64354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:07 compute-2 python3.9[64356]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:06:07 compute-2 systemd[1]: Starting Time & Date Service...
Nov 29 07:06:07 compute-2 systemd[1]: Started Time & Date Service.
Nov 29 07:06:07 compute-2 sudo[64354]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:08 compute-2 sudo[64510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtckwhixyoobboyalkisjrwexyhisyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399968.1645987-883-199898009914821/AnsiballZ_file.py'
Nov 29 07:06:08 compute-2 sudo[64510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:08 compute-2 python3.9[64512]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:08 compute-2 sudo[64510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:09 compute-2 sudo[64662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayimposjsevqkzzjomctztidtzsvxnkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399968.8666277-907-52099540471133/AnsiballZ_stat.py'
Nov 29 07:06:09 compute-2 sudo[64662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:09 compute-2 python3.9[64664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:09 compute-2 sudo[64662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:09 compute-2 sudo[64785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznxftggqdhdgzodxzaxmcmlwxyhvmra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399968.8666277-907-52099540471133/AnsiballZ_copy.py'
Nov 29 07:06:09 compute-2 sudo[64785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:10 compute-2 python3.9[64787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399968.8666277-907-52099540471133/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:10 compute-2 sudo[64785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:10 compute-2 sudo[64937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-togoiahcqlhdeqggetgebodurybcatsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399970.3540804-952-90018270633259/AnsiballZ_stat.py'
Nov 29 07:06:10 compute-2 sudo[64937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:10 compute-2 python3.9[64939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:10 compute-2 sudo[64937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:11 compute-2 sudo[65060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxjzyzdgenokkltfjceunpzpsgdbgwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399970.3540804-952-90018270633259/AnsiballZ_copy.py'
Nov 29 07:06:11 compute-2 sudo[65060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:11 compute-2 python3.9[65062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399970.3540804-952-90018270633259/.source.yaml _original_basename=.jz7i4ppc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:11 compute-2 sudo[65060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:12 compute-2 sudo[65212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olugrvivxqmbascoklrugmqtehjwffzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399971.8029208-997-238846541360676/AnsiballZ_stat.py'
Nov 29 07:06:12 compute-2 sudo[65212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:12 compute-2 python3.9[65214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:12 compute-2 sudo[65212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:12 compute-2 sudo[65335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvqlsfktkhbkkjadwaxthpwgbmssonua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399971.8029208-997-238846541360676/AnsiballZ_copy.py'
Nov 29 07:06:12 compute-2 sudo[65335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:12 compute-2 python3.9[65337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399971.8029208-997-238846541360676/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:12 compute-2 sudo[65335]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:13 compute-2 sudo[65487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtzogftywgpgczynnykyljrruwmqqznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399973.1756358-1041-162424648249600/AnsiballZ_command.py'
Nov 29 07:06:13 compute-2 sudo[65487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:13 compute-2 python3.9[65489]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:13 compute-2 sudo[65487]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:14 compute-2 sudo[65640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwwrltpuqzbegwpeunjtmplkahojnwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399973.916373-1065-101380299835468/AnsiballZ_command.py'
Nov 29 07:06:14 compute-2 sudo[65640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:14 compute-2 python3.9[65642]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:14 compute-2 sudo[65640]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:15 compute-2 sudo[65793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwslhukhvpmpcaadohhqsdwjpoowlwpx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764399974.6088393-1090-151290418921205/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:06:15 compute-2 sudo[65793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:15 compute-2 python3[65795]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:06:15 compute-2 sudo[65793]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:15 compute-2 sudo[65945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcmvpbmyiovupidihtjwuicqfuzixpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399975.5292196-1113-224489584785560/AnsiballZ_stat.py'
Nov 29 07:06:15 compute-2 sudo[65945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:16 compute-2 python3.9[65947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:16 compute-2 sudo[65945]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:16 compute-2 sudo[66068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqrjlircimquvztthnxqquzugvgfdpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399975.5292196-1113-224489584785560/AnsiballZ_copy.py'
Nov 29 07:06:16 compute-2 sudo[66068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:16 compute-2 python3.9[66070]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399975.5292196-1113-224489584785560/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:16 compute-2 sudo[66068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:17 compute-2 sudo[66220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viteylrqnzsqzseyngscundspxqbkvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399976.9450555-1158-3751143900305/AnsiballZ_stat.py'
Nov 29 07:06:17 compute-2 sudo[66220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:17 compute-2 python3.9[66222]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:17 compute-2 sudo[66220]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:18 compute-2 sudo[66343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toatwrcdvbueknnnluuvokaovtlqmoom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399976.9450555-1158-3751143900305/AnsiballZ_copy.py'
Nov 29 07:06:18 compute-2 sudo[66343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:18 compute-2 python3.9[66345]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399976.9450555-1158-3751143900305/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:18 compute-2 sudo[66343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:19 compute-2 sudo[66495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdlbdwuserhubrtadgsdnrvxqkykxecc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399978.7292538-1204-148979712844045/AnsiballZ_stat.py'
Nov 29 07:06:19 compute-2 sudo[66495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:19 compute-2 python3.9[66497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:19 compute-2 sudo[66495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:19 compute-2 sudo[66618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmahpwxqdygzcjqcdzzoogogvfblfwvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399978.7292538-1204-148979712844045/AnsiballZ_copy.py'
Nov 29 07:06:19 compute-2 sudo[66618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:19 compute-2 python3.9[66620]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399978.7292538-1204-148979712844045/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:19 compute-2 sudo[66618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:20 compute-2 sudo[66770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzaapyfjkoolzjqkmaaomkeprnqtplnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399980.0321794-1249-248677897474179/AnsiballZ_stat.py'
Nov 29 07:06:20 compute-2 sudo[66770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:20 compute-2 python3.9[66772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:20 compute-2 sudo[66770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:20 compute-2 sudo[66893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgymedbbmyqlyiirdtrsnqxdkwiuvcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399980.0321794-1249-248677897474179/AnsiballZ_copy.py'
Nov 29 07:06:20 compute-2 sudo[66893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:21 compute-2 python3.9[66895]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399980.0321794-1249-248677897474179/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:21 compute-2 sudo[66893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:21 compute-2 sudo[67045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjbtjmxbzzzhkyeychlmtwfichjrync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399981.3109298-1294-127720046355666/AnsiballZ_stat.py'
Nov 29 07:06:21 compute-2 sudo[67045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:21 compute-2 python3.9[67047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:21 compute-2 sudo[67045]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:22 compute-2 sudo[67168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbuxxrzztcohrfigdybtofdfuyuxzxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399981.3109298-1294-127720046355666/AnsiballZ_copy.py'
Nov 29 07:06:22 compute-2 sudo[67168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:22 compute-2 python3.9[67170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399981.3109298-1294-127720046355666/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:22 compute-2 sudo[67168]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:22 compute-2 sudo[67320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iedrwptluiguxhbusrigvnspplgixuas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399982.6535966-1338-238184432403620/AnsiballZ_file.py'
Nov 29 07:06:22 compute-2 sudo[67320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:23 compute-2 python3.9[67322]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:23 compute-2 sudo[67320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:23 compute-2 sudo[67472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynwmgkbpkqkyjuyhpetlqfhzhwnfzxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399983.398752-1363-50645323518984/AnsiballZ_command.py'
Nov 29 07:06:23 compute-2 sudo[67472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:23 compute-2 python3.9[67474]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:23 compute-2 sudo[67472]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:24 compute-2 sudo[67631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmasnjifxyqzwrufktelshunhiyubfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399984.1983113-1387-150132756161388/AnsiballZ_blockinfile.py'
Nov 29 07:06:24 compute-2 sudo[67631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:24 compute-2 python3.9[67633]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:24 compute-2 sudo[67631]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:25 compute-2 sudo[67784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlarhhsjckqlwyfssdfsidghxlhvoltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399985.2159698-1414-17420872923664/AnsiballZ_file.py'
Nov 29 07:06:25 compute-2 sudo[67784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:25 compute-2 python3.9[67786]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:25 compute-2 sudo[67784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:26 compute-2 sudo[67936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnzrrasxeyaowzmvrvueajuqckudxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399986.0608356-1414-141305919738009/AnsiballZ_file.py'
Nov 29 07:06:26 compute-2 sudo[67936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:26 compute-2 python3.9[67938]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:26 compute-2 sudo[67936]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:27 compute-2 sudo[68088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peodncokougrjcketqkcasxqtgcpamoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399986.8477764-1459-64155514164418/AnsiballZ_mount.py'
Nov 29 07:06:27 compute-2 sudo[68088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:27 compute-2 python3.9[68090]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:06:27 compute-2 sudo[68088]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:27 compute-2 sudo[68241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygadefvpdhunotdlicmdnlzsmhipwqcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399987.6989257-1459-117045066663311/AnsiballZ_mount.py'
Nov 29 07:06:27 compute-2 sudo[68241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:28 compute-2 python3.9[68243]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:06:28 compute-2 sudo[68241]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:28 compute-2 sshd-session[59039]: Connection closed by 192.168.122.30 port 35658
Nov 29 07:06:28 compute-2 sshd-session[59036]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:06:28 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 07:06:28 compute-2 systemd[1]: session-15.scope: Consumed 37.536s CPU time.
Nov 29 07:06:28 compute-2 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Nov 29 07:06:28 compute-2 systemd-logind[787]: Removed session 15.
Nov 29 07:06:33 compute-2 sshd-session[68270]: Accepted publickey for zuul from 192.168.122.30 port 42052 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:06:34 compute-2 systemd-logind[787]: New session 16 of user zuul.
Nov 29 07:06:34 compute-2 systemd[1]: Started Session 16 of User zuul.
Nov 29 07:06:34 compute-2 sshd-session[68270]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:06:34 compute-2 sudo[68423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvttyklrqwaohgdtfdgdjjfzusiaogdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399994.111992-25-128797211986221/AnsiballZ_tempfile.py'
Nov 29 07:06:34 compute-2 sudo[68423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:34 compute-2 python3.9[68425]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:06:34 compute-2 sudo[68423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:35 compute-2 sudo[68575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkkwjnrjrrqcsepezrfqzwgqzdvlwlxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399994.954776-62-61416012216309/AnsiballZ_stat.py'
Nov 29 07:06:35 compute-2 sudo[68575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:35 compute-2 python3.9[68577]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:06:35 compute-2 sudo[68575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:36 compute-2 sudo[68727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strioulnaavbxomcqiwoerdwyocgdsda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399995.8654346-91-96948903773189/AnsiballZ_setup.py'
Nov 29 07:06:36 compute-2 sudo[68727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:36 compute-2 python3.9[68729]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:06:36 compute-2 sudo[68727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:37 compute-2 sudo[68879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouzyfntfbpjanpyebdficgwvswzcuyui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399996.9955573-116-114956642111900/AnsiballZ_blockinfile.py'
Nov 29 07:06:37 compute-2 sudo[68879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:37 compute-2 python3.9[68881]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=
                                             create=True mode=0644 path=/tmp/ansible.qcqwospz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:37 compute-2 sudo[68879]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:37 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:06:39 compute-2 sudo[69033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmvryheqaaoeyqituvmfrhjadfelwkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399998.7721295-141-198619433272533/AnsiballZ_command.py'
Nov 29 07:06:39 compute-2 sudo[69033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:39 compute-2 python3.9[69035]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.qcqwospz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:39 compute-2 sudo[69033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:40 compute-2 sudo[69187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwcfnizliaeeweafccvoozjbmreqyny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399999.6560242-164-165437691762918/AnsiballZ_file.py'
Nov 29 07:06:40 compute-2 sudo[69187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:40 compute-2 python3.9[69189]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.qcqwospz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:40 compute-2 sudo[69187]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:40 compute-2 sshd-session[68273]: Connection closed by 192.168.122.30 port 42052
Nov 29 07:06:40 compute-2 sshd-session[68270]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:06:40 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 07:06:40 compute-2 systemd[1]: session-16.scope: Consumed 3.477s CPU time.
Nov 29 07:06:40 compute-2 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Nov 29 07:06:40 compute-2 systemd-logind[787]: Removed session 16.
Nov 29 07:06:46 compute-2 sshd-session[69214]: Accepted publickey for zuul from 192.168.122.30 port 46748 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:06:46 compute-2 systemd-logind[787]: New session 17 of user zuul.
Nov 29 07:06:46 compute-2 systemd[1]: Started Session 17 of User zuul.
Nov 29 07:06:46 compute-2 sshd-session[69214]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:06:47 compute-2 python3.9[69367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:06:48 compute-2 sudo[69521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejdeysuttpmglwblhzgpubcbiwctxocb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400007.5897198-64-229916870999838/AnsiballZ_systemd.py'
Nov 29 07:06:48 compute-2 sudo[69521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:48 compute-2 python3.9[69523]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:06:48 compute-2 sudo[69521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:49 compute-2 sudo[69675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exjeyijquultzjntstpcyvmxrmvxygjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400009.1639638-87-32036554868427/AnsiballZ_systemd.py'
Nov 29 07:06:49 compute-2 sudo[69675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:49 compute-2 python3.9[69677]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:06:49 compute-2 sudo[69675]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:50 compute-2 sudo[69828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elyytlimwperwvchmsprdgrgvbcqmvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400010.2062807-114-239648414762737/AnsiballZ_command.py'
Nov 29 07:06:50 compute-2 sudo[69828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:50 compute-2 python3.9[69830]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:50 compute-2 sudo[69828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:51 compute-2 sudo[69981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmlhpsljksgtyrdfpmsoqmoxayhzadra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400011.0585198-138-108822767753072/AnsiballZ_stat.py'
Nov 29 07:06:51 compute-2 sudo[69981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:51 compute-2 python3.9[69983]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:06:51 compute-2 sudo[69981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:52 compute-2 sudo[70135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnkqwyhrwknsumpnjiiunhhruwcyack ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400011.980697-162-226548217420787/AnsiballZ_command.py'
Nov 29 07:06:52 compute-2 sudo[70135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:52 compute-2 python3.9[70137]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:52 compute-2 sudo[70135]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:53 compute-2 sudo[70290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhnbxbbpoioxpupvitzgrajwcigfscrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400012.6835642-186-276483611433390/AnsiballZ_file.py'
Nov 29 07:06:53 compute-2 sudo[70290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:53 compute-2 python3.9[70292]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:53 compute-2 sudo[70290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:53 compute-2 sshd-session[69217]: Connection closed by 192.168.122.30 port 46748
Nov 29 07:06:53 compute-2 sshd-session[69214]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:06:53 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 07:06:53 compute-2 systemd[1]: session-17.scope: Consumed 4.474s CPU time.
Nov 29 07:06:53 compute-2 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Nov 29 07:06:53 compute-2 systemd-logind[787]: Removed session 17.
Nov 29 07:06:59 compute-2 sshd-session[70317]: Accepted publickey for zuul from 192.168.122.30 port 57352 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:06:59 compute-2 systemd-logind[787]: New session 18 of user zuul.
Nov 29 07:06:59 compute-2 systemd[1]: Started Session 18 of User zuul.
Nov 29 07:06:59 compute-2 sshd-session[70317]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:07:00 compute-2 python3.9[70470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:00 compute-2 sudo[70624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egcsqercaoohouiaqeozebtfahyuzvfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400020.5659208-69-143031749822688/AnsiballZ_setup.py'
Nov 29 07:07:00 compute-2 sudo[70624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:01 compute-2 python3.9[70626]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:07:01 compute-2 sudo[70624]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:01 compute-2 sudo[70708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtxbxfhkbjjxpwdwewdrygvwblrgcjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400020.5659208-69-143031749822688/AnsiballZ_dnf.py'
Nov 29 07:07:01 compute-2 sudo[70708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:02 compute-2 python3.9[70710]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:07:03 compute-2 sudo[70708]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:04 compute-2 python3.9[70861]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:05 compute-2 python3.9[71012]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:07:06 compute-2 python3.9[71162]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:07:06 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:07:07 compute-2 python3.9[71313]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:07:07 compute-2 sshd-session[70320]: Connection closed by 192.168.122.30 port 57352
Nov 29 07:07:07 compute-2 sshd-session[70317]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:07:07 compute-2 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Nov 29 07:07:07 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 07:07:07 compute-2 systemd[1]: session-18.scope: Consumed 6.153s CPU time.
Nov 29 07:07:07 compute-2 systemd-logind[787]: Removed session 18.
Nov 29 07:07:16 compute-2 sshd-session[71338]: Accepted publickey for zuul from 38.102.83.151 port 35832 ssh2: RSA SHA256:y/fB5T9OaGjexql/wO0rE+Q6EPqD30vQjURPm/tNNEg
Nov 29 07:07:16 compute-2 systemd-logind[787]: New session 19 of user zuul.
Nov 29 07:07:16 compute-2 systemd[1]: Started Session 19 of User zuul.
Nov 29 07:07:16 compute-2 sshd-session[71338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:07:16 compute-2 sudo[71414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrdhaekqheltlbvdukxkaoylccqlqklz ; /usr/bin/python3'
Nov 29 07:07:16 compute-2 sudo[71414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:16 compute-2 useradd[71418]: new group: name=ceph-admin, GID=42478
Nov 29 07:07:16 compute-2 useradd[71418]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 29 07:07:16 compute-2 sudo[71414]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:18 compute-2 sudo[71500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjtzciugqscxynsgxmxaxxbqigwfnev ; /usr/bin/python3'
Nov 29 07:07:18 compute-2 sudo[71500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:18 compute-2 sudo[71500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:18 compute-2 sudo[71573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvantmqgcrymkqqastjytdndctaqzqw ; /usr/bin/python3'
Nov 29 07:07:18 compute-2 sudo[71573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:18 compute-2 sudo[71573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:19 compute-2 sudo[71623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaldzwdfyuxrfvxklsgcvycydtgynesd ; /usr/bin/python3'
Nov 29 07:07:19 compute-2 sudo[71623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:19 compute-2 sudo[71623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:19 compute-2 sudo[71649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tklscpablgtpscvnqlgcnbxpuvksnbyt ; /usr/bin/python3'
Nov 29 07:07:19 compute-2 sudo[71649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:19 compute-2 sudo[71649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:20 compute-2 sudo[71675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkhviuecuxtfgnilqcbglfcxkzpaybs ; /usr/bin/python3'
Nov 29 07:07:20 compute-2 sudo[71675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:20 compute-2 sudo[71675]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:20 compute-2 sudo[71701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnbfpaxdqxsbeazeobmxefxiqtcvncwi ; /usr/bin/python3'
Nov 29 07:07:20 compute-2 sudo[71701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:20 compute-2 sudo[71701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:21 compute-2 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rumpxxegvybiebfpnrjzcevtcumiqjld ; /usr/bin/python3'
Nov 29 07:07:21 compute-2 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:21 compute-2 sudo[71779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:21 compute-2 sudo[71852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karrgmyfyakfmdpeuuqkshheaoitpssg ; /usr/bin/python3'
Nov 29 07:07:21 compute-2 sudo[71852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:21 compute-2 sudo[71852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:22 compute-2 sudo[71954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqldxhrrjafmvauwkdtnayxrweeqpqi ; /usr/bin/python3'
Nov 29 07:07:22 compute-2 sudo[71954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:22 compute-2 sudo[71954]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:22 compute-2 sudo[72027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnahxwqndhuqdjghxstzwyzcnpwlntuf ; /usr/bin/python3'
Nov 29 07:07:22 compute-2 sudo[72027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:22 compute-2 sudo[72027]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:23 compute-2 sudo[72077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpgpsrauxcmnvvpbzyevsidbehvbemk ; /usr/bin/python3'
Nov 29 07:07:23 compute-2 sudo[72077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:23 compute-2 python3[72079]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:24 compute-2 sudo[72077]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:25 compute-2 sudo[72172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuqivlenlgygrjyzjxipbtqedpbaphfs ; /usr/bin/python3'
Nov 29 07:07:25 compute-2 sudo[72172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:25 compute-2 python3[72174]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:07:26 compute-2 sudo[72172]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:26 compute-2 chronyd[58554]: Selected source 216.232.132.102 (pool.ntp.org)
Nov 29 07:07:27 compute-2 sudo[72199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbejcxufybjhkxalkoecjdydyoezfhb ; /usr/bin/python3'
Nov 29 07:07:27 compute-2 sudo[72199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:27 compute-2 python3[72201]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:07:27 compute-2 sudo[72199]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:27 compute-2 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubumuhelxmzogpyzulkcbyyymsvodrgy ; /usr/bin/python3'
Nov 29 07:07:27 compute-2 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:27 compute-2 python3[72227]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:27 compute-2 kernel: loop: module loaded
Nov 29 07:07:27 compute-2 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 07:07:27 compute-2 sudo[72225]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:27 compute-2 sudo[72260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjxlhpklwbpprewwlkprnnwwcyusykdu ; /usr/bin/python3'
Nov 29 07:07:27 compute-2 sudo[72260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:28 compute-2 python3[72262]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:28 compute-2 lvm[72265]: PV /dev/loop3 not used.
Nov 29 07:07:28 compute-2 lvm[72267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:07:28 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 07:07:28 compute-2 lvm[72273]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 07:07:28 compute-2 lvm[72277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:07:28 compute-2 lvm[72277]: VG ceph_vg0 finished
Nov 29 07:07:28 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 07:07:28 compute-2 sudo[72260]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:28 compute-2 sudo[72353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovuserbeagnkfchfmkhpgpwctabgwdhq ; /usr/bin/python3'
Nov 29 07:07:28 compute-2 sudo[72353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:29 compute-2 python3[72355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:07:29 compute-2 sudo[72353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:29 compute-2 sudo[72426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmaatdcjntodqvaegkxlhecdddxejdgw ; /usr/bin/python3'
Nov 29 07:07:29 compute-2 sudo[72426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:29 compute-2 python3[72428]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400048.7365003-37020-270540329491072/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:29 compute-2 sudo[72426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:29 compute-2 sudo[72476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcsjiotjvrgjhtwrshaqbgejnfarjrvc ; /usr/bin/python3'
Nov 29 07:07:29 compute-2 sudo[72476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:30 compute-2 python3[72478]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:07:30 compute-2 systemd[1]: Reloading.
Nov 29 07:07:30 compute-2 systemd-rc-local-generator[72506]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:07:30 compute-2 systemd-sysv-generator[72511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:07:30 compute-2 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:07:30 compute-2 bash[72518]: /dev/loop3: [64513]:4327940 (/var/lib/ceph-osd-0.img)
Nov 29 07:07:30 compute-2 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:07:30 compute-2 lvm[72519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:07:30 compute-2 lvm[72519]: VG ceph_vg0 finished
Nov 29 07:07:30 compute-2 sudo[72476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:32 compute-2 python3[72543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:09:58 compute-2 sshd-session[72587]: Accepted publickey for ceph-admin from 192.168.122.100 port 59482 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:09:58 compute-2 systemd-logind[787]: New session 20 of user ceph-admin.
Nov 29 07:09:58 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 07:09:58 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 07:09:58 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 07:09:58 compute-2 systemd[1]: Starting User Manager for UID 42477...
Nov 29 07:09:58 compute-2 systemd[72591]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:58 compute-2 sshd-session[72604]: Accepted publickey for ceph-admin from 192.168.122.100 port 59496 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:09:58 compute-2 systemd[72591]: Queued start job for default target Main User Target.
Nov 29 07:09:58 compute-2 systemd-logind[787]: New session 22 of user ceph-admin.
Nov 29 07:09:58 compute-2 systemd[72591]: Created slice User Application Slice.
Nov 29 07:09:58 compute-2 systemd[72591]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:09:58 compute-2 systemd[72591]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:09:58 compute-2 systemd[72591]: Reached target Paths.
Nov 29 07:09:58 compute-2 systemd[72591]: Reached target Timers.
Nov 29 07:09:58 compute-2 systemd[72591]: Starting D-Bus User Message Bus Socket...
Nov 29 07:09:58 compute-2 systemd[72591]: Starting Create User's Volatile Files and Directories...
Nov 29 07:09:58 compute-2 systemd[72591]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:09:58 compute-2 systemd[72591]: Reached target Sockets.
Nov 29 07:09:58 compute-2 systemd[72591]: Finished Create User's Volatile Files and Directories.
Nov 29 07:09:58 compute-2 systemd[72591]: Reached target Basic System.
Nov 29 07:09:58 compute-2 systemd[72591]: Reached target Main User Target.
Nov 29 07:09:58 compute-2 systemd[72591]: Startup finished in 139ms.
Nov 29 07:09:58 compute-2 systemd[1]: Started User Manager for UID 42477.
Nov 29 07:09:58 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 07:09:58 compute-2 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 07:09:58 compute-2 sshd-session[72587]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:58 compute-2 sshd-session[72604]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:58 compute-2 sudo[72611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:09:58 compute-2 sudo[72611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:58 compute-2 sudo[72611]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:58 compute-2 sudo[72636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:09:58 compute-2 sudo[72636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:58 compute-2 sudo[72636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:58 compute-2 sshd-session[72661]: Accepted publickey for ceph-admin from 192.168.122.100 port 59508 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:09:58 compute-2 systemd-logind[787]: New session 23 of user ceph-admin.
Nov 29 07:09:58 compute-2 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 07:09:58 compute-2 sshd-session[72661]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:59 compute-2 sudo[72665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:09:59 compute-2 sudo[72665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72665]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:59 compute-2 sudo[72690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Nov 29 07:09:59 compute-2 sudo[72690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72690]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:59 compute-2 sshd-session[72715]: Accepted publickey for ceph-admin from 192.168.122.100 port 59516 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:09:59 compute-2 systemd-logind[787]: New session 24 of user ceph-admin.
Nov 29 07:09:59 compute-2 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 07:09:59 compute-2 sshd-session[72715]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:59 compute-2 sudo[72719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:09:59 compute-2 sudo[72719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:59 compute-2 sudo[72744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:09:59 compute-2 sudo[72744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:59 compute-2 sshd-session[72769]: Accepted publickey for ceph-admin from 192.168.122.100 port 59528 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:09:59 compute-2 systemd-logind[787]: New session 25 of user ceph-admin.
Nov 29 07:09:59 compute-2 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 07:09:59 compute-2 sshd-session[72769]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:09:59 compute-2 sudo[72773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:09:59 compute-2 sudo[72773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72773]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:59 compute-2 sudo[72798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:09:59 compute-2 sudo[72798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:09:59 compute-2 sudo[72798]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-2 sshd-session[72823]: Accepted publickey for ceph-admin from 192.168.122.100 port 59542 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:00 compute-2 systemd-logind[787]: New session 26 of user ceph-admin.
Nov 29 07:10:00 compute-2 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 07:10:00 compute-2 sshd-session[72823]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:00 compute-2 sudo[72827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:00 compute-2 sudo[72827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:00 compute-2 sudo[72827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-2 sudo[72852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:10:00 compute-2 sudo[72852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:00 compute-2 sudo[72852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-2 sshd-session[72877]: Accepted publickey for ceph-admin from 192.168.122.100 port 59544 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:00 compute-2 systemd-logind[787]: New session 27 of user ceph-admin.
Nov 29 07:10:00 compute-2 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 07:10:00 compute-2 sshd-session[72877]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:00 compute-2 sudo[72881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:00 compute-2 sudo[72881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:00 compute-2 sudo[72881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-2 sudo[72906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:10:00 compute-2 sudo[72906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:00 compute-2 sudo[72906]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-2 sshd-session[72931]: Accepted publickey for ceph-admin from 192.168.122.100 port 59552 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:00 compute-2 systemd-logind[787]: New session 28 of user ceph-admin.
Nov 29 07:10:00 compute-2 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 07:10:00 compute-2 sshd-session[72931]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:01 compute-2 sudo[72935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:01 compute-2 sudo[72935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:01 compute-2 sudo[72935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:01 compute-2 sudo[72960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:10:01 compute-2 sudo[72960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:01 compute-2 sudo[72960]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:01 compute-2 sshd-session[72985]: Accepted publickey for ceph-admin from 192.168.122.100 port 59556 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:01 compute-2 systemd-logind[787]: New session 29 of user ceph-admin.
Nov 29 07:10:01 compute-2 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 07:10:01 compute-2 sshd-session[72985]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:01 compute-2 sudo[72989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:01 compute-2 sudo[72989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:01 compute-2 sudo[72989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:01 compute-2 sudo[73014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:10:01 compute-2 sudo[73014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:01 compute-2 sudo[73014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:01 compute-2 sshd-session[73039]: Accepted publickey for ceph-admin from 192.168.122.100 port 59560 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:01 compute-2 systemd-logind[787]: New session 30 of user ceph-admin.
Nov 29 07:10:01 compute-2 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 07:10:01 compute-2 sshd-session[73039]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:02 compute-2 sshd-session[73066]: Accepted publickey for ceph-admin from 192.168.122.100 port 59576 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:02 compute-2 systemd-logind[787]: New session 31 of user ceph-admin.
Nov 29 07:10:02 compute-2 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 07:10:02 compute-2 sshd-session[73066]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:02 compute-2 sudo[73070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:02 compute-2 sudo[73070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:02 compute-2 sudo[73070]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:02 compute-2 sudo[73095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:10:02 compute-2 sudo[73095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:02 compute-2 sudo[73095]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:02 compute-2 sshd-session[73120]: Accepted publickey for ceph-admin from 192.168.122.100 port 59578 ssh2: RSA SHA256:pUQNBoOZZMlqQgQxix/Jf2qOL1jNapzjPaGNP9LAWRs
Nov 29 07:10:02 compute-2 systemd-logind[787]: New session 32 of user ceph-admin.
Nov 29 07:10:02 compute-2 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 07:10:02 compute-2 sshd-session[73120]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:02 compute-2 sudo[73124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:02 compute-2 sudo[73124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:02 compute-2 sudo[73124]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:02 compute-2 sudo[73149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Nov 29 07:10:02 compute-2 sudo[73149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:03 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:03 compute-2 sudo[73149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:47 compute-2 sudo[73194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 sudo[73194]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:10:47 compute-2 sudo[73219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 sudo[73219]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:47 compute-2 sudo[73244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 sudo[73244]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:10:47 compute-2 sudo[73269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 sudo[73269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:47 compute-2 sudo[73294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 sudo[73294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-2 sudo[73319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:10:47 compute-2 sudo[73319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:47 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:47 compute-2 sudo[73319]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 sudo[73364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:48 compute-2 sudo[73364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 sudo[73364]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 sudo[73389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:10:48 compute-2 sudo[73389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 sudo[73389]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 sudo[73414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:48 compute-2 sudo[73414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 sudo[73414]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 sudo[73439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:10:48 compute-2 sudo[73439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:48 compute-2 sudo[73439]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:48 compute-2 sudo[73501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:48 compute-2 sudo[73501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 sudo[73501]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-2 sudo[73526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:10:48 compute-2 sudo[73526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:48 compute-2 sudo[73526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:49 compute-2 sudo[73551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 sudo[73551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:10:49 compute-2 sudo[73576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:49 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73612 (sysctl)
Nov 29 07:10:49 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 07:10:49 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 07:10:49 compute-2 sudo[73576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:49 compute-2 sudo[73635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 sudo[73635]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:10:49 compute-2 sudo[73660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 sudo[73660]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:49 compute-2 sudo[73685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 sudo[73685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-2 sudo[73710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:10:49 compute-2 sudo[73710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:49 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:50 compute-2 sudo[73710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-2 sudo[73752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:50 compute-2 sudo[73752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:50 compute-2 sudo[73752]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-2 sudo[73777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:10:50 compute-2 sudo[73777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:50 compute-2 sudo[73777]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-2 sudo[73802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:10:50 compute-2 sudo[73802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:50 compute-2 sudo[73802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-2 sudo[73827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:10:50 compute-2 sudo[73827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:50 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat1878454256-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.823633369 +0000 UTC m=+31.225330189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.84810765 +0000 UTC m=+31.249804440 container create d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:11:21 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 07:11:21 compute-2 systemd[1]: Started libpod-conmon-d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca.scope.
Nov 29 07:11:21 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.966153126 +0000 UTC m=+31.367849936 container init d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.973222703 +0000 UTC m=+31.374919493 container start d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.978379981 +0000 UTC m=+31.380076801 container attach d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:11:21 compute-2 pensive_volhard[73956]: 167 167
Nov 29 07:11:21 compute-2 systemd[1]: libpod-d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca.scope: Deactivated successfully.
Nov 29 07:11:21 compute-2 podman[73887]: 2025-11-29 07:11:21.980620109 +0000 UTC m=+31.382316909 container died d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:11:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-aefdc4e403d850f637a348dbe0605ac1070fee52d665fbb9075bf03bd4d38bbc-merged.mount: Deactivated successfully.
Nov 29 07:11:22 compute-2 podman[73887]: 2025-11-29 07:11:22.021692969 +0000 UTC m=+31.423389759 container remove d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:11:22 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:22 compute-2 systemd[1]: libpod-conmon-d4dc54ba5881a82a73cf8d999aca8c6ffa98c84329c26c69d2f38ad430e559ca.scope: Deactivated successfully.
Nov 29 07:11:22 compute-2 podman[73978]: 2025-11-29 07:11:22.191089329 +0000 UTC m=+0.043754682 container create d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:22 compute-2 systemd[1]: Started libpod-conmon-d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4.scope.
Nov 29 07:11:22 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c826b399857e9847e55b57c8dca65015aabd98002ba3529ae9c246f9df7edf9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c826b399857e9847e55b57c8dca65015aabd98002ba3529ae9c246f9df7edf9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:22 compute-2 podman[73978]: 2025-11-29 07:11:22.172489429 +0000 UTC m=+0.025154802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:22 compute-2 podman[73978]: 2025-11-29 07:11:22.274889246 +0000 UTC m=+0.127554619 container init d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:22 compute-2 podman[73978]: 2025-11-29 07:11:22.283223972 +0000 UTC m=+0.135889325 container start d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:11:22 compute-2 podman[73978]: 2025-11-29 07:11:22.337087302 +0000 UTC m=+0.189752655 container attach d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]: [
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:     {
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "available": false,
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "ceph_device": false,
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "lsm_data": {},
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "lvs": [],
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "path": "/dev/sr0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "rejected_reasons": [
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "Has a FileSystem",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "Insufficient space (<5GB)"
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         ],
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         "sys_api": {
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "actuators": null,
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "device_nodes": "sr0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "devname": "sr0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "human_readable_size": "482.00 KB",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "id_bus": "ata",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "model": "QEMU DVD-ROM",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "nr_requests": "2",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "parent": "/dev/sr0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "partitions": {},
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "path": "/dev/sr0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "removable": "1",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "rev": "2.5+",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "ro": "0",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "rotational": "1",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "sas_address": "",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "sas_device_handle": "",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "scheduler_mode": "mq-deadline",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "sectors": 0,
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "sectorsize": "2048",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "size": 493568.0,
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "support_discard": "2048",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "type": "disk",
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:             "vendor": "QEMU"
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:         }
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]:     }
Nov 29 07:11:23 compute-2 relaxed_dewdney[73995]: ]
Nov 29 07:11:23 compute-2 systemd[1]: libpod-d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4.scope: Deactivated successfully.
Nov 29 07:11:23 compute-2 podman[73978]: 2025-11-29 07:11:23.49560091 +0000 UTC m=+1.348266273 container died d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:23 compute-2 systemd[1]: libpod-d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4.scope: Consumed 1.207s CPU time.
Nov 29 07:11:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-c826b399857e9847e55b57c8dca65015aabd98002ba3529ae9c246f9df7edf9e-merged.mount: Deactivated successfully.
Nov 29 07:11:23 compute-2 podman[73978]: 2025-11-29 07:11:23.568895066 +0000 UTC m=+1.421560459 container remove d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:11:23 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:23 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:23 compute-2 systemd[1]: libpod-conmon-d96a6ffab2ec1918a5fd423cd7b121d249ec268e0214728ca5ef31e370eba7f4.scope: Deactivated successfully.
Nov 29 07:11:23 compute-2 sudo[73827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:23 compute-2 sudo[74955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:23 compute-2 sudo[74955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:23 compute-2 sudo[74955]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:23 compute-2 sudo[74980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:11:23 compute-2 sudo[74980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:23 compute-2 sudo[74980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:23 compute-2 sudo[75005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:23 compute-2 sudo[75005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:23 compute-2 sudo[75005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:23 compute-2 sudo[75030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph
Nov 29 07:11:23 compute-2 sudo[75030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:23 compute-2 sudo[75030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:11:24 compute-2 sudo[75080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:24 compute-2 sudo[75130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75130]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:11:24 compute-2 sudo[75180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75180]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75228]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:11:24 compute-2 sudo[75253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75253]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:11:24 compute-2 sudo[75303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75303]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 07:11:24 compute-2 sudo[75353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:24 compute-2 sudo[75378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75378]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:11:24 compute-2 sudo[75403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:24 compute-2 sudo[75403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:24 compute-2 sudo[75428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75428]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:11:25 compute-2 sudo[75453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:11:25 compute-2 sudo[75503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75528]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:25 compute-2 sudo[75553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:11:25 compute-2 sudo[75603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75603]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75652]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:11:25 compute-2 sudo[75677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:11:25 compute-2 sudo[75727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:25 compute-2 sudo[75752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75752]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:25 compute-2 sudo[75777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:11:25 compute-2 sudo[75777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:25 compute-2 sudo[75777]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[75802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:11:26 compute-2 sudo[75827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[75852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph
Nov 29 07:11:26 compute-2 sudo[75877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75877]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[75902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75902]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:11:26 compute-2 sudo[75927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[75952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[75977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:26 compute-2 sudo[75977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[75977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[76002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:11:26 compute-2 sudo[76027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76027]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[76075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:11:26 compute-2 sudo[76100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76100]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:26 compute-2 sudo[76125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:26 compute-2 sudo[76150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:11:26 compute-2 sudo[76150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:26 compute-2 sudo[76150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 29 07:11:27 compute-2 sudo[76200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76200]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76225]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:11:27 compute-2 sudo[76250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76250]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:11:27 compute-2 sudo[76300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring.new
Nov 29 07:11:27 compute-2 sudo[76350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76375]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:27 compute-2 sudo[76400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76400]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76425]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring.new
Nov 29 07:11:27 compute-2 sudo[76450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:27 compute-2 sudo[76498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:27 compute-2 sudo[76498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:27 compute-2 sudo[76498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring.new
Nov 29 07:11:28 compute-2 sudo[76523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:28 compute-2 sudo[76548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring.new
Nov 29 07:11:28 compute-2 sudo[76573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:28 compute-2 sudo[76598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76598]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring.new /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 07:11:28 compute-2 sudo[76623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:28 compute-2 sudo[76648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:11:28 compute-2 sudo[76673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76673]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:28 compute-2 sudo[76698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 sudo[76698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:28 compute-2 sudo[76723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:28 compute-2 sudo[76723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:28 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:28 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.030863865 +0000 UTC m=+0.025745090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.643818606 +0000 UTC m=+0.638699811 container create 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:29 compute-2 systemd[1]: Started libpod-conmon-3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78.scope.
Nov 29 07:11:29 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.745032227 +0000 UTC m=+0.739913482 container init 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.75358164 +0000 UTC m=+0.748462845 container start 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.758593873 +0000 UTC m=+0.753475178 container attach 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:29 compute-2 gallant_fermat[76805]: 167 167
Nov 29 07:11:29 compute-2 systemd[1]: libpod-3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78.scope: Deactivated successfully.
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.761582304 +0000 UTC m=+0.756463539 container died 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:29 compute-2 podman[76788]: 2025-11-29 07:11:29.824817852 +0000 UTC m=+0.819699057 container remove 3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:29 compute-2 systemd[1]: libpod-conmon-3fb46e7563ccd6e505c19e36a20e6621e0957ccf55c760562bd5380cee9b6a78.scope: Deactivated successfully.
Nov 29 07:11:29 compute-2 podman[76824]: 2025-11-29 07:11:29.893493236 +0000 UTC m=+0.040912625 container create b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:11:29 compute-2 systemd[1]: Started libpod-conmon-b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d.scope.
Nov 29 07:11:29 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af90d69292d92aa664599d86b4235da307d9a62beb8dbf9d129f91750ee233a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af90d69292d92aa664599d86b4235da307d9a62beb8dbf9d129f91750ee233a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af90d69292d92aa664599d86b4235da307d9a62beb8dbf9d129f91750ee233a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af90d69292d92aa664599d86b4235da307d9a62beb8dbf9d129f91750ee233a/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:29 compute-2 podman[76824]: 2025-11-29 07:11:29.969070732 +0000 UTC m=+0.116490121 container init b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:29 compute-2 podman[76824]: 2025-11-29 07:11:29.87566698 +0000 UTC m=+0.023086399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:29 compute-2 podman[76824]: 2025-11-29 07:11:29.978043017 +0000 UTC m=+0.125462406 container start b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:29 compute-2 podman[76824]: 2025-11-29 07:11:29.981532503 +0000 UTC m=+0.128951892 container attach b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:11:30 compute-2 podman[76824]: 2025-11-29 07:11:30.082677603 +0000 UTC m=+0.230096992 container died b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:30 compute-2 systemd[1]: libpod-b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d.scope: Deactivated successfully.
Nov 29 07:11:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-1af90d69292d92aa664599d86b4235da307d9a62beb8dbf9d129f91750ee233a-merged.mount: Deactivated successfully.
Nov 29 07:11:30 compute-2 podman[76824]: 2025-11-29 07:11:30.120300866 +0000 UTC m=+0.267720255 container remove b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:11:30 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:30 compute-2 systemd[1]: libpod-conmon-b2dfecf0461bb046ea710ee5feb6cd3265e3c063b5aecec0e1b77fb3eda5e69d.scope: Deactivated successfully.
Nov 29 07:11:30 compute-2 systemd[1]: Reloading.
Nov 29 07:11:30 compute-2 systemd-rc-local-generator[76908]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:30 compute-2 systemd-sysv-generator[76911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:30 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:30 compute-2 systemd[1]: Reloading.
Nov 29 07:11:30 compute-2 systemd-rc-local-generator[76939]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:30 compute-2 systemd-sysv-generator[76945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:30 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 07:11:30 compute-2 systemd[1]: Reloading.
Nov 29 07:11:30 compute-2 systemd-rc-local-generator[76980]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:30 compute-2 systemd-sysv-generator[76985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:30 compute-2 systemd[1]: Reached target Ceph cluster 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:11:31 compute-2 systemd[1]: Reloading.
Nov 29 07:11:31 compute-2 systemd-rc-local-generator[77020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:31 compute-2 systemd-sysv-generator[77023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:31 compute-2 systemd[1]: Reloading.
Nov 29 07:11:31 compute-2 systemd-rc-local-generator[77062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:31 compute-2 systemd-sysv-generator[77065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:31 compute-2 systemd[1]: Created slice Slice /system/ceph-38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:11:31 compute-2 systemd[1]: Reached target System Time Set.
Nov 29 07:11:31 compute-2 systemd[1]: Reached target System Time Synchronized.
Nov 29 07:11:31 compute-2 systemd[1]: Starting Ceph mon.compute-2 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:11:31 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:31 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:31 compute-2 podman[77119]: 2025-11-29 07:11:31.842468525 +0000 UTC m=+0.078665742 container create 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:31 compute-2 podman[77119]: 2025-11-29 07:11:31.795498415 +0000 UTC m=+0.031695662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3440fefa323948b1c6bb9cf96bfa20c0d62a2e94886da007e7207f930494338d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3440fefa323948b1c6bb9cf96bfa20c0d62a2e94886da007e7207f930494338d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3440fefa323948b1c6bb9cf96bfa20c0d62a2e94886da007e7207f930494338d/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:31 compute-2 podman[77119]: 2025-11-29 07:11:31.919795774 +0000 UTC m=+0.155993011 container init 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:11:31 compute-2 podman[77119]: 2025-11-29 07:11:31.925152508 +0000 UTC m=+0.161349725 container start 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:31 compute-2 ceph-mon[77138]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:11:31 compute-2 ceph-mon[77138]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:31 compute-2 ceph-mon[77138]: load: jerasure load: lrc 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Git sha 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: DB SUMMARY
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: DB Session ID:  FV6EMGUAMR2UK13SF2XC
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                                     Options.env: 0x55a34ae51c40
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                                Options.info_log: 0x55a34d558fc0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                                 Options.wal_dir: 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                    Options.write_buffer_manager: 0x55a34d568b40
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                               Options.row_cache: None
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                              Options.wal_filter: None
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.wal_compression: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.max_background_jobs: 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Compression algorithms supported:
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kZSTD supported: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:           Options.merge_operator: 
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:        Options.compaction_filter: None
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a34d558c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a34d5511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.compression: NoCompression
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.num_levels: 7
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 62cfe9d7-b838-48ed-bc7b-9412d6dcca65
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400291967295, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 07:11:31 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400292052762, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400292052996, "job": 1, "event": "recovery_finished"}
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 07:11:32 compute-2 bash[77119]: 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe
Nov 29 07:11:32 compute-2 systemd[1]: Started Ceph mon.compute-2 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:11:32 compute-2 sudo[76723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a34d57ae00
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: DB pointer 0x55a34d682000
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:11:32 compute-2 ceph-mon[77138]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Nov 29 07:11:32 compute-2 ceph-mon[77138]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(???) e0 preinit fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).mds e1 new map
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e31 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e20: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v76: 36 pgs: 1 creating+peering, 1 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e21: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e22: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v79: 37 pgs: 1 creating+peering, 2 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e23: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.1 scrub starts
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.1 scrub ok
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e24: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v82: 38 pgs: 1 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.2 scrub starts
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.2 scrub ok
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e25: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e26: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v85: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e27: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e28: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v88: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e29: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v90: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Deploying daemon mon.compute-2 on compute-2
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:11:32 compute-2 ceph-mon[77138]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e30: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.a scrub starts
Nov 29 07:11:32 compute-2 ceph-mon[77138]: 2.a scrub ok
Nov 29 07:11:32 compute-2 ceph-mon[77138]: pgmap v92: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 07:11:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 07:11:32 compute-2 ceph-mon[77138]: osdmap e31: 2 total, 2 up, 2 in
Nov 29 07:11:32 compute-2 ceph-mon[77138]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 29 07:11:34 compute-2 ceph-mon[77138]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Nov 29 07:11:34 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:11:34 compute-2 ceph-mon[77138]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 07:11:34 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:34 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 07:11:34 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 07:11:36 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:11:30.031253Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 07:11:37 compute-2 ceph-mon[77138]: Deploying daemon mon.compute-1 on compute-1
Nov 29 07:11:37 compute-2 ceph-mon[77138]: pgmap v94: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: pgmap v95: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: 2.9 deep-scrub starts
Nov 29 07:11:37 compute-2 ceph-mon[77138]: 2.9 deep-scrub ok
Nov 29 07:11:37 compute-2 ceph-mon[77138]: pgmap v96: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: 2.3 deep-scrub starts
Nov 29 07:11:37 compute-2 ceph-mon[77138]: 2.3 deep-scrub ok
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 07:11:37 compute-2 ceph-mon[77138]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:37 compute-2 ceph-mon[77138]: fsmap 
Nov 29 07:11:37 compute-2 ceph-mon[77138]: osdmap e31: 2 total, 2 up, 2 in
Nov 29 07:11:37 compute-2 ceph-mon[77138]: mgrmap e8: compute-0.rotard(active, since 2m)
Nov 29 07:11:37 compute-2 ceph-mon[77138]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Nov 29 07:11:37 compute-2 ceph-mon[77138]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Nov 29 07:11:37 compute-2 ceph-mon[77138]:     application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 07:11:37 compute-2 ceph-mon[77138]:     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 07:11:37 compute-2 ceph-mon[77138]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:37 compute-2 sudo[77177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:37 compute-2 sudo[77177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:37 compute-2 sudo[77177]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:37 compute-2 sudo[77202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:11:37 compute-2 sudo[77202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:37 compute-2 sudo[77202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:37 compute-2 sudo[77227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:37 compute-2 sudo[77227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:37 compute-2 sudo[77227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:37 compute-2 sudo[77252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:37 compute-2 sudo[77252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.347210395 +0000 UTC m=+0.046023061 container create be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:38 compute-2 systemd[1]: Started libpod-conmon-be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e.scope.
Nov 29 07:11:38 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.326603464 +0000 UTC m=+0.025416160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.436220082 +0000 UTC m=+0.135032768 container init be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.443941309 +0000 UTC m=+0.142753975 container start be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.449962473 +0000 UTC m=+0.148775159 container attach be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:38 compute-2 vigilant_chatelet[77334]: 167 167
Nov 29 07:11:38 compute-2 systemd[1]: libpod-be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e.scope: Deactivated successfully.
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.45181095 +0000 UTC m=+0.150623626 container died be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-7fbeb62988e9e47ac5d194939bdfb9703a96071170443871ea1a2a6e07c729f1-merged.mount: Deactivated successfully.
Nov 29 07:11:38 compute-2 podman[77317]: 2025-11-29 07:11:38.686060368 +0000 UTC m=+0.384873054 container remove be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:38 compute-2 systemd[1]: libpod-conmon-be620f3daccfba05c45e2410c8390912566698c0225e3e0cd61723261ffa7f3e.scope: Deactivated successfully.
Nov 29 07:11:38 compute-2 systemd[1]: Reloading.
Nov 29 07:11:38 compute-2 ceph-mon[77138]: 2.5 deep-scrub starts
Nov 29 07:11:38 compute-2 ceph-mon[77138]: 2.5 deep-scrub ok
Nov 29 07:11:38 compute-2 ceph-mon[77138]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 07:11:38 compute-2 ceph-mon[77138]: osdmap e32: 2 total, 2 up, 2 in
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:38 compute-2 ceph-mon[77138]: Deploying daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 07:11:38 compute-2 systemd-rc-local-generator[77385]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:38 compute-2 systemd-sysv-generator[77389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:39 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:11:39 compute-2 ceph-mon[77138]: paxos.1).electionLogic(10) init, last seen epoch 10
Nov 29 07:11:39 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:39 compute-2 systemd[1]: Reloading.
Nov 29 07:11:39 compute-2 systemd-rc-local-generator[77422]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:39 compute-2 systemd-sysv-generator[77426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:39 compute-2 systemd[1]: Starting Ceph mgr.compute-2.vyxqrz for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:11:39 compute-2 podman[77478]: 2025-11-29 07:11:39.701998837 +0000 UTC m=+0.078493586 container create 1364b7e4cad13ccae5f7d5dd7b0216c6739212bf4bc21bc9f06d32e61d309a31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:11:39 compute-2 podman[77478]: 2025-11-29 07:11:39.648288451 +0000 UTC m=+0.024783230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fdfcc8361b46fe7d1bd085e4b48d3371ada06bc253db6f1c755d0edcae1ebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fdfcc8361b46fe7d1bd085e4b48d3371ada06bc253db6f1c755d0edcae1ebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fdfcc8361b46fe7d1bd085e4b48d3371ada06bc253db6f1c755d0edcae1ebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fdfcc8361b46fe7d1bd085e4b48d3371ada06bc253db6f1c755d0edcae1ebc/merged/var/lib/ceph/mgr/ceph-compute-2.vyxqrz supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-2 podman[77478]: 2025-11-29 07:11:39.779552024 +0000 UTC m=+0.156046773 container init 1364b7e4cad13ccae5f7d5dd7b0216c6739212bf4bc21bc9f06d32e61d309a31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:11:39 compute-2 podman[77478]: 2025-11-29 07:11:39.789580761 +0000 UTC m=+0.166075510 container start 1364b7e4cad13ccae5f7d5dd7b0216c6739212bf4bc21bc9f06d32e61d309a31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:39 compute-2 bash[77478]: 1364b7e4cad13ccae5f7d5dd7b0216c6739212bf4bc21bc9f06d32e61d309a31
Nov 29 07:11:39 compute-2 systemd[1]: Started Ceph mgr.compute-2.vyxqrz for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:11:39 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:39 compute-2 sudo[77252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:40 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:40 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:41 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:42 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:43 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:43 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:44 compute-2 ceph-mon[77138]: paxos.1).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'alerts'
Nov 29 07:11:44 compute-2 ceph-mon[77138]: pgmap v98: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.7 scrub starts
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.7 scrub ok
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.6 scrub starts
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.6 scrub ok
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.8 deep-scrub starts
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.8 deep-scrub ok
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: pgmap v99: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.b scrub starts
Nov 29 07:11:44 compute-2 ceph-mon[77138]: 2.b scrub ok
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: pgmap v100: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:11:44 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:44 compute-2 ceph-mon[77138]: fsmap 
Nov 29 07:11:44 compute-2 ceph-mon[77138]: osdmap e32: 2 total, 2 up, 2 in
Nov 29 07:11:44 compute-2 ceph-mon[77138]: mgrmap e8: compute-0.rotard(active, since 2m)
Nov 29 07:11:44 compute-2 ceph-mon[77138]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Nov 29 07:11:44 compute-2 ceph-mon[77138]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Nov 29 07:11:44 compute-2 ceph-mon[77138]:     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 07:11:44 compute-2 ceph-mon[77138]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 07:11:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:44 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'balancer'
Nov 29 07:11:44 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:44.715+0000 7fae681ee140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:45 compute-2 ceph-mgr[77498]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:45 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:45.010+0000 7fae681ee140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:45 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'cephadm'
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:45 compute-2 ceph-mon[77138]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:11:45 compute-2 ceph-mon[77138]: Cluster is now healthy
Nov 29 07:11:45 compute-2 ceph-mon[77138]: pgmap v101: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:45 compute-2 ceph-mon[77138]: 2.f scrub starts
Nov 29 07:11:45 compute-2 ceph-mon[77138]: 2.f scrub ok
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:45 compute-2 ceph-mon[77138]: Deploying daemon mgr.compute-1.jjnjed on compute-1
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:11:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 07:11:47 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'crash'
Nov 29 07:11:47 compute-2 ceph-mon[77138]: 2.4 scrub starts
Nov 29 07:11:47 compute-2 ceph-mon[77138]: 2.4 scrub ok
Nov 29 07:11:47 compute-2 ceph-mon[77138]: pgmap v102: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4133094199' entity='client.admin' 
Nov 29 07:11:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1019934376 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:11:47 compute-2 sudo[77534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:47 compute-2 sudo[77534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:47 compute-2 sudo[77534]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:47 compute-2 ceph-mgr[77498]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:47 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'dashboard'
Nov 29 07:11:47 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:47.363+0000 7fae681ee140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:47 compute-2 sudo[77559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:11:47 compute-2 sudo[77559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:47 compute-2 sudo[77559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:47 compute-2 sudo[77584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:47 compute-2 sudo[77584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:47 compute-2 sudo[77584]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:47 compute-2 sudo[77609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:11:47 compute-2 sudo[77609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:47 compute-2 podman[77674]: 2025-11-29 07:11:47.790551032 +0000 UTC m=+0.019219495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:49 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:50.085+0000 7fae681ee140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]:   from numpy import show_config as show_numpy_config
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:50.652+0000 7fae681ee140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'influx'
Nov 29 07:11:50 compute-2 podman[77674]: 2025-11-29 07:11:50.709554147 +0000 UTC m=+2.938222590 container create 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'insights'
Nov 29 07:11:50 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:50.898+0000 7fae681ee140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-2 ceph-mon[77138]: 2.11 scrub starts
Nov 29 07:11:51 compute-2 ceph-mon[77138]: 2.11 scrub ok
Nov 29 07:11:51 compute-2 ceph-mon[77138]: 2.1f scrub starts
Nov 29 07:11:51 compute-2 ceph-mon[77138]: 2.1f scrub ok
Nov 29 07:11:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:11:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 07:11:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:51 compute-2 ceph-mon[77138]: Deploying daemon crash.compute-2 on compute-2
Nov 29 07:11:51 compute-2 systemd[1]: Started libpod-conmon-2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb.scope.
Nov 29 07:11:51 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:51 compute-2 podman[77674]: 2025-11-29 07:11:51.142202784 +0000 UTC m=+3.370871257 container init 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:51 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'iostat'
Nov 29 07:11:51 compute-2 podman[77674]: 2025-11-29 07:11:51.152917141 +0000 UTC m=+3.381585594 container start 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:51 compute-2 podman[77674]: 2025-11-29 07:11:51.158200246 +0000 UTC m=+3.386868709 container attach 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:51 compute-2 upbeat_wu[77691]: 167 167
Nov 29 07:11:51 compute-2 systemd[1]: libpod-2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb.scope: Deactivated successfully.
Nov 29 07:11:51 compute-2 podman[77674]: 2025-11-29 07:11:51.160213729 +0000 UTC m=+3.388882172 container died 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:11:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-9654689ff049f298418eb8a796266546802093762f5bdf86e3665de4815fbe3b-merged.mount: Deactivated successfully.
Nov 29 07:11:51 compute-2 podman[77674]: 2025-11-29 07:11:51.263616539 +0000 UTC m=+3.492284982 container remove 2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:51 compute-2 systemd[1]: libpod-conmon-2c123e8c3757c5af3f3dc2fdf80237fa6915343322dfddcc0ee8199a22f05dfb.scope: Deactivated successfully.
Nov 29 07:11:51 compute-2 systemd[1]: Reloading.
Nov 29 07:11:51 compute-2 systemd-rc-local-generator[77737]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:51 compute-2 systemd-sysv-generator[77740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:51 compute-2 ceph-mgr[77498]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:11:51 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:51.421+0000 7fae681ee140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-2 systemd[1]: Reloading.
Nov 29 07:11:51 compute-2 systemd-rc-local-generator[77777]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:51 compute-2 systemd-sysv-generator[77780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:51 compute-2 systemd[1]: Starting Ceph crash.compute-2 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:11:52 compute-2 ceph-mon[77138]: from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:11:52 compute-2 ceph-mon[77138]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 07:11:52 compute-2 ceph-mon[77138]: pgmap v103: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:52 compute-2 ceph-mon[77138]: pgmap v104: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:52 compute-2 ceph-mon[77138]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 07:11:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:52 compute-2 podman[77834]: 2025-11-29 07:11:52.087289945 +0000 UTC m=+0.042271000 container create c7fb321ff790afecfd4a706ce5bbf94e066a59ad70eb8a119d0871a968c8040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38e37efbc6eaa2dac729c60b472eba66696a4ebda5d7a50e914f85b13e3ae97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38e37efbc6eaa2dac729c60b472eba66696a4ebda5d7a50e914f85b13e3ae97/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38e37efbc6eaa2dac729c60b472eba66696a4ebda5d7a50e914f85b13e3ae97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38e37efbc6eaa2dac729c60b472eba66696a4ebda5d7a50e914f85b13e3ae97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:52 compute-2 podman[77834]: 2025-11-29 07:11:52.157732789 +0000 UTC m=+0.112713864 container init c7fb321ff790afecfd4a706ce5bbf94e066a59ad70eb8a119d0871a968c8040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:52 compute-2 podman[77834]: 2025-11-29 07:11:52.067006478 +0000 UTC m=+0.021987583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:52 compute-2 podman[77834]: 2025-11-29 07:11:52.163563362 +0000 UTC m=+0.118544427 container start c7fb321ff790afecfd4a706ce5bbf94e066a59ad70eb8a119d0871a968c8040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:11:52 compute-2 bash[77834]: c7fb321ff790afecfd4a706ce5bbf94e066a59ad70eb8a119d0871a968c8040c
Nov 29 07:11:52 compute-2 systemd[1]: Started Ceph crash.compute-2 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:11:52 compute-2 sudo[77609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1020053194 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:11:52 compute-2 sudo[77854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:52 compute-2 sudo[77854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:52 compute-2 sudo[77854]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 07:11:52 compute-2 sudo[77879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:11:52 compute-2 sudo[77879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:52 compute-2 sudo[77879]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:52 compute-2 sudo[77906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:11:52 compute-2 sudo[77906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:52 compute-2 sudo[77906]: pam_unix(sudo:session): session closed for user root
Nov 29 07:11:52 compute-2 sudo[77931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 07:11:52 compute-2 sudo[77931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.562+0000 7ff6b97a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.562+0000 7ff6b97a2640 -1 AuthRegistry(0x7ff6b40675b0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.564+0000 7ff6b97a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.564+0000 7ff6b97a2640 -1 AuthRegistry(0x7ff6b97a1000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.565+0000 7ff6b37fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.566+0000 7ff6b2ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.566+0000 7ff6b27fc640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: 2025-11-29T07:11:52.566+0000 7ff6b97a2640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 07:11:52 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-2[77849]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 07:11:52 compute-2 podman[78006]: 2025-11-29 07:11:52.867753572 +0000 UTC m=+0.024939185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:53 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'localpool'
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.366914849 +0000 UTC m=+0.524100452 container create 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:11:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:53 compute-2 ceph-mon[77138]: pgmap v105: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:53 compute-2 systemd[1]: Started libpod-conmon-1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028.scope.
Nov 29 07:11:53 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.462345348 +0000 UTC m=+0.619530951 container init 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.473155338 +0000 UTC m=+0.630340921 container start 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:53 compute-2 gallant_faraday[78023]: 167 167
Nov 29 07:11:53 compute-2 systemd[1]: libpod-1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028.scope: Deactivated successfully.
Nov 29 07:11:53 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.644579855 +0000 UTC m=+0.801765478 container attach 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.645423592 +0000 UTC m=+0.802609175 container died 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e2 new map
Nov 29 07:11:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:11:53.720209+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 29 07:11:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 07:11:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-ff74dc6fc82b650d6d8783232063980fd6c10dabb5808cdd28538ff48b30d562-merged.mount: Deactivated successfully.
Nov 29 07:11:53 compute-2 podman[78006]: 2025-11-29 07:11:53.996527276 +0000 UTC m=+1.153712869 container remove 1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:11:54 compute-2 systemd[1]: libpod-conmon-1cb91b411e1c1f39eb57e7f6125fe83c0225b84660f4fa39608b085dd1b8d028.scope: Deactivated successfully.
Nov 29 07:11:54 compute-2 podman[78047]: 2025-11-29 07:11:54.169079929 +0000 UTC m=+0.044096497 container create 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:54 compute-2 systemd[1]: Started libpod-conmon-309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff.scope.
Nov 29 07:11:54 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:11:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:54 compute-2 podman[78047]: 2025-11-29 07:11:54.150772203 +0000 UTC m=+0.025788801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:11:54 compute-2 podman[78047]: 2025-11-29 07:11:54.266096778 +0000 UTC m=+0.141113366 container init 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:54 compute-2 podman[78047]: 2025-11-29 07:11:54.275902917 +0000 UTC m=+0.150919495 container start 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:11:54 compute-2 podman[78047]: 2025-11-29 07:11:54.280023305 +0000 UTC m=+0.155039883 container attach 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:54 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'mirroring'
Nov 29 07:11:54 compute-2 ceph-mon[77138]: 2.12 scrub starts
Nov 29 07:11:54 compute-2 ceph-mon[77138]: 2.12 scrub ok
Nov 29 07:11:54 compute-2 ceph-mon[77138]: 2.1e scrub starts
Nov 29 07:11:54 compute-2 ceph-mon[77138]: 2.1e scrub ok
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='client.14259 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:11:54 compute-2 ceph-mon[77138]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:11:54 compute-2 ceph-mon[77138]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 07:11:54 compute-2 ceph-mon[77138]: osdmap e33: 2 total, 2 up, 2 in
Nov 29 07:11:54 compute-2 ceph-mon[77138]: fsmap cephfs:0
Nov 29 07:11:54 compute-2 ceph-mon[77138]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 07:11:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:54 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'nfs'
Nov 29 07:11:55 compute-2 magical_mclaren[78063]: --> passed data devices: 0 physical, 1 LVM
Nov 29 07:11:55 compute-2 magical_mclaren[78063]: --> relative data size: 1.0
Nov 29 07:11:55 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:11:55 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ebea8b7f-6a60-41f3-b580-d449bc0d4887
Nov 29 07:11:55 compute-2 ceph-mgr[77498]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'orchestrator'
Nov 29 07:11:55 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:55.347+0000 7fae681ee140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"} v 0) v1
Nov 29 07:11:55 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4274206403' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:56.115+0000 7fae681ee140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 07:11:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 07:11:56 compute-2 ceph-mon[77138]: pgmap v107: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:56 compute-2 ceph-mon[77138]: 2.14 scrub starts
Nov 29 07:11:56 compute-2 ceph-mon[77138]: 2.14 scrub ok
Nov 29 07:11:56 compute-2 ceph-mon[77138]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:11:56 compute-2 ceph-mon[77138]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 07:11:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4274206403' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 07:11:56 compute-2 ceph-mon[77138]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:11:56 compute-2 lvm[78111]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:11:56 compute-2 lvm[78111]: VG ceph_vg0 finished
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:56.436+0000 7fae681ee140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'osd_support'
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 07:11:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:11:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/80790899' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:11:56 compute-2 magical_mclaren[78063]:  stderr: got monmap epoch 3
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: --> Creating keyring file for osd.2
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 07:11:56 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid ebea8b7f-6a60-41f3-b580-d449bc0d4887 --setuser ceph --setgroup ceph
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'progress'
Nov 29 07:11:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:11:57 compute-2 ceph-mgr[77498]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:57 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'prometheus'
Nov 29 07:11:57 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:56.686+0000 7fae681ee140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:57 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:56.995+0000 7fae681ee140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:57 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:57.266+0000 7fae681ee140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-2 ceph-mon[77138]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]': finished
Nov 29 07:11:58 compute-2 ceph-mon[77138]: osdmap e34: 3 total, 2 up, 3 in
Nov 29 07:11:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:11:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:58 compute-2 ceph-mon[77138]: pgmap v109: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:11:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/80790899' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:11:58 compute-2 ceph-mgr[77498]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'rbd_support'
Nov 29 07:11:58 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:58.315+0000 7fae681ee140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-2 ceph-mgr[77498]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'restful'
Nov 29 07:11:58 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:11:58.623+0000 7fae681ee140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:59 compute-2 ceph-mon[77138]: 2.d scrub starts
Nov 29 07:11:59 compute-2 ceph-mon[77138]: 2.d scrub ok
Nov 29 07:11:59 compute-2 ceph-mon[77138]: 2.16 deep-scrub starts
Nov 29 07:11:59 compute-2 ceph-mon[77138]: 2.16 deep-scrub ok
Nov 29 07:11:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 07:11:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 07:11:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:11:59 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'rgw'
Nov 29 07:12:00 compute-2 ceph-mgr[77498]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:12:00 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'rook'
Nov 29 07:12:00 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:00.212+0000 7fae681ee140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:12:00 compute-2 ceph-mon[77138]: pgmap v110: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:00 compute-2 ceph-mon[77138]: 2.e scrub starts
Nov 29 07:12:00 compute-2 ceph-mon[77138]: 2.e scrub ok
Nov 29 07:12:00 compute-2 ceph-mon[77138]: 2.17 scrub starts
Nov 29 07:12:00 compute-2 ceph-mon[77138]: 2.17 scrub ok
Nov 29 07:12:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2376407688' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:12:00 compute-2 systemd[72591]: Starting Mark boot as successful...
Nov 29 07:12:00 compute-2 systemd[72591]: Finished Mark boot as successful.
Nov 29 07:12:01 compute-2 ceph-mon[77138]: pgmap v111: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3363120177' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:12:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:02 compute-2 ceph-mgr[77498]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'selftest'
Nov 29 07:12:02 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:02.946+0000 7fae681ee140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-2 ceph-mon[77138]: 2.18 scrub starts
Nov 29 07:12:03 compute-2 ceph-mon[77138]: 2.18 scrub ok
Nov 29 07:12:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4263620903' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 07:12:03 compute-2 ceph-mgr[77498]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:03.222+0000 7fae681ee140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'snap_schedule'
Nov 29 07:12:03 compute-2 ceph-mgr[77498]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'stats'
Nov 29 07:12:03 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:03.484+0000 7fae681ee140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'status'
Nov 29 07:12:04 compute-2 ceph-mon[77138]: pgmap v112: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:04 compute-2 ceph-mon[77138]: 2.1a scrub starts
Nov 29 07:12:04 compute-2 ceph-mon[77138]: 2.1a scrub ok
Nov 29 07:12:04 compute-2 ceph-mgr[77498]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'telegraf'
Nov 29 07:12:04 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:04.095+0000 7fae681ee140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-2 magical_mclaren[78063]:  stderr: 2025-11-29T07:11:56.990+0000 7f7231cfb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:04 compute-2 magical_mclaren[78063]:  stderr: 2025-11-29T07:11:56.990+0000 7f7231cfb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:04 compute-2 magical_mclaren[78063]:  stderr: 2025-11-29T07:11:56.990+0000 7f7231cfb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:04 compute-2 magical_mclaren[78063]:  stderr: 2025-11-29T07:11:56.990+0000 7f7231cfb740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 07:12:04 compute-2 magical_mclaren[78063]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 07:12:04 compute-2 systemd[1]: libpod-309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff.scope: Deactivated successfully.
Nov 29 07:12:04 compute-2 systemd[1]: libpod-309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff.scope: Consumed 3.140s CPU time.
Nov 29 07:12:04 compute-2 podman[78047]: 2025-11-29 07:12:04.356999073 +0000 UTC m=+10.232015661 container died 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:12:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-1f7de54189e297c6f238a04491d6ebd4b523a66a4af3f6d80bfd7c71cd4ce385-merged.mount: Deactivated successfully.
Nov 29 07:12:04 compute-2 podman[78047]: 2025-11-29 07:12:04.414763858 +0000 UTC m=+10.289780436 container remove 309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:04 compute-2 ceph-mgr[77498]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'telemetry'
Nov 29 07:12:04 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:04.421+0000 7fae681ee140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-2 systemd[1]: libpod-conmon-309d3b5e0e733dc28002b65f7d2f8fd1b4ed6558263b8c3cb821257b4bc3b1ff.scope: Deactivated successfully.
Nov 29 07:12:04 compute-2 sudo[77931]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:04 compute-2 sudo[79051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:04 compute-2 sudo[79051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:04 compute-2 sudo[79051]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:04 compute-2 sudo[79076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:04 compute-2 sudo[79076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:04 compute-2 sudo[79076]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:04 compute-2 sudo[79101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:04 compute-2 sudo[79101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:04 compute-2 sudo[79101]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:04 compute-2 sudo[79126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- lvm list --format json
Nov 29 07:12:04 compute-2 sudo[79126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:05 compute-2 ceph-mon[77138]: pgmap v113: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:05 compute-2 ceph-mgr[77498]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 07:12:05 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:05.162+0000 7fae681ee140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.072473778 +0000 UTC m=+0.028353773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.330523987 +0000 UTC m=+0.286403962 container create 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:12:05 compute-2 systemd[1]: Started libpod-conmon-0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367.scope.
Nov 29 07:12:05 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.608699339 +0000 UTC m=+0.564579334 container init 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.618534379 +0000 UTC m=+0.574414354 container start 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:12:05 compute-2 zealous_haslett[79208]: 167 167
Nov 29 07:12:05 compute-2 systemd[1]: libpod-0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367.scope: Deactivated successfully.
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.883789944 +0000 UTC m=+0.839670009 container attach 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:05 compute-2 podman[79192]: 2025-11-29 07:12:05.885571511 +0000 UTC m=+0.841451546 container died 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:12:05 compute-2 ceph-mgr[77498]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'volumes'
Nov 29 07:12:05 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:05.995+0000 7fae681ee140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:12:06 compute-2 systemd[1]: var-lib-containers-storage-overlay-75255637eeea304b7a1c8fefc16e54bacdcbf0d0b78b0c2c0762c195e2ef4f20-merged.mount: Deactivated successfully.
Nov 29 07:12:06 compute-2 ceph-mgr[77498]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:12:06 compute-2 ceph-mgr[77498]: mgr[py] Loading python module 'zabbix'
Nov 29 07:12:06 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:06.727+0000 7fae681ee140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:12:06 compute-2 ceph-mon[77138]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:06 compute-2 ceph-mon[77138]: Standby manager daemon compute-1.jjnjed started
Nov 29 07:12:06 compute-2 ceph-mgr[77498]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:12:06 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-2-vyxqrz[77494]: 2025-11-29T07:12:06.977+0000 7fae681ee140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:12:06 compute-2 ceph-mgr[77498]: ms_deliver_dispatch: unhandled message 0x5649030b9080 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 07:12:06 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 07:12:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:07 compute-2 podman[79192]: 2025-11-29 07:12:07.736103768 +0000 UTC m=+2.691983753 container remove 0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:12:07 compute-2 systemd[1]: libpod-conmon-0601c1b49badbf57216cb9650f94bcd7d096654c4c652486f8afb848e53bc367.scope: Deactivated successfully.
Nov 29 07:12:07 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 07:12:08 compute-2 podman[79232]: 2025-11-29 07:12:07.999601108 +0000 UTC m=+0.074067969 container create 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:08 compute-2 ceph-mon[77138]: pgmap v114: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:08 compute-2 podman[79232]: 2025-11-29 07:12:07.949546855 +0000 UTC m=+0.024013736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:08 compute-2 systemd[1]: Started libpod-conmon-6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9.scope.
Nov 29 07:12:08 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875442e8411896ea075bc9f49409473def0b7f599cd16c51c2961573da0f1cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875442e8411896ea075bc9f49409473def0b7f599cd16c51c2961573da0f1cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875442e8411896ea075bc9f49409473def0b7f599cd16c51c2961573da0f1cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875442e8411896ea075bc9f49409473def0b7f599cd16c51c2961573da0f1cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-2 podman[79232]: 2025-11-29 07:12:08.247836279 +0000 UTC m=+0.322303170 container init 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:08 compute-2 podman[79232]: 2025-11-29 07:12:08.383165825 +0000 UTC m=+0.457632686 container start 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:12:08 compute-2 podman[79232]: 2025-11-29 07:12:08.590002903 +0000 UTC m=+0.664469784 container attach 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:12:09 compute-2 ceph-mon[77138]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:09 compute-2 ceph-mon[77138]: Standby manager daemon compute-2.vyxqrz started
Nov 29 07:12:09 compute-2 ceph-mon[77138]: mgrmap e9: compute-0.rotard(active, since 2m), standbys: compute-1.jjnjed
Nov 29 07:12:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jjnjed", "id": "compute-1.jjnjed"}]: dispatch
Nov 29 07:12:09 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 2m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:12:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-2.vyxqrz", "id": "compute-2.vyxqrz"}]: dispatch
Nov 29 07:12:09 compute-2 magical_perlman[79249]: {
Nov 29 07:12:09 compute-2 magical_perlman[79249]:     "2": [
Nov 29 07:12:09 compute-2 magical_perlman[79249]:         {
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "devices": [
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "/dev/loop3"
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             ],
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "lv_name": "ceph_lv0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "lv_size": "7511998464",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=KQQNUz-RHjx-kIp0-G9BO-QHtb-KwbO-tNScQ8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ebea8b7f-6a60-41f3-b580-d449bc0d4887,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "lv_uuid": "KQQNUz-RHjx-kIp0-G9BO-QHtb-KwbO-tNScQ8",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "name": "ceph_lv0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "tags": {
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.block_uuid": "KQQNUz-RHjx-kIp0-G9BO-QHtb-KwbO-tNScQ8",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.cluster_name": "ceph",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.crush_device_class": "",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.encrypted": "0",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.osd_fsid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.osd_id": "2",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.type": "block",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:                 "ceph.vdo": "0"
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             },
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "type": "block",
Nov 29 07:12:09 compute-2 magical_perlman[79249]:             "vg_name": "ceph_vg0"
Nov 29 07:12:09 compute-2 magical_perlman[79249]:         }
Nov 29 07:12:09 compute-2 magical_perlman[79249]:     ]
Nov 29 07:12:09 compute-2 magical_perlman[79249]: }
Nov 29 07:12:09 compute-2 systemd[1]: libpod-6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9.scope: Deactivated successfully.
Nov 29 07:12:09 compute-2 podman[79232]: 2025-11-29 07:12:09.268218186 +0000 UTC m=+1.342685077 container died 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:12:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-875442e8411896ea075bc9f49409473def0b7f599cd16c51c2961573da0f1cf0-merged.mount: Deactivated successfully.
Nov 29 07:12:09 compute-2 podman[79232]: 2025-11-29 07:12:09.351942767 +0000 UTC m=+1.426409648 container remove 6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:09 compute-2 systemd[1]: libpod-conmon-6abe1d5756c6ac20ecfdae444488b4737743fe8450da21ca156d22e74a1c36a9.scope: Deactivated successfully.
Nov 29 07:12:09 compute-2 sudo[79126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:09 compute-2 sudo[79272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:09 compute-2 sudo[79272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:09 compute-2 sudo[79272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:09 compute-2 sudo[79297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:09 compute-2 sudo[79297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:09 compute-2 sudo[79297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:09 compute-2 sudo[79322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:09 compute-2 sudo[79322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:09 compute-2 sudo[79322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:09 compute-2 sudo[79347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:09 compute-2 sudo[79347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:10 compute-2 ceph-mon[77138]: pgmap v115: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 07:12:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:10 compute-2 ceph-mon[77138]: Deploying daemon osd.2 on compute-2
Nov 29 07:12:10 compute-2 ceph-mon[77138]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.150107421 +0000 UTC m=+0.031284594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.368860026 +0000 UTC m=+0.250037179 container create 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:12:10 compute-2 systemd[1]: Started libpod-conmon-3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c.scope.
Nov 29 07:12:10 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.501712571 +0000 UTC m=+0.382889754 container init 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.50899352 +0000 UTC m=+0.390170673 container start 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.513257794 +0000 UTC m=+0.394434947 container attach 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:10 compute-2 sleepy_villani[79430]: 167 167
Nov 29 07:12:10 compute-2 systemd[1]: libpod-3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c.scope: Deactivated successfully.
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.51634901 +0000 UTC m=+0.397526623 container died 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:12:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-c24d582a56275ef5e147c3e4eed9360fa55154e5b1573c1d27789892ac067113-merged.mount: Deactivated successfully.
Nov 29 07:12:10 compute-2 podman[79413]: 2025-11-29 07:12:10.640462591 +0000 UTC m=+0.521639744 container remove 3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:12:10 compute-2 systemd[1]: libpod-conmon-3efe29b8e14670bbd5f6dd676a3ee347e6db087aff23a99d5818e638d407952c.scope: Deactivated successfully.
Nov 29 07:12:11 compute-2 podman[79462]: 2025-11-29 07:12:10.980983592 +0000 UTC m=+0.049606639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:11 compute-2 podman[79462]: 2025-11-29 07:12:11.19567984 +0000 UTC m=+0.264302807 container create 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:12:11 compute-2 ceph-mon[77138]: 2.1c scrub starts
Nov 29 07:12:11 compute-2 ceph-mon[77138]: 2.1c scrub ok
Nov 29 07:12:11 compute-2 ceph-mon[77138]: pgmap v116: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:11 compute-2 systemd[1]: Started libpod-conmon-7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b.scope.
Nov 29 07:12:11 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:12 compute-2 podman[79462]: 2025-11-29 07:12:12.749889554 +0000 UTC m=+1.818512551 container init 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:12:12 compute-2 podman[79462]: 2025-11-29 07:12:12.759683052 +0000 UTC m=+1.828306029 container start 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:13 compute-2 podman[79462]: 2025-11-29 07:12:13.181987533 +0000 UTC m=+2.250610530 container attach 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:13 compute-2 ceph-mon[77138]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:13 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test[79478]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:12:13 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test[79478]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:12:13 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test[79478]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:12:13 compute-2 systemd[1]: libpod-7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b.scope: Deactivated successfully.
Nov 29 07:12:13 compute-2 podman[79462]: 2025-11-29 07:12:13.535681159 +0000 UTC m=+2.604304156 container died 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-14ba8bdc0a2d3c4705bf8dd7882822bc9261cbfb8da3e66d4453fe29dfeb5579-merged.mount: Deactivated successfully.
Nov 29 07:12:14 compute-2 podman[79462]: 2025-11-29 07:12:14.197992722 +0000 UTC m=+3.266615679 container remove 7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:12:14 compute-2 systemd[1]: libpod-conmon-7872f1da4085279b76efc217a28af23dd0068e29c7d207d9f0940e371122362b.scope: Deactivated successfully.
Nov 29 07:12:14 compute-2 ceph-mon[77138]: 2.10 deep-scrub starts
Nov 29 07:12:14 compute-2 ceph-mon[77138]: 2.10 deep-scrub ok
Nov 29 07:12:14 compute-2 ceph-mon[77138]: pgmap v117: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:12:14 compute-2 systemd[1]: Reloading.
Nov 29 07:12:14 compute-2 systemd-rc-local-generator[79543]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:14 compute-2 systemd-sysv-generator[79547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 07:12:14 compute-2 systemd[1]: Reloading.
Nov 29 07:12:14 compute-2 systemd-rc-local-generator[79587]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:14 compute-2 systemd-sysv-generator[79590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:15 compute-2 systemd[1]: Starting Ceph osd.2 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:12:15 compute-2 ceph-mon[77138]: 2.13 scrub starts
Nov 29 07:12:15 compute-2 ceph-mon[77138]: 2.13 scrub ok
Nov 29 07:12:15 compute-2 ceph-mon[77138]: 2.1d scrub starts
Nov 29 07:12:15 compute-2 ceph-mon[77138]: 2.1d scrub ok
Nov 29 07:12:15 compute-2 ceph-mon[77138]: pgmap v118: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:12:15 compute-2 ceph-mon[77138]: osdmap e35: 3 total, 2 up, 3 in
Nov 29 07:12:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:12:15 compute-2 podman[79641]: 2025-11-29 07:12:15.357717679 +0000 UTC m=+0.049775925 container create 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:15 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:15 compute-2 podman[79641]: 2025-11-29 07:12:15.336755801 +0000 UTC m=+0.028814157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:15 compute-2 podman[79641]: 2025-11-29 07:12:15.437793586 +0000 UTC m=+0.129851852 container init 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:15 compute-2 podman[79641]: 2025-11-29 07:12:15.445110736 +0000 UTC m=+0.137168982 container start 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:12:15 compute-2 podman[79641]: 2025-11-29 07:12:15.449180903 +0000 UTC m=+0.141239169 container attach 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:16 compute-2 bash[79641]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:16 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate[79656]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 07:12:16 compute-2 bash[79641]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 07:12:16 compute-2 systemd[1]: libpod-114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb.scope: Deactivated successfully.
Nov 29 07:12:16 compute-2 podman[79641]: 2025-11-29 07:12:16.460667531 +0000 UTC m=+1.152725797 container died 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:16 compute-2 systemd[1]: libpod-114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb.scope: Consumed 1.030s CPU time.
Nov 29 07:12:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 07:12:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-a1c441a9528e48d9704b41d2063be8592efd7a186802597995ebf0913e332839-merged.mount: Deactivated successfully.
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:12:16 compute-2 ceph-mon[77138]: osdmap e36: 3 total, 2 up, 3 in
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3295273094' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:16 compute-2 podman[79641]: 2025-11-29 07:12:16.946368516 +0000 UTC m=+1.638426772 container remove 114283c4c30a3acbbd88ca46503315de62c3f73a6c2a4eb0026d841160ca9ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:12:17 compute-2 podman[79814]: 2025-11-29 07:12:17.163379025 +0000 UTC m=+0.047696740 container create 63a1020ade899a49f6522a3c225e90759439eaf20c90a6bd4f8bf90f4eda5485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce6e303b7859efb3b7525cb616d86143279245e1076c706afea94296dfff358/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce6e303b7859efb3b7525cb616d86143279245e1076c706afea94296dfff358/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce6e303b7859efb3b7525cb616d86143279245e1076c706afea94296dfff358/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce6e303b7859efb3b7525cb616d86143279245e1076c706afea94296dfff358/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce6e303b7859efb3b7525cb616d86143279245e1076c706afea94296dfff358/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-2 podman[79814]: 2025-11-29 07:12:17.138578336 +0000 UTC m=+0.022896091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:17 compute-2 podman[79814]: 2025-11-29 07:12:17.25549275 +0000 UTC m=+0.139810505 container init 63a1020ade899a49f6522a3c225e90759439eaf20c90a6bd4f8bf90f4eda5485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:12:17 compute-2 podman[79814]: 2025-11-29 07:12:17.261635773 +0000 UTC m=+0.145953508 container start 63a1020ade899a49f6522a3c225e90759439eaf20c90a6bd4f8bf90f4eda5485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:12:17 compute-2 bash[79814]: 63a1020ade899a49f6522a3c225e90759439eaf20c90a6bd4f8bf90f4eda5485
Nov 29 07:12:17 compute-2 systemd[1]: Started Ceph osd.2 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:12:17 compute-2 ceph-osd[79833]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:12:17 compute-2 ceph-osd[79833]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:12:17 compute-2 ceph-osd[79833]: pidfile_write: ignore empty --pid-file
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558771b77c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558771b77c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558771b77c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558771b77c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772983000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772983000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772983000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772983000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772983000 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:12:17 compute-2 sudo[79347]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-2 sudo[79846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:17 compute-2 sudo[79846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-2 sudo[79846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-2 sudo[79871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:17 compute-2 sudo[79871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-2 sudo[79871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-2 sudo[79896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:17 compute-2 sudo[79896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-2 sudo[79896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558771b77c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:12:17 compute-2 sudo[79921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- raw list --format json
Nov 29 07:12:17 compute-2 sudo[79921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:17 compute-2 ceph-osd[79833]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 07:12:17 compute-2 ceph-osd[79833]: load: jerasure load: lrc 
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:12:17 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:12:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 07:12:17 compute-2 podman[79991]: 2025-11-29 07:12:17.956640475 +0000 UTC m=+0.078589110 container create 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:12:17 compute-2 podman[79991]: 2025-11-29 07:12:17.90269328 +0000 UTC m=+0.024641945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:18 compute-2 ceph-mon[77138]: pgmap v121: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: osdmap e37: 3 total, 2 up, 3 in
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:18 compute-2 systemd[1]: Started libpod-conmon-459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59.scope.
Nov 29 07:12:18 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:18 compute-2 podman[79991]: 2025-11-29 07:12:18.10412373 +0000 UTC m=+0.226072365 container init 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:18 compute-2 podman[79991]: 2025-11-29 07:12:18.112163082 +0000 UTC m=+0.234111717 container start 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:12:18 compute-2 mystifying_chandrasekhar[80008]: 167 167
Nov 29 07:12:18 compute-2 systemd[1]: libpod-459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59.scope: Deactivated successfully.
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:12:18 compute-2 podman[79991]: 2025-11-29 07:12:18.12670717 +0000 UTC m=+0.248655805 container attach 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 07:12:18 compute-2 podman[79991]: 2025-11-29 07:12:18.127841846 +0000 UTC m=+0.249790481 container died 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:12:18 compute-2 systemd[1]: var-lib-containers-storage-overlay-ddedac2346cf8bf888b2826d879a276b3492aaa5af4fa90a71a48ff7accdc852-merged.mount: Deactivated successfully.
Nov 29 07:12:18 compute-2 podman[79991]: 2025-11-29 07:12:18.179760867 +0000 UTC m=+0.301709502 container remove 459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:18 compute-2 systemd[1]: libpod-conmon-459e388e04981f53a9da5f5ea0112c2c52e701252324d4305ff0fda732e9cc59.scope: Deactivated successfully.
Nov 29 07:12:18 compute-2 podman[80035]: 2025-11-29 07:12:18.351495294 +0000 UTC m=+0.045512771 container create 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0ec00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs mount
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs mount shared_bdev_used = 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Git sha 0
Nov 29 07:12:18 compute-2 systemd[1]: Started libpod-conmon-764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d.scope.
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DB SUMMARY
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DB Session ID:  C8P7WFA1HWCH2VQQUX7B
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                     Options.env: 0x558772a11dc0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                Options.info_log: 0x558771bf4d20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.write_buffer_manager: 0x558772b0e460
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.row_cache: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.wal_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.wal_compression: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Compression algorithms supported:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZSTD supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 podman[80035]: 2025-11-29 07:12:18.33226002 +0000 UTC m=+0.026277527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc123c8f5216f461818193f0501527135333553ca0f2ac27f23fa310c69f826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771bea430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771bea430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf4740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771bea430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:12:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc123c8f5216f461818193f0501527135333553ca0f2ac27f23fa310c69f826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0a81e455-7da7-4cf4-9654-40318ff78c37
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338431269, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338431528, "job": 1, "event": "recovery_finished"}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: freelist init
Nov 29 07:12:18 compute-2 ceph-osd[79833]: freelist _read_cfg
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs umount
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:12:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc123c8f5216f461818193f0501527135333553ca0f2ac27f23fa310c69f826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc123c8f5216f461818193f0501527135333553ca0f2ac27f23fa310c69f826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-2 podman[80035]: 2025-11-29 07:12:18.463585737 +0000 UTC m=+0.157603244 container init 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:12:18 compute-2 podman[80035]: 2025-11-29 07:12:18.473188578 +0000 UTC m=+0.167206055 container start 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:18 compute-2 podman[80035]: 2025-11-29 07:12:18.480478558 +0000 UTC m=+0.174496045 container attach 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bdev(0x558772a0f400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs mount
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Git sha 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DB SUMMARY
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DB Session ID:  C8P7WFA1HWCH2VQQUX7A
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                     Options.env: 0x558771d423f0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                Options.info_log: 0x558771bf5c40
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.write_buffer_manager: 0x558772b0e460
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.row_cache: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                              Options.wal_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.wal_compression: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Compression algorithms supported:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZSTD supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bfe2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf44e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf44e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:           Options.merge_operator: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558771bf44e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558771beb4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.compression: LZ4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.num_levels: 7
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0a81e455-7da7-4cf4-9654-40318ff78c37
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338693459, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338730373, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400338, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0a81e455-7da7-4cf4-9654-40318ff78c37", "db_session_id": "C8P7WFA1HWCH2VQQUX7A", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338742819, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400338, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0a81e455-7da7-4cf4-9654-40318ff78c37", "db_session_id": "C8P7WFA1HWCH2VQQUX7A", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338746352, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400338, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0a81e455-7da7-4cf4-9654-40318ff78c37", "db_session_id": "C8P7WFA1HWCH2VQQUX7A", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400338748211, "job": 1, "event": "recovery_finished"}
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558771ca9c00
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: DB pointer 0x558772af7a00
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 07:12:18 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:12:18 compute-2 ceph-osd[79833]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:12:18 compute-2 ceph-osd[79833]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:12:18 compute-2 ceph-osd[79833]: _get_class not permitted to load lua
Nov 29 07:12:18 compute-2 ceph-osd[79833]: _get_class not permitted to load sdk
Nov 29 07:12:18 compute-2 ceph-osd[79833]: _get_class not permitted to load test_remote_reads
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 load_pgs
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 load_pgs opened 0 pgs
Nov 29 07:12:18 compute-2 ceph-osd[79833]: osd.2 0 log_to_monitors true
Nov 29 07:12:18 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2[79829]: 2025-11-29T07:12:18.781+0000 7fd0a2881740 -1 osd.2 0 log_to_monitors true
Nov 29 07:12:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 07:12:18 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 07:12:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 07:12:18 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: 2.c deep-scrub starts
Nov 29 07:12:18 compute-2 ceph-mon[77138]: 2.c deep-scrub ok
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: osdmap e38: 3 total, 2 up, 3 in
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3034131701' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:12:18 compute-2 ceph-mon[77138]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]: {
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:     "ebea8b7f-6a60-41f3-b580-d449bc0d4887": {
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:         "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:         "osd_id": 2,
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:         "osd_uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887",
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:         "type": "bluestore"
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]:     }
Nov 29 07:12:19 compute-2 wizardly_burnell[80067]: }
Nov 29 07:12:19 compute-2 systemd[1]: libpod-764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d.scope: Deactivated successfully.
Nov 29 07:12:19 compute-2 podman[80035]: 2025-11-29 07:12:19.397105364 +0000 UTC m=+1.091122841 container died 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:12:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-5dc123c8f5216f461818193f0501527135333553ca0f2ac27f23fa310c69f826-merged.mount: Deactivated successfully.
Nov 29 07:12:19 compute-2 podman[80035]: 2025-11-29 07:12:19.682940927 +0000 UTC m=+1.376958404 container remove 764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:12:19 compute-2 systemd[1]: libpod-conmon-764f1339971ab9f014e249d7c1dff406fbf40469f567f1fa1469e50dc5a06e9d.scope: Deactivated successfully.
Nov 29 07:12:19 compute-2 sudo[79921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:19 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:12:19 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:12:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 done with init, starting boot process
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 start_boot
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:12:19 compute-2 ceph-osd[79833]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 29 07:12:20 compute-2 ceph-mon[77138]: 2.15 scrub starts
Nov 29 07:12:20 compute-2 ceph-mon[77138]: 2.15 scrub ok
Nov 29 07:12:20 compute-2 ceph-mon[77138]: pgmap v124: 100 pgs: 2 peering, 62 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 07:12:20 compute-2 ceph-mon[77138]: osdmap e39: 3 total, 2 up, 3 in
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 07:12:20 compute-2 ceph-mon[77138]: osdmap e40: 3 total, 2 up, 3 in
Nov 29 07:12:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:20 compute-2 sudo[80497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:20 compute-2 sudo[80497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-2 sudo[80497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-2 sudo[80522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:12:20 compute-2 sudo[80522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-2 sudo[80522]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-2 sudo[80547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:20 compute-2 sudo[80547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-2 sudo[80547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-2 sudo[80572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:20 compute-2 sudo[80572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-2 sudo[80572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-2 sudo[80597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:20 compute-2 sudo[80597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-2 sudo[80597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-2 sudo[80622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:12:20 compute-2 sudo[80622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:21 compute-2 podman[80716]: 2025-11-29 07:12:21.145142659 +0000 UTC m=+0.322368272 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:12:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4208717030' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 07:12:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:21 compute-2 podman[80716]: 2025-11-29 07:12:21.466687584 +0000 UTC m=+0.643913197 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:12:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e41 e41: 3 total, 2 up, 3 in
Nov 29 07:12:22 compute-2 ceph-mon[77138]: purged_snaps scrub starts
Nov 29 07:12:22 compute-2 ceph-mon[77138]: purged_snaps scrub ok
Nov 29 07:12:22 compute-2 ceph-mon[77138]: pgmap v127: 146 pgs: 2 peering, 108 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:12:22 compute-2 ceph-mon[77138]: osdmap e41: 3 total, 2 up, 3 in
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1479633516' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 07:12:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:22 compute-2 sudo[80622]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-2 sudo[80803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:22 compute-2 sudo[80803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-2 sudo[80803]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-2 sudo[80828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:22 compute-2 sudo[80828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-2 sudo[80828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-2 sudo[80853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:22 compute-2 sudo[80853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-2 sudo[80853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-2 sudo[80878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:12:22 compute-2 sudo[80878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:23 compute-2 sudo[80878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:23 compute-2 sudo[80935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:23 compute-2 sudo[80935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:23 compute-2 sudo[80935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:23 compute-2 sudo[80960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:23 compute-2 sudo[80960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:23 compute-2 sudo[80960]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:23 compute-2 sudo[80985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:23 compute-2 sudo[80985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:23 compute-2 sudo[80985]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:23 compute-2 sudo[81010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:12:23 compute-2 sudo[81010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:24 compute-2 podman[81075]: 2025-11-29 07:12:24.001228627 +0000 UTC m=+0.030198000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:24 compute-2 podman[81075]: 2025-11-29 07:12:24.254178566 +0000 UTC m=+0.283147929 container create d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e42 e42: 3 total, 2 up, 3 in
Nov 29 07:12:24 compute-2 systemd[1]: Started libpod-conmon-d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c.scope.
Nov 29 07:12:24 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:24 compute-2 ceph-mon[77138]: 2.19 scrub starts
Nov 29 07:12:24 compute-2 ceph-mon[77138]: pgmap v129: 177 pgs: 2 peering, 139 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:25 compute-2 podman[81075]: 2025-11-29 07:12:25.271214039 +0000 UTC m=+1.300183422 container init d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:12:25 compute-2 ceph-mon[77138]: 2.19 scrub ok
Nov 29 07:12:25 compute-2 ceph-mon[77138]: 4.1 scrub starts
Nov 29 07:12:25 compute-2 ceph-mon[77138]: 4.1 scrub ok
Nov 29 07:12:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:25 compute-2 ceph-mon[77138]: osdmap e42: 3 total, 2 up, 3 in
Nov 29 07:12:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:25 compute-2 ceph-mon[77138]: pgmap v131: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:25 compute-2 podman[81075]: 2025-11-29 07:12:25.413469119 +0000 UTC m=+1.442438482 container start d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:12:25 compute-2 systemd[1]: libpod-d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c.scope: Deactivated successfully.
Nov 29 07:12:25 compute-2 goofy_euler[81092]: 167 167
Nov 29 07:12:25 compute-2 conmon[81092]: conmon d63f2b59c8120bb9ad13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c.scope/container/memory.events
Nov 29 07:12:25 compute-2 podman[81075]: 2025-11-29 07:12:25.478741391 +0000 UTC m=+1.507710774 container attach d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 07:12:25 compute-2 podman[81075]: 2025-11-29 07:12:25.479118213 +0000 UTC m=+1.508087596 container died d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-418668fb5fe31e2203153fcd0126a24fb3f61a179918acab2e8014cb59fd9d95-merged.mount: Deactivated successfully.
Nov 29 07:12:25 compute-2 podman[81075]: 2025-11-29 07:12:25.73041723 +0000 UTC m=+1.759386593 container remove d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:12:25 compute-2 systemd[1]: libpod-conmon-d63f2b59c8120bb9ad132162557ea4bb8d5a02a16f44ed154f0fbed30a6bf95c.scope: Deactivated successfully.
Nov 29 07:12:25 compute-2 podman[81118]: 2025-11-29 07:12:25.870356998 +0000 UTC m=+0.029629672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:26 compute-2 podman[81118]: 2025-11-29 07:12:26.011247826 +0000 UTC m=+0.170520470 container create ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:12:26 compute-2 systemd[1]: Started libpod-conmon-ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170.scope.
Nov 29 07:12:26 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34030b80df1fa5fef6447f00796bd99276c0e576d3599dd912c7e63d466947f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34030b80df1fa5fef6447f00796bd99276c0e576d3599dd912c7e63d466947f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34030b80df1fa5fef6447f00796bd99276c0e576d3599dd912c7e63d466947f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34030b80df1fa5fef6447f00796bd99276c0e576d3599dd912c7e63d466947f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-2 podman[81118]: 2025-11-29 07:12:26.235650808 +0000 UTC m=+0.394923482 container init ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:12:26 compute-2 podman[81118]: 2025-11-29 07:12:26.245372174 +0000 UTC m=+0.404644818 container start ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:12:26 compute-2 podman[81118]: 2025-11-29 07:12:26.340968708 +0000 UTC m=+0.500241452 container attach ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:12:26 compute-2 ceph-mon[77138]: 7.1 scrub starts
Nov 29 07:12:26 compute-2 ceph-mon[77138]: 7.1 scrub ok
Nov 29 07:12:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:27 compute-2 recursing_robinson[81134]: [
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:     {
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "available": false,
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "ceph_device": false,
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "lsm_data": {},
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "lvs": [],
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "path": "/dev/sr0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "rejected_reasons": [
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "Insufficient space (<5GB)",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "Has a FileSystem"
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         ],
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         "sys_api": {
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "actuators": null,
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "device_nodes": "sr0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "devname": "sr0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "human_readable_size": "482.00 KB",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "id_bus": "ata",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "model": "QEMU DVD-ROM",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "nr_requests": "2",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "parent": "/dev/sr0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "partitions": {},
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "path": "/dev/sr0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "removable": "1",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "rev": "2.5+",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "ro": "0",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "rotational": "1",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "sas_address": "",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "sas_device_handle": "",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "scheduler_mode": "mq-deadline",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "sectors": 0,
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "sectorsize": "2048",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "size": 493568.0,
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "support_discard": "2048",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "type": "disk",
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:             "vendor": "QEMU"
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:         }
Nov 29 07:12:27 compute-2 recursing_robinson[81134]:     }
Nov 29 07:12:27 compute-2 recursing_robinson[81134]: ]
Nov 29 07:12:27 compute-2 systemd[1]: libpod-ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170.scope: Deactivated successfully.
Nov 29 07:12:27 compute-2 systemd[1]: libpod-ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170.scope: Consumed 1.423s CPU time.
Nov 29 07:12:27 compute-2 podman[81118]: 2025-11-29 07:12:27.628194422 +0000 UTC m=+1.787467066 container died ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:12:27 compute-2 ceph-mon[77138]: pgmap v132: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:27 compute-2 ceph-mon[77138]: 7.2 scrub starts
Nov 29 07:12:27 compute-2 ceph-mon[77138]: 7.2 scrub ok
Nov 29 07:12:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-34030b80df1fa5fef6447f00796bd99276c0e576d3599dd912c7e63d466947f8-merged.mount: Deactivated successfully.
Nov 29 07:12:27 compute-2 podman[81118]: 2025-11-29 07:12:27.775302445 +0000 UTC m=+1.934575099 container remove ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:27 compute-2 systemd[1]: libpod-conmon-ab4c91d2c4c2b2113226357aecb3a3323e229b050794701c37e108e63e1af170.scope: Deactivated successfully.
Nov 29 07:12:27 compute-2 sudo[81010]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:27 compute-2 sudo[82226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-2 sudo[82226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-2 sudo[82226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:12:28 compute-2 sudo[82251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph
Nov 29 07:12:28 compute-2 sudo[82301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-2 sudo[82351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82351]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82376]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:28 compute-2 sudo[82401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-2 sudo[82451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-2 sudo[82524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 sudo[82549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-2 sudo[82549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-2 sudo[82549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-2 ceph-mon[77138]: 4.2 scrub starts
Nov 29 07:12:28 compute-2 ceph-mon[77138]: 4.2 scrub ok
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:12:28 compute-2 ceph-mon[77138]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 07:12:28 compute-2 ceph-mon[77138]: Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:28 compute-2 ceph-mon[77138]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:12:28 compute-2 ceph-mon[77138]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 07:12:28 compute-2 ceph-mon[77138]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 07:12:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:29 compute-2 sudo[82574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new
Nov 29 07:12:29 compute-2 sudo[82574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82599]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 07:12:29 compute-2 sudo[82624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82624]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:12:29 compute-2 sudo[82674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config
Nov 29 07:12:29 compute-2 sudo[82724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82749]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:12:29 compute-2 sudo[82774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82774]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:29 compute-2 sudo[82824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-2 sudo[82849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-2 sudo[82849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-2 sudo[82874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:12:30 compute-2 sudo[82874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[82874]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 sudo[82922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-2 sudo[82922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[82922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 sudo[82947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:12:30 compute-2 sudo[82947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[82947]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 ceph-mon[77138]: pgmap v133: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:30 compute-2 ceph-mon[77138]: Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:12:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:30 compute-2 sudo[82972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-2 sudo[82972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[82972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 sudo[82997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new
Nov 29 07:12:30 compute-2 sudo[82997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[82997]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 sudo[83022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-2 sudo[83022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[83022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-2 sudo[83047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-38a37ed2-442a-5e0d-a69a-881fdd186450/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf.new /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:12:30 compute-2 sudo[83047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-2 sudo[83047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-2 ceph-mon[77138]: Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:12:31 compute-2 ceph-mon[77138]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 07:12:31 compute-2 ceph-mon[77138]: 4.3 scrub starts
Nov 29 07:12:31 compute-2 ceph-mon[77138]: 4.3 scrub ok
Nov 29 07:12:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:31 compute-2 ceph-mon[77138]: pgmap v134: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:32 compute-2 sshd-session[71341]: Received disconnect from 38.102.83.151 port 35832:11: disconnected by user
Nov 29 07:12:32 compute-2 sshd-session[71341]: Disconnected from user zuul 38.102.83.151 port 35832
Nov 29 07:12:32 compute-2 sshd-session[71338]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:12:32 compute-2 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 07:12:32 compute-2 systemd[1]: session-19.scope: Consumed 8.693s CPU time.
Nov 29 07:12:32 compute-2 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Nov 29 07:12:32 compute-2 systemd-logind[787]: Removed session 19.
Nov 29 07:12:32 compute-2 ceph-mon[77138]: 4.4 scrub starts
Nov 29 07:12:32 compute-2 ceph-mon[77138]: 4.4 scrub ok
Nov 29 07:12:32 compute-2 ceph-mon[77138]: 7.3 scrub starts
Nov 29 07:12:32 compute-2 ceph-mon[77138]: 7.3 scrub ok
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:33 compute-2 ceph-mon[77138]: pgmap v135: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.964 iops: 4342.898 elapsed_sec: 0.691
Nov 29 07:12:34 compute-2 ceph-osd[79833]: log_channel(cluster) log [WRN] : OSD bench result of 4342.898351 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 0 waiting for initial osdmap
Nov 29 07:12:34 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2[79829]: 2025-11-29T07:12:34.071+0000 7fd09f018640 -1 osd.2 0 waiting for initial osdmap
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 40 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 40 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 40 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 40 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 42 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 42 set_numa_affinity not setting numa affinity
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 42 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 07:12:34 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-2[79829]: 2025-11-29T07:12:34.121+0000 7fd099e29640 -1 osd.2 42 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:12:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 43 state: booting -> active
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.2( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1a( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.14( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-mon[77138]: 4.5 scrub starts
Nov 29 07:12:34 compute-2 ceph-mon[77138]: 4.5 scrub ok
Nov 29 07:12:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.d( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.4( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.b( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.10( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.13( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.19( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1c( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-mon[77138]: OSD bench result of 4342.898351 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:12:36 compute-2 ceph-mon[77138]: 4.6 scrub starts
Nov 29 07:12:36 compute-2 ceph-mon[77138]: pgmap v136: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 07:12:36 compute-2 ceph-mon[77138]: 4.6 scrub ok
Nov 29 07:12:36 compute-2 ceph-mon[77138]: osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232] boot
Nov 29 07:12:36 compute-2 ceph-mon[77138]: osdmap e43: 3 total, 3 up, 3 in
Nov 29 07:12:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.0( empty local-lis/les=43/44 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.d( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.14( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1a( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.13( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.10( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:36 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 07:12:36 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 07:12:37 compute-2 sudo[83074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:37 compute-2 sudo[83074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-2 sudo[83074]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-2 sudo[83099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:37 compute-2 sudo[83099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-2 sudo[83099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-2 sudo[83124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:37 compute-2 sudo[83124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-2 sudo[83124]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-2 sudo[83149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:37 compute-2 sudo[83149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 4.7 deep-scrub starts
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 4.7 deep-scrub ok
Nov 29 07:12:37 compute-2 ceph-mon[77138]: osdmap e44: 3 total, 3 up, 3 in
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 7.4 scrub starts
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 7.4 scrub ok
Nov 29 07:12:37 compute-2 ceph-mon[77138]: pgmap v139: 177 pgs: 71 peering, 106 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 3.9 scrub starts
Nov 29 07:12:37 compute-2 ceph-mon[77138]: 3.9 scrub ok
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.016905186 +0000 UTC m=+0.032523472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.254897805 +0000 UTC m=+0.270516091 container create c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:12:38 compute-2 systemd[1]: Started libpod-conmon-c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd.scope.
Nov 29 07:12:38 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.386906304 +0000 UTC m=+0.402524580 container init c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.394993468 +0000 UTC m=+0.410611724 container start c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.398786038 +0000 UTC m=+0.414404314 container attach c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:38 compute-2 sad_jepsen[83232]: 167 167
Nov 29 07:12:38 compute-2 systemd[1]: libpod-c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd.scope: Deactivated successfully.
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.403441604 +0000 UTC m=+0.419059860 container died c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-3bbd29808346acfe1a5943493d5a50fc5929b211ce88f6f448b8409272f66138-merged.mount: Deactivated successfully.
Nov 29 07:12:38 compute-2 podman[83215]: 2025-11-29 07:12:38.456604155 +0000 UTC m=+0.472222411 container remove c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:12:38 compute-2 systemd[1]: Reloading.
Nov 29 07:12:38 compute-2 ceph-mon[77138]: Deploying daemon rgw.rgw.compute-2.tfmigt on compute-2
Nov 29 07:12:38 compute-2 ceph-mon[77138]: 4.8 scrub starts
Nov 29 07:12:38 compute-2 ceph-mon[77138]: 4.8 scrub ok
Nov 29 07:12:38 compute-2 systemd-rc-local-generator[83274]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:38 compute-2 systemd-sysv-generator[83280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:38 compute-2 systemd[1]: libpod-conmon-c80a219a73485b46a8383f859187ce554fcd8eb789d50c4aa4e7c04480c2d3bd.scope: Deactivated successfully.
Nov 29 07:12:38 compute-2 systemd[1]: Reloading.
Nov 29 07:12:38 compute-2 systemd-sysv-generator[83322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:38 compute-2 systemd-rc-local-generator[83318]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:39 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.tfmigt for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:12:39 compute-2 podman[83375]: 2025-11-29 07:12:39.501495923 +0000 UTC m=+0.059045757 container create 9f7687fa0fbf21897341e1eb8cec39ff0987ae73f3b9fde5a4ded0d3d8c2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-2-tfmigt, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:12:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cbc1312817461ec15217265ac7c4df01b174848945cb016bf7f6ffe9ed576/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cbc1312817461ec15217265ac7c4df01b174848945cb016bf7f6ffe9ed576/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cbc1312817461ec15217265ac7c4df01b174848945cb016bf7f6ffe9ed576/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cbc1312817461ec15217265ac7c4df01b174848945cb016bf7f6ffe9ed576/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.tfmigt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-2 podman[83375]: 2025-11-29 07:12:39.476214888 +0000 UTC m=+0.033764752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:39 compute-2 podman[83375]: 2025-11-29 07:12:39.571058709 +0000 UTC m=+0.128608543 container init 9f7687fa0fbf21897341e1eb8cec39ff0987ae73f3b9fde5a4ded0d3d8c2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-2-tfmigt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:12:39 compute-2 podman[83375]: 2025-11-29 07:12:39.576754348 +0000 UTC m=+0.134304162 container start 9f7687fa0fbf21897341e1eb8cec39ff0987ae73f3b9fde5a4ded0d3d8c2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-2-tfmigt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:39 compute-2 bash[83375]: 9f7687fa0fbf21897341e1eb8cec39ff0987ae73f3b9fde5a4ded0d3d8c2ec7d
Nov 29 07:12:39 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.tfmigt for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:12:39 compute-2 sudo[83149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:39 compute-2 radosgw[83394]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:12:39 compute-2 radosgw[83394]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 07:12:39 compute-2 radosgw[83394]: framework: beast
Nov 29 07:12:39 compute-2 radosgw[83394]: framework conf key: endpoint, val: 192.168.122.102:8082
Nov 29 07:12:39 compute-2 radosgw[83394]: init_numa not setting numa affinity
Nov 29 07:12:39 compute-2 ceph-mon[77138]: pgmap v140: 177 pgs: 71 peering, 106 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:12:39 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 07:12:39 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 07:12:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 07:12:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 07:12:40 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.952468872s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.848358154s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.953327179s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849311829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.19( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.953065872s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849102020s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.952883720s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.848999023s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.19( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.953003883s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849102020s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.953188896s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849311829s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.952241898s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.848358154s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.952778816s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.848999023s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951914787s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849273682s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951832771s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849273682s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951642036s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849246979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951606750s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849246979s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.17( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951222420s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849338531s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951382637s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849517822s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951374054s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849529266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951319695s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849517822s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.3( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951155663s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849372864s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951320648s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849529266s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.17( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951155663s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849338531s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.3( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951125145s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849372864s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951332092s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849689484s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.951291084s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849689484s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950805664s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849399567s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950767517s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849399567s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.d( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950756073s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849628448s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.d( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950556755s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849628448s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950213432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849636078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950214386s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849807739s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950184822s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849636078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950143814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849792480s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950071335s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849761963s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.a( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950144768s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849849701s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950047493s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849761963s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.a( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950118065s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849849701s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950029373s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849792480s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950020790s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849952698s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.14( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949916840s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849903107s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.14( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949868202s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849903107s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949930191s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849952698s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.950174332s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849807739s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949775696s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850013733s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949748039s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850013733s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.2( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949704170s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850032806s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949597359s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849956512s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.4( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949523926s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.849914551s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949565887s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849956512s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.2( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949660301s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850032806s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.4( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949491501s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.849914551s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949602127s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850090027s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949573517s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850090027s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949503899s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850044250s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949469566s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850044250s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949420929s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850048065s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949478149s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850162506s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949380875s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850048065s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949451447s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850162506s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949296951s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850116730s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949268341s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850116730s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949357033s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850215912s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949276924s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850261688s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949254036s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850269318s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949197769s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850215912s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949177742s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850215912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949233055s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850261688s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949164391s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850215912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949225426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850269318s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.b( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949005127s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850315094s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948987007s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850364685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.b( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948953629s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850315094s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948983192s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850364685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.949020386s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850418091s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948941231s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850364685s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948984146s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850418091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948968887s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850502014s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948859215s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850364685s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.c( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948945045s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850502014s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948782921s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850467682s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948760986s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850467682s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948674202s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850433350s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.f( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948576927s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850433350s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948581696s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850563049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948552132s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850563049s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.10( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948486328s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850601196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.10( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948454857s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850601196s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948346138s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850551605s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.13( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948353767s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850582123s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948416710s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850677490s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948316574s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850551605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=43/44 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948395729s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850677490s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.13( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948314667s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850582123s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948324203s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 33.850669861s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=43/44 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=11.948237419s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.850669861s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.16( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.14( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[7.1d( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: 5.f scrub starts
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: 5.f scrub ok
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: Deploying daemon rgw.rgw.compute-1.mdhebv on compute-1
Nov 29 07:12:40 compute-2 ceph-mon[77138]: 7.5 scrub starts
Nov 29 07:12:40 compute-2 ceph-mon[77138]: 7.5 scrub ok
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:12:40 compute-2 ceph-mon[77138]: osdmap e45: 3 total, 3 up, 3 in
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[4.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:12:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 07:12:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 07:12:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:12:41 compute-2 ceph-mon[77138]: pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:12:41 compute-2 ceph-mon[77138]: 5.4 scrub starts
Nov 29 07:12:41 compute-2 ceph-mon[77138]: 5.4 scrub ok
Nov 29 07:12:41 compute-2 ceph-mon[77138]: 7.7 deep-scrub starts
Nov 29 07:12:41 compute-2 ceph-mon[77138]: 7.7 deep-scrub ok
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:41 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 07:12:41 compute-2 ceph-mon[77138]: osdmap e46: 3 total, 3 up, 3 in
Nov 29 07:12:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 07:12:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 07:12:42 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:12:42 compute-2 ceph-mon[77138]: Deploying daemon rgw.rgw.compute-0.fvilij on compute-0
Nov 29 07:12:42 compute-2 ceph-mon[77138]: 7.c scrub starts
Nov 29 07:12:42 compute-2 ceph-mon[77138]: 7.c scrub ok
Nov 29 07:12:42 compute-2 ceph-mon[77138]: osdmap e47: 3 total, 3 up, 3 in
Nov 29 07:12:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:12:42 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:12:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:12:42 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:12:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:43 compute-2 sudo[83454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:43 compute-2 sudo[83454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:43 compute-2 sudo[83454]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:43 compute-2 sudo[83479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:43 compute-2 sudo[83479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:43 compute-2 sudo[83479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:43 compute-2 sudo[83504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:43 compute-2 sudo[83504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:43 compute-2 sudo[83504]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 07:12:43 compute-2 sudo[83529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:43 compute-2 sudo[83529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:43 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 29 07:12:43 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 29 07:12:43 compute-2 ceph-mon[77138]: pgmap v144: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:12:43 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:12:43 compute-2 ceph-mon[77138]: osdmap e48: 3 total, 3 up, 3 in
Nov 29 07:12:43 compute-2 ceph-mon[77138]: 3.1a scrub starts
Nov 29 07:12:43 compute-2 ceph-mon[77138]: 3.1a scrub ok
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.220386913 +0000 UTC m=+0.050943521 container create e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:12:44 compute-2 systemd[1]: Started libpod-conmon-e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d.scope.
Nov 29 07:12:44 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.19928811 +0000 UTC m=+0.029844739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.312257381 +0000 UTC m=+0.142814009 container init e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.320778149 +0000 UTC m=+0.151334757 container start e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:12:44 compute-2 lucid_knuth[83611]: 167 167
Nov 29 07:12:44 compute-2 systemd[1]: libpod-e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d.scope: Deactivated successfully.
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.395186807 +0000 UTC m=+0.225743445 container attach e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.397050375 +0000 UTC m=+0.227607033 container died e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:12:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-abdd1f57d774b2c48dc462a44e5b26114e957350c58d3aa10c97e96593a78149-merged.mount: Deactivated successfully.
Nov 29 07:12:44 compute-2 podman[83594]: 2025-11-29 07:12:44.464312259 +0000 UTC m=+0.294868867 container remove e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:12:44 compute-2 systemd[1]: libpod-conmon-e410e2c00bdba101d6b2bc2854373f5933e1bd839ff9bc2f4bc6175115e1fc0d.scope: Deactivated successfully.
Nov 29 07:12:44 compute-2 systemd[1]: Reloading.
Nov 29 07:12:44 compute-2 systemd-sysv-generator[83662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:44 compute-2 systemd-rc-local-generator[83657]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 07:12:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 07:12:44 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 systemd[1]: Reloading.
Nov 29 07:12:44 compute-2 ceph-mon[77138]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 07:12:44 compute-2 ceph-mon[77138]: Deploying daemon mds.cephfs.compute-2.fwjrvc on compute-2
Nov 29 07:12:44 compute-2 ceph-mon[77138]: osdmap e49: 3 total, 3 up, 3 in
Nov 29 07:12:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:12:44 compute-2 systemd-rc-local-generator[83697]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:44 compute-2 systemd-sysv-generator[83701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:45 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.fwjrvc for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:12:45 compute-2 podman[83753]: 2025-11-29 07:12:45.344267304 +0000 UTC m=+0.040113412 container create 314c24a3dfa31aec3f63ce5443120350913f3f1e80146f616220d67413e5e569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-2-fwjrvc, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f345230dfec54c0a18018318b51ddc468a959a1b1f4ae9ef59f295394bd52ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f345230dfec54c0a18018318b51ddc468a959a1b1f4ae9ef59f295394bd52ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f345230dfec54c0a18018318b51ddc468a959a1b1f4ae9ef59f295394bd52ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f345230dfec54c0a18018318b51ddc468a959a1b1f4ae9ef59f295394bd52ab/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.fwjrvc supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:45 compute-2 podman[83753]: 2025-11-29 07:12:45.403528496 +0000 UTC m=+0.099374614 container init 314c24a3dfa31aec3f63ce5443120350913f3f1e80146f616220d67413e5e569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-2-fwjrvc, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:12:45 compute-2 podman[83753]: 2025-11-29 07:12:45.409747442 +0000 UTC m=+0.105593530 container start 314c24a3dfa31aec3f63ce5443120350913f3f1e80146f616220d67413e5e569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-2-fwjrvc, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:12:45 compute-2 bash[83753]: 314c24a3dfa31aec3f63ce5443120350913f3f1e80146f616220d67413e5e569
Nov 29 07:12:45 compute-2 podman[83753]: 2025-11-29 07:12:45.326097223 +0000 UTC m=+0.021943351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:45 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.fwjrvc for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:12:45 compute-2 sudo[83529]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:45 compute-2 ceph-mds[83773]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:12:45 compute-2 ceph-mds[83773]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 07:12:45 compute-2 ceph-mds[83773]: main not setting numa affinity
Nov 29 07:12:45 compute-2 ceph-mds[83773]: pidfile_write: ignore empty --pid-file
Nov 29 07:12:45 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-2-fwjrvc[83769]: starting mds.cephfs.compute-2.fwjrvc at 
Nov 29 07:12:45 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Updating MDS map to version 2 from mon.1
Nov 29 07:12:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 07:12:45 compute-2 ceph-mon[77138]: pgmap v147: 179 pgs: 1 creating+peering, 178 active+clean; 451 KiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:12:45 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:12:45 compute-2 ceph-mon[77138]: osdmap e50: 3 total, 3 up, 3 in
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e3 new map
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:11:53.720209+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.fwjrvc{-1:24133} state up:standby seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Updating MDS map to version 3 from mon.1
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Monitors have assigned me to become a standby.
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e4 new map
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:46.514987+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:creating seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Updating MDS map to version 4 from mon.1
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x1
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x100
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x600
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x601
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x602
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x603
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x604
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x605
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x606
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x607
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x608
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.cache creating system inode with ino:0x609
Nov 29 07:12:46 compute-2 ceph-mds[83773]: mds.0.4 creating_done
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 07:12:46 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: Deploying daemon mds.cephfs.compute-0.msknqt on compute-0
Nov 29 07:12:46 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:boot
Nov 29 07:12:46 compute-2 ceph-mon[77138]: daemon mds.cephfs.compute-2.fwjrvc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 07:12:46 compute-2 ceph-mon[77138]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 07:12:46 compute-2 ceph-mon[77138]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 07:12:46 compute-2 ceph-mon[77138]: Cluster is now healthy
Nov 29 07:12:46 compute-2 ceph-mon[77138]: fsmap cephfs:0 1 up:standby
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.fwjrvc"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:creating}
Nov 29 07:12:46 compute-2 ceph-mon[77138]: daemon mds.cephfs.compute-2.fwjrvc is now active in filesystem cephfs as rank 0
Nov 29 07:12:46 compute-2 ceph-mon[77138]: osdmap e51: 3 total, 3 up, 3 in
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:46 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e5 new map
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:47.524985+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:47 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Updating MDS map to version 5 from mon.1
Nov 29 07:12:47 compute-2 ceph-mds[83773]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:12:47 compute-2 ceph-mds[83773]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 07:12:47 compute-2 ceph-mds[83773]: mds.0.4 recovery_done -- successful recovery!
Nov 29 07:12:47 compute-2 ceph-mds[83773]: mds.0.4 active_start
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e6 new map
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:47.524985+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 07:12:47 compute-2 ceph-mon[77138]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:47 compute-2 ceph-mon[77138]: pgmap v150: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:47 compute-2 ceph-mon[77138]: 7.d scrub starts
Nov 29 07:12:47 compute-2 ceph-mon[77138]: 7.d scrub ok
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:active
Nov 29 07:12:47 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] up:boot
Nov 29 07:12:47 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 1 up:standby
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.msknqt"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:47 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 1 up:standby
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:12:47 compute-2 ceph-mon[77138]: osdmap e52: 3 total, 3 up, 3 in
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:47 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:12:48 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 07:12:48 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 07:12:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 07:12:48 compute-2 ceph-mon[77138]: Deploying daemon mds.cephfs.compute-1.oeerwd on compute-1
Nov 29 07:12:48 compute-2 ceph-mon[77138]: 7.12 scrub starts
Nov 29 07:12:48 compute-2 ceph-mon[77138]: 7.12 scrub ok
Nov 29 07:12:48 compute-2 ceph-mon[77138]: 2.1b scrub starts
Nov 29 07:12:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:12:48 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:12:48 compute-2 ceph-mon[77138]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:12:48 compute-2 ceph-mon[77138]: osdmap e53: 3 total, 3 up, 3 in
Nov 29 07:12:49 compute-2 radosgw[83394]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:12:49 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-2-tfmigt[83390]: 2025-11-29T07:12:49.034+0000 7f56c4b6a940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:12:49 compute-2 radosgw[83394]: framework: beast
Nov 29 07:12:49 compute-2 radosgw[83394]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 07:12:49 compute-2 radosgw[83394]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 07:12:49 compute-2 radosgw[83394]: starting handler: beast
Nov 29 07:12:49 compute-2 radosgw[83394]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:12:49 compute-2 radosgw[83394]: mgrc service_daemon_register rgw.24139 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.tfmigt,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=566f71d6-80f0-4888-8471-3c4b61b17fae,zone_name=default,zonegroup_id=52a2d801-fd4c-4d81-9622-166900f04f3d,zonegroup_name=default}
Nov 29 07:12:49 compute-2 ceph-mon[77138]: pgmap v153: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 12 op/s
Nov 29 07:12:49 compute-2 ceph-mon[77138]: 2.1b scrub ok
Nov 29 07:12:49 compute-2 ceph-mon[77138]: 4.b deep-scrub starts
Nov 29 07:12:49 compute-2 ceph-mon[77138]: 4.b deep-scrub ok
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:50 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 29 07:12:50 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 29 07:12:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e7 new map
Nov 29 07:12:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:50.793764+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:50 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc Updating MDS map to version 7 from mon.1
Nov 29 07:12:50 compute-2 ceph-mon[77138]: Deploying daemon haproxy.rgw.default.compute-0.aoijdn on compute-0
Nov 29 07:12:50 compute-2 ceph-mon[77138]: 5.12 scrub starts
Nov 29 07:12:50 compute-2 ceph-mon[77138]: 5.12 scrub ok
Nov 29 07:12:50 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] up:boot
Nov 29 07:12:50 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:active
Nov 29 07:12:50 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:12:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.oeerwd"}]: dispatch
Nov 29 07:12:51 compute-2 ceph-mds[83773]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 29 07:12:51 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-2-fwjrvc[83769]: 2025-11-29T07:12:51.534+0000 7f7488d1e640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 29 07:12:51 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Nov 29 07:12:51 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Nov 29 07:12:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e8 new map
Nov 29 07:12:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:50.793764+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:51 compute-2 ceph-mon[77138]: pgmap v155: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.5 KiB/s wr, 14 op/s
Nov 29 07:12:51 compute-2 ceph-mon[77138]: 3.11 deep-scrub starts
Nov 29 07:12:51 compute-2 ceph-mon[77138]: 3.11 deep-scrub ok
Nov 29 07:12:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:53 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] up:standby
Nov 29 07:12:53 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:12:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:53 compute-2 ceph-mon[77138]: 7.15 scrub starts
Nov 29 07:12:53 compute-2 ceph-mon[77138]: 7.15 scrub ok
Nov 29 07:12:54 compute-2 ceph-mon[77138]: 4.f scrub starts
Nov 29 07:12:54 compute-2 ceph-mon[77138]: 4.f scrub ok
Nov 29 07:12:54 compute-2 ceph-mon[77138]: pgmap v156: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 2.7 KiB/s wr, 11 op/s
Nov 29 07:12:54 compute-2 ceph-mon[77138]: 4.10 scrub starts
Nov 29 07:12:54 compute-2 ceph-mon[77138]: 4.10 scrub ok
Nov 29 07:12:54 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 07:12:54 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 07:12:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e9 new map
Nov 29 07:12:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:11:53.720139+0000
                                           modified        2025-11-29T07:12:50.793764+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24133}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:12:54 compute-2 sudo[84357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:54 compute-2 sudo[84357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:54 compute-2 sudo[84357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:55 compute-2 sudo[84382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:55 compute-2 sudo[84382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:55 compute-2 sudo[84382]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:55 compute-2 sudo[84407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:55 compute-2 sudo[84407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:55 compute-2 sudo[84407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:55 compute-2 sudo[84432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:12:55 compute-2 sudo[84432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:55 compute-2 ceph-mon[77138]: pgmap v157: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 215 KiB/s rd, 6.3 KiB/s wr, 394 op/s
Nov 29 07:12:55 compute-2 ceph-mon[77138]: 7.17 deep-scrub starts
Nov 29 07:12:55 compute-2 ceph-mon[77138]: 5.b scrub starts
Nov 29 07:12:55 compute-2 ceph-mon[77138]: 5.b scrub ok
Nov 29 07:12:55 compute-2 ceph-mon[77138]: mds.? [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] up:standby
Nov 29 07:12:55 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:12:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:55 compute-2 ceph-mon[77138]: Deploying daemon haproxy.rgw.default.compute-2.goeiuk on compute-2
Nov 29 07:12:55 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Nov 29 07:12:55 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Nov 29 07:12:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:12:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.003000103s ======
Nov 29 07:12:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000103s
Nov 29 07:12:56 compute-2 ceph-mon[77138]: 7.17 deep-scrub ok
Nov 29 07:12:56 compute-2 ceph-mon[77138]: 3.0 scrub starts
Nov 29 07:12:56 compute-2 ceph-mon[77138]: 3.0 scrub ok
Nov 29 07:12:57 compute-2 ceph-mon[77138]: 4.11 scrub starts
Nov 29 07:12:57 compute-2 ceph-mon[77138]: 4.11 scrub ok
Nov 29 07:12:57 compute-2 ceph-mon[77138]: pgmap v158: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 191 KiB/s rd, 5.6 KiB/s wr, 349 op/s
Nov 29 07:12:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:12:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:12:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:12:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:12:58 compute-2 ceph-mon[77138]: 4.12 scrub starts
Nov 29 07:12:58 compute-2 ceph-mon[77138]: 4.12 scrub ok
Nov 29 07:12:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Nov 29 07:12:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Nov 29 07:13:00 compute-2 ceph-mon[77138]: pgmap v159: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 162 KiB/s rd, 4.1 KiB/s wr, 294 op/s
Nov 29 07:13:00 compute-2 ceph-mon[77138]: 5.0 deep-scrub starts
Nov 29 07:13:00 compute-2 ceph-mon[77138]: 5.0 deep-scrub ok
Nov 29 07:13:00 compute-2 ceph-mon[77138]: 7.19 scrub starts
Nov 29 07:13:00 compute-2 ceph-mon[77138]: 7.19 scrub ok
Nov 29 07:13:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:00.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.415657543 +0000 UTC m=+4.823680091 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.513646618 +0000 UTC m=+4.921669146 container create 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 systemd[1]: Started libpod-conmon-101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209.scope.
Nov 29 07:13:00 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.651561626 +0000 UTC m=+5.059584234 container init 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.666028657 +0000 UTC m=+5.074051185 container start 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.669712942 +0000 UTC m=+5.077735570 container attach 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 interesting_blackwell[84616]: 0 0
Nov 29 07:13:00 compute-2 systemd[1]: libpod-101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209.scope: Deactivated successfully.
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.67673872 +0000 UTC m=+5.084761258 container died 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-5de2cccaf83e61228b2cbbf4d15c33b6d6d6a32d26db6f67c897a9b802277789-merged.mount: Deactivated successfully.
Nov 29 07:13:00 compute-2 podman[84499]: 2025-11-29 07:13:00.723137515 +0000 UTC m=+5.131160063 container remove 101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209 (image=quay.io/ceph/haproxy:2.3, name=interesting_blackwell)
Nov 29 07:13:00 compute-2 systemd[1]: libpod-conmon-101957261a1943b933f8290d6b11a5d2bcf06dc6404a38ac604e523626125209.scope: Deactivated successfully.
Nov 29 07:13:00 compute-2 systemd[1]: Reloading.
Nov 29 07:13:01 compute-2 systemd-rc-local-generator[84662]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:01 compute-2 systemd-sysv-generator[84665]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:01 compute-2 ceph-mon[77138]: pgmap v160: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 3.5 KiB/s wr, 254 op/s
Nov 29 07:13:01 compute-2 systemd[1]: Reloading.
Nov 29 07:13:01 compute-2 systemd-rc-local-generator[84704]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:01 compute-2 systemd-sysv-generator[84708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:01 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.goeiuk for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:13:01 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 29 07:13:01 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 29 07:13:01 compute-2 podman[84760]: 2025-11-29 07:13:01.853176433 +0000 UTC m=+0.094476346 container create 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:01 compute-2 podman[84760]: 2025-11-29 07:13:01.785097603 +0000 UTC m=+0.026397536 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 07:13:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97177ac668e2ee480b9407d54da8f79c9e70eccf04d3d6cadfd7ca3c0975cebd/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:01 compute-2 podman[84760]: 2025-11-29 07:13:01.929000095 +0000 UTC m=+0.170300058 container init 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:01 compute-2 podman[84760]: 2025-11-29 07:13:01.935114673 +0000 UTC m=+0.176414586 container start 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:01 compute-2 bash[84760]: 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d
Nov 29 07:13:01 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.goeiuk for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:13:01 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk[84775]: [NOTICE] 332/071301 (2) : New worker #1 (4) forked
Nov 29 07:13:02 compute-2 sudo[84432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:02 compute-2 sudo[84789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:02 compute-2 sudo[84789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:02 compute-2 sudo[84789]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:02 compute-2 ceph-mon[77138]: 4.16 scrub starts
Nov 29 07:13:02 compute-2 ceph-mon[77138]: 4.16 scrub ok
Nov 29 07:13:02 compute-2 ceph-mon[77138]: 5.d scrub starts
Nov 29 07:13:02 compute-2 ceph-mon[77138]: 5.d scrub ok
Nov 29 07:13:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:02 compute-2 sudo[84814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:02 compute-2 sudo[84814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:02 compute-2 sudo[84814]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:02 compute-2 sudo[84839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:02 compute-2 sudo[84839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:02 compute-2 sudo[84839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:02 compute-2 sudo[84864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:13:02 compute-2 sudo[84864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 07:13:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 07:13:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 4.17 scrub starts
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 4.17 scrub ok
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 07:13:03 compute-2 ceph-mon[77138]: Deploying daemon keepalived.rgw.default.compute-2.gecapa on compute-2
Nov 29 07:13:03 compute-2 ceph-mon[77138]: pgmap v161: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 KiB/s rd, 2.7 KiB/s wr, 244 op/s
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 5.e scrub starts
Nov 29 07:13:03 compute-2 ceph-mon[77138]: 5.e scrub ok
Nov 29 07:13:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:03.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:04.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:04 compute-2 ceph-mon[77138]: 7.1a deep-scrub starts
Nov 29 07:13:04 compute-2 ceph-mon[77138]: 7.1a deep-scrub ok
Nov 29 07:13:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:05 compute-2 ceph-mon[77138]: 4.1e scrub starts
Nov 29 07:13:05 compute-2 ceph-mon[77138]: 4.1e scrub ok
Nov 29 07:13:05 compute-2 ceph-mon[77138]: pgmap v162: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 KiB/s rd, 2.7 KiB/s wr, 244 op/s
Nov 29 07:13:05 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 07:13:05 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 07:13:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:06.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:06 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 07:13:06 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 07:13:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:07 compute-2 ceph-mon[77138]: 7.1c scrub starts
Nov 29 07:13:07 compute-2 ceph-mon[77138]: 7.1c scrub ok
Nov 29 07:13:07 compute-2 ceph-mon[77138]: 3.e scrub starts
Nov 29 07:13:07 compute-2 ceph-mon[77138]: 3.e scrub ok
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.766481361 +0000 UTC m=+4.997800900 container create abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.expose-services=, name=keepalived, release=1793, vcs-type=git)
Nov 29 07:13:07 compute-2 systemd[1]: Started libpod-conmon-abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38.scope.
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.751916597 +0000 UTC m=+4.983236156 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 07:13:07 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:13:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.856859418 +0000 UTC m=+5.088178957 container init abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.866688041 +0000 UTC m=+5.098007580 container start abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, release=1793, version=2.2.4, io.openshift.expose-services=, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64)
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.8702062 +0000 UTC m=+5.101525749 container attach abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Nov 29 07:13:07 compute-2 ecstatic_bassi[85024]: 0 0
Nov 29 07:13:07 compute-2 systemd[1]: libpod-abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38.scope: Deactivated successfully.
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.875033954 +0000 UTC m=+5.106353493 container died abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 07:13:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-8af57ceb331e9e7d78d2ec6a5ebc6f9b94cbdb16b806d0e2b386303dd7e7d18f-merged.mount: Deactivated successfully.
Nov 29 07:13:07 compute-2 podman[84929]: 2025-11-29 07:13:07.918603582 +0000 UTC m=+5.149923121 container remove abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_bassi, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public)
Nov 29 07:13:07 compute-2 systemd[1]: libpod-conmon-abbaaf2360f76be7e153eed4c96cd1fda456f0d8aaa349aa44542f3338a6ce38.scope: Deactivated successfully.
Nov 29 07:13:07 compute-2 systemd[1]: Reloading.
Nov 29 07:13:08 compute-2 systemd-rc-local-generator[85071]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:08 compute-2 systemd-sysv-generator[85074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:08.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:08 compute-2 systemd[1]: Reloading.
Nov 29 07:13:08 compute-2 systemd-rc-local-generator[85114]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:08 compute-2 systemd-sysv-generator[85117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:08 compute-2 ceph-mon[77138]: pgmap v163: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:08 compute-2 ceph-mon[77138]: 5.8 scrub starts
Nov 29 07:13:08 compute-2 ceph-mon[77138]: 5.8 scrub ok
Nov 29 07:13:08 compute-2 ceph-mon[77138]: 6.4 scrub starts
Nov 29 07:13:08 compute-2 ceph-mon[77138]: 6.4 scrub ok
Nov 29 07:13:08 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.gecapa for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 07:13:08 compute-2 podman[85171]: 2025-11-29 07:13:08.810676757 +0000 UTC m=+0.044935136 container create bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 07:13:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a60b4f5c975583ed71f6d57f7a6a30b13bb6d8c754feee01764aec985adcbc/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-2 podman[85171]: 2025-11-29 07:13:08.874988739 +0000 UTC m=+0.109247138 container init bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Nov 29 07:13:08 compute-2 podman[85171]: 2025-11-29 07:13:08.881336464 +0000 UTC m=+0.115594843 container start bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, release=1793, distribution-scope=public, description=keepalived for Ceph)
Nov 29 07:13:08 compute-2 bash[85171]: bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed
Nov 29 07:13:08 compute-2 podman[85171]: 2025-11-29 07:13:08.79101807 +0000 UTC m=+0.025276469 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 07:13:08 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.gecapa for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Starting VRRP child process, pid=4
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: Startup complete
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 07:13:08 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:08 2025: VRRP_Script(check_backend) succeeded
Nov 29 07:13:08 compute-2 sudo[84864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:09 compute-2 ceph-mon[77138]: pgmap v164: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 5.1b scrub starts
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 5.1b scrub ok
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 6.6 scrub starts
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 6.6 scrub ok
Nov 29 07:13:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 07:13:09 compute-2 ceph-mon[77138]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 07:13:09 compute-2 ceph-mon[77138]: Deploying daemon keepalived.rgw.default.compute-0.uxbosd on compute-0
Nov 29 07:13:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 29 07:13:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 29 07:13:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:11.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:12 compute-2 ceph-mon[77138]: 6.9 scrub starts
Nov 29 07:13:12 compute-2 ceph-mon[77138]: 6.9 scrub ok
Nov 29 07:13:12 compute-2 ceph-mon[77138]: 3.1c scrub starts
Nov 29 07:13:12 compute-2 ceph-mon[77138]: 3.1c scrub ok
Nov 29 07:13:12 compute-2 ceph-mon[77138]: 3.15 scrub starts
Nov 29 07:13:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:12 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:12 2025: (VI_0) Entering MASTER STATE
Nov 29 07:13:12 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 07:13:12 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 07:13:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:13 compute-2 ceph-mon[77138]: pgmap v165: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:13 compute-2 ceph-mon[77138]: 3.15 scrub ok
Nov 29 07:13:13 compute-2 ceph-mon[77138]: 5.1f scrub starts
Nov 29 07:13:13 compute-2 ceph-mon[77138]: 5.1f scrub ok
Nov 29 07:13:13 compute-2 ceph-mon[77138]: pgmap v166: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:13 compute-2 ceph-mon[77138]: 5.13 scrub starts
Nov 29 07:13:13 compute-2 ceph-mon[77138]: 5.13 scrub ok
Nov 29 07:13:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:13.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:14 compute-2 sudo[85199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-2 sudo[85199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85199]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 ceph-mon[77138]: 6.b scrub starts
Nov 29 07:13:14 compute-2 ceph-mon[77138]: 6.b scrub ok
Nov 29 07:13:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:14 compute-2 sudo[85224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-2 sudo[85224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:14.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:14 compute-2 sudo[85249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-2 sudo[85249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 sudo[85274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:13:14 compute-2 sudo[85274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 sudo[85299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-2 sudo[85299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 sudo[85324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:14 compute-2 sudo[85324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85324]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 sudo[85349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-2 sudo[85349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-2 sudo[85349]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-2 sudo[85374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:13:14 compute-2 sudo[85374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:15 compute-2 ceph-mon[77138]: pgmap v167: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 07:13:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:15 compute-2 podman[85473]: 2025-11-29 07:13:15.438008728 +0000 UTC m=+0.257315831 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:13:15 compute-2 podman[85494]: 2025-11-29 07:13:15.630598762 +0000 UTC m=+0.065237515 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:13:15 compute-2 podman[85473]: 2025-11-29 07:13:15.655589269 +0000 UTC m=+0.474896332 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:13:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:16 compute-2 podman[85624]: 2025-11-29 07:13:16.368006629 +0000 UTC m=+0.053017270 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:16 compute-2 podman[85624]: 2025-11-29 07:13:16.380786083 +0000 UTC m=+0.065796704 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:16 compute-2 podman[85688]: 2025-11-29 07:13:16.622249635 +0000 UTC m=+0.055749942 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, description=keepalived for Ceph, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc.)
Nov 29 07:13:16 compute-2 podman[85688]: 2025-11-29 07:13:16.636597071 +0000 UTC m=+0.070097368 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 07:13:16 compute-2 sudo[85374]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:17.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:17 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:17 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 29 07:13:17 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa[85188]: Sat Nov 29 07:13:17 2025: (VI_0) Entering BACKUP STATE
Nov 29 07:13:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 07:13:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 07:13:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:18 compute-2 ceph-mon[77138]: pgmap v168: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 07:13:18 compute-2 ceph-mon[77138]: 5.11 scrub starts
Nov 29 07:13:18 compute-2 ceph-mon[77138]: 5.11 scrub ok
Nov 29 07:13:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:18 compute-2 ceph-mon[77138]: 5.1a scrub starts
Nov 29 07:13:18 compute-2 ceph-mon[77138]: 5.1a scrub ok
Nov 29 07:13:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 07:13:19 compute-2 ceph-mon[77138]: 6.c deep-scrub starts
Nov 29 07:13:19 compute-2 ceph-mon[77138]: 6.c deep-scrub ok
Nov 29 07:13:19 compute-2 ceph-mon[77138]: pgmap v169: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 07:13:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:13:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:19.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:20.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 07:13:20 compute-2 ceph-mon[77138]: 6.f scrub starts
Nov 29 07:13:20 compute-2 ceph-mon[77138]: 6.f scrub ok
Nov 29 07:13:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:13:20 compute-2 ceph-mon[77138]: osdmap e54: 3 total, 3 up, 3 in
Nov 29 07:13:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:13:20 compute-2 ceph-mon[77138]: 5.18 scrub starts
Nov 29 07:13:20 compute-2 ceph-mon[77138]: 5.18 scrub ok
Nov 29 07:13:20 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 07:13:20 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 07:13:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 07:13:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:13:21 compute-2 ceph-mon[77138]: osdmap e55: 3 total, 3 up, 3 in
Nov 29 07:13:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:13:21 compute-2 ceph-mon[77138]: pgmap v172: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 29 07:13:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:21 compute-2 ceph-mon[77138]: 3.1b scrub starts
Nov 29 07:13:21 compute-2 ceph-mon[77138]: 3.1b scrub ok
Nov 29 07:13:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 07:13:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:13:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:13:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:13:22 compute-2 ceph-mon[77138]: osdmap e56: 3 total, 3 up, 3 in
Nov 29 07:13:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:13:22 compute-2 ceph-mon[77138]: 3.3 scrub starts
Nov 29 07:13:22 compute-2 ceph-mon[77138]: 3.3 scrub ok
Nov 29 07:13:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:23 compute-2 ceph-mon[77138]: pgmap v174: 243 pgs: 62 unknown, 1 active+clean+scrubbing, 180 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:13:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:13:23 compute-2 ceph-mon[77138]: osdmap e57: 3 total, 3 up, 3 in
Nov 29 07:13:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 07:13:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:24.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:24 compute-2 sudo[85726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:24 compute-2 sudo[85726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-2 sudo[85726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-2 sudo[85751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:13:24 compute-2 sudo[85751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-2 sudo[85751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-2 ceph-mon[77138]: osdmap e58: 3 total, 3 up, 3 in
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:24 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 07:13:24 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 07:13:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.5( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.3( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.11( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.2( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.15( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[8.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.3( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.4( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.1( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 59 pg[10.11( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:25 compute-2 ceph-mon[77138]: pgmap v177: 274 pgs: 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:25 compute-2 ceph-mon[77138]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 07:13:25 compute-2 ceph-mon[77138]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:13:25 compute-2 ceph-mon[77138]: 3.8 scrub starts
Nov 29 07:13:25 compute-2 ceph-mon[77138]: 3.8 scrub ok
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:13:25 compute-2 ceph-mon[77138]: osdmap e59: 3 total, 3 up, 3 in
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:13:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.10( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.12( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.1e( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.1c( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.b( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.c( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.16( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.6( v 46'8 (0'0,46'8] local-lis/les=59/60 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.15( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.3( v 58'99 lc 53'84 (0'0,58'99] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=58'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.2( v 46'8 (0'0,46'8] local-lis/les=59/60 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.f( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.11( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.1f( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.a( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.9( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.1( v 53'96 (0'0,53'96] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.d( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.3( v 46'8 (0'0,46'8] local-lis/les=59/60 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.f( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[10.4( v 53'96 (0'0,53'96] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.11( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 60 pg[8.5( v 46'8 (0'0,46'8] local-lis/les=59/60 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:26.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:27 compute-2 ceph-mon[77138]: Reconfiguring mgr.compute-0.rotard (monmap changed)...
Nov 29 07:13:27 compute-2 ceph-mon[77138]: Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 07:13:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:27 compute-2 ceph-mon[77138]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 07:13:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:13:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:27 compute-2 ceph-mon[77138]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 07:13:27 compute-2 ceph-mon[77138]: osdmap e60: 3 total, 3 up, 3 in
Nov 29 07:13:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:27.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:28.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:29.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:29 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 29 07:13:29 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 29 07:13:29 compute-2 ceph-mon[77138]: pgmap v180: 305 pgs: 31 unknown, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:29 compute-2 ceph-mon[77138]: Reconfiguring osd.0 (monmap changed)...
Nov 29 07:13:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 07:13:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:29 compute-2 ceph-mon[77138]: Reconfiguring daemon osd.0 on compute-0
Nov 29 07:13:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.16( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.13( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:30.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.14 scrub starts
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.14 scrub ok
Nov 29 07:13:31 compute-2 ceph-mon[77138]: pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 7.6 scrub starts
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.1d scrub starts
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.1d scrub ok
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 7.6 scrub ok
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:13:31 compute-2 ceph-mon[77138]: osdmap e61: 3 total, 3 up, 3 in
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.a scrub starts
Nov 29 07:13:31 compute-2 ceph-mon[77138]: 3.a scrub ok
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:13:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Nov 29 07:13:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Nov 29 07:13:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.17( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.16( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.3( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.e( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.13( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.a( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.19( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:31 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[11.8( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:32 compute-2 ceph-mon[77138]: pgmap v183: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 446 B/s, 1 objects/s recovering
Nov 29 07:13:32 compute-2 ceph-mon[77138]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 07:13:32 compute-2 ceph-mon[77138]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 07:13:32 compute-2 ceph-mon[77138]: 7.e scrub starts
Nov 29 07:13:32 compute-2 ceph-mon[77138]: 7.e scrub ok
Nov 29 07:13:32 compute-2 ceph-mon[77138]: 7.a deep-scrub starts
Nov 29 07:13:32 compute-2 ceph-mon[77138]: 7.a deep-scrub ok
Nov 29 07:13:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:13:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:32 compute-2 ceph-mon[77138]: osdmap e62: 3 total, 3 up, 3 in
Nov 29 07:13:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 07:13:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:32 compute-2 sshd-session[85780]: Accepted publickey for zuul from 192.168.122.30 port 55438 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:32 compute-2 systemd-logind[787]: New session 33 of user zuul.
Nov 29 07:13:32 compute-2 systemd[1]: Started Session 33 of User zuul.
Nov 29 07:13:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:32.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:32 compute-2 sshd-session[85780]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:13:32 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 07:13:32 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 07:13:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 07:13:33 compute-2 ceph-mon[77138]: Reconfiguring osd.1 (monmap changed)...
Nov 29 07:13:33 compute-2 ceph-mon[77138]: Reconfiguring daemon osd.1 on compute-1
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:13:33 compute-2 ceph-mon[77138]: 7.11 scrub starts
Nov 29 07:13:33 compute-2 ceph-mon[77138]: 7.11 scrub ok
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:13:33 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:33 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:33.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:33 compute-2 python3.9[85934]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:13:33 compute-2 sudo[85952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:33 compute-2 sudo[85952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-2 sudo[85952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-2 sudo[85982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:33 compute-2 sudo[85982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-2 sudo[85982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-2 sudo[86007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:33 compute-2 sudo[86007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-2 sudo[86007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 sudo[86033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:13:34 compute-2 sudo[86033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 07:13:34 compute-2 ceph-mon[77138]: pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 430 B/s, 1 objects/s recovering
Nov 29 07:13:34 compute-2 ceph-mon[77138]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 07:13:34 compute-2 ceph-mon[77138]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:13:34 compute-2 ceph-mon[77138]: osdmap e63: 3 total, 3 up, 3 in
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:34 compute-2 ceph-mon[77138]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:13:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:34 compute-2 ceph-mon[77138]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 07:13:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:34 compute-2 sudo[86105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-2 sudo[86105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 sudo[86105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.306654694 +0000 UTC m=+0.053651351 container create eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:13:34 compute-2 systemd[1]: Started libpod-conmon-eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9.scope.
Nov 29 07:13:34 compute-2 sudo[86151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-2 sudo[86151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 sudo[86151]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.279757541 +0000 UTC m=+0.026754198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.391978348 +0000 UTC m=+0.138975025 container init eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.400019171 +0000 UTC m=+0.147015818 container start eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.403541411 +0000 UTC m=+0.150538088 container attach eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:13:34 compute-2 crazy_ardinghelli[86177]: 167 167
Nov 29 07:13:34 compute-2 systemd[1]: libpod-eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9.scope: Deactivated successfully.
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.40910134 +0000 UTC m=+0.156098017 container died eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-103336cd2c76beeb71ad33b2ba9309dffa9465f2827f8dd4e8ea59fa25a5a56f-merged.mount: Deactivated successfully.
Nov 29 07:13:34 compute-2 podman[86113]: 2025-11-29 07:13:34.618276716 +0000 UTC m=+0.365273373 container remove eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:13:34 compute-2 systemd[1]: libpod-conmon-eecb65bc3c3994ff4b520557dbc2754041a53280005b52a9b62a79d5b43000b9.scope: Deactivated successfully.
Nov 29 07:13:34 compute-2 sudo[86033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 sudo[86271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-2 sudo[86271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 sudo[86271]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 65 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:34 compute-2 sudo[86319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:34 compute-2 sudo[86319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 sudo[86319]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 sudo[86368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-2 sudo[86368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-2 sudo[86368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-2 sudo[86419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntgzhxncfrpqywmzeobkmcnrgunzjcyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400414.5877402-64-22648628291693/AnsiballZ_command.py'
Nov 29 07:13:34 compute-2 sudo[86419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:35 compute-2 sudo[86421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 07:13:35 compute-2 sudo[86421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-2 python3.9[86426]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:35 compute-2 ceph-mon[77138]: osdmap e64: 3 total, 3 up, 3 in
Nov 29 07:13:35 compute-2 ceph-mon[77138]: 3.d scrub starts
Nov 29 07:13:35 compute-2 ceph-mon[77138]: 3.d scrub ok
Nov 29 07:13:35 compute-2 ceph-mon[77138]: pgmap v188: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:35 compute-2 ceph-mon[77138]: Reconfiguring mgr.compute-2.vyxqrz (monmap changed)...
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:35 compute-2 ceph-mon[77138]: Reconfiguring daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 07:13:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:13:35 compute-2 ceph-mon[77138]: osdmap e65: 3 total, 3 up, 3 in
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.31458992 +0000 UTC m=+0.060878076 container create 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:35 compute-2 systemd[1]: Started libpod-conmon-9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c.scope.
Nov 29 07:13:35 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.286519528 +0000 UTC m=+0.032807704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.395557427 +0000 UTC m=+0.141845603 container init 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.406516529 +0000 UTC m=+0.152804725 container start 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.411176526 +0000 UTC m=+0.157464702 container attach 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:13:35 compute-2 wizardly_gates[86487]: 167 167
Nov 29 07:13:35 compute-2 systemd[1]: libpod-9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c.scope: Deactivated successfully.
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.414360335 +0000 UTC m=+0.160648491 container died 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:35 compute-2 systemd[1]: var-lib-containers-storage-overlay-1c6ea7de1c4eb58bb03b7d3af84decde2bcece9a6e010a0324005823b7e2fd17-merged.mount: Deactivated successfully.
Nov 29 07:13:35 compute-2 podman[86470]: 2025-11-29 07:13:35.614566387 +0000 UTC m=+0.360854543 container remove 9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:13:35 compute-2 systemd[1]: libpod-conmon-9461ee4b5dcdcb6b8e4bbcfa889698478a12667c42c4fb4dcd0a391ed547237c.scope: Deactivated successfully.
Nov 29 07:13:35 compute-2 sudo[86421]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-2 sudo[86510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:35 compute-2 sudo[86510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-2 sudo[86510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-2 sudo[86535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:35 compute-2 sudo[86535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-2 sudo[86535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 sudo[86560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:35 compute-2 sudo[86560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-2 sudo[86560]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 66 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:35 compute-2 sudo[86585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:13:35 compute-2 sudo[86585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:36 compute-2 podman[86681]: 2025-11-29 07:13:36.535136779 +0000 UTC m=+0.064927824 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:13:36 compute-2 podman[86681]: 2025-11-29 07:13:36.654806879 +0000 UTC m=+0.184597924 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:36 compute-2 ceph-mon[77138]: osdmap e66: 3 total, 3 up, 3 in
Nov 29 07:13:36 compute-2 ceph-mon[77138]: 5.7 scrub starts
Nov 29 07:13:36 compute-2 ceph-mon[77138]: 5.7 scrub ok
Nov 29 07:13:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 07:13:36 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 67 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:37 compute-2 podman[86835]: 2025-11-29 07:13:37.480440899 +0000 UTC m=+0.078346929 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:37 compute-2 podman[86835]: 2025-11-29 07:13:37.499225206 +0000 UTC m=+0.097131246 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:13:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:38 compute-2 podman[86901]: 2025-11-29 07:13:38.163029197 +0000 UTC m=+0.136817983 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, architecture=x86_64)
Nov 29 07:13:38 compute-2 podman[86901]: 2025-11-29 07:13:38.17870754 +0000 UTC m=+0.152496296 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, name=keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public)
Nov 29 07:13:38 compute-2 sudo[86585]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:38 compute-2 ceph-mon[77138]: pgmap v191: 305 pgs: 4 unknown, 7 peering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 219 B/s, 8 objects/s recovering
Nov 29 07:13:38 compute-2 ceph-mon[77138]: osdmap e67: 3 total, 3 up, 3 in
Nov 29 07:13:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:38 compute-2 ceph-mon[77138]: 5.2 scrub starts
Nov 29 07:13:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:38 compute-2 ceph-mon[77138]: 5.2 scrub ok
Nov 29 07:13:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 07:13:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 68 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:39.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 07:13:40 compute-2 ceph-mon[77138]: pgmap v193: 305 pgs: 4 active+remapped, 7 peering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 350 B/s, 13 objects/s recovering
Nov 29 07:13:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:40 compute-2 ceph-mon[77138]: osdmap e68: 3 total, 3 up, 3 in
Nov 29 07:13:40 compute-2 ceph-mon[77138]: 5.1c scrub starts
Nov 29 07:13:40 compute-2 ceph-mon[77138]: 5.1c scrub ok
Nov 29 07:13:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 69 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 69 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 69 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 69 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:40 compute-2 sudo[86947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:40 compute-2 sudo[86947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:40 compute-2 sudo[86947]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:40 compute-2 sudo[86972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:40 compute-2 sudo[86972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:40 compute-2 sudo[86972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:40.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:40 compute-2 sudo[86997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:40 compute-2 sudo[86997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:40 compute-2 sudo[86997]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:40 compute-2 sudo[87022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:13:40 compute-2 sudo[87022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:40 compute-2 sudo[87022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:41 compute-2 ceph-mon[77138]: osdmap e69: 3 total, 3 up, 3 in
Nov 29 07:13:41 compute-2 ceph-mon[77138]: pgmap v196: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 906 B/s wr, 81 op/s; 73 B/s, 1 objects/s recovering
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:41.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:42 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 29 07:13:42 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 29 07:13:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:43 compute-2 sudo[86419]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:43.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:43 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 07:13:43 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 07:13:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:45.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:13:47 compute-2 ceph-mon[77138]: pgmap v197: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 682 B/s wr, 61 op/s; 54 B/s, 0 objects/s recovering
Nov 29 07:13:47 compute-2 ceph-mon[77138]: 7.16 scrub starts
Nov 29 07:13:47 compute-2 ceph-mon[77138]: 7.16 scrub ok
Nov 29 07:13:47 compute-2 ceph-mon[77138]: pgmap v198: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 547 B/s wr, 49 op/s; 44 B/s, 0 objects/s recovering
Nov 29 07:13:47 compute-2 ceph-mon[77138]: 3.5 scrub starts
Nov 29 07:13:47 compute-2 ceph-mon[77138]: 3.5 scrub ok
Nov 29 07:13:47 compute-2 ceph-mon[77138]: 7.9 scrub starts
Nov 29 07:13:47 compute-2 ceph-mon[77138]: pgmap v199: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 0 objects/s recovering
Nov 29 07:13:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:13:47 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc discarding unexpected beacon reply up:active seq 16 dne
Nov 29 07:13:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 07:13:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:47.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:47 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 07:13:47 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 07:13:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:48.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.10 scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.10 scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.1f deep-scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.1f deep-scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 5.1 scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 5.1 scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.1e deep-scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.1e deep-scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.9 scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 5.9 scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 5.9 scrub ok
Nov 29 07:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:13:48 compute-2 ceph-mon[77138]: osdmap e70: 3 total, 3 up, 3 in
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.14 scrub starts
Nov 29 07:13:48 compute-2 ceph-mon[77138]: 7.14 scrub ok
Nov 29 07:13:49 compute-2 ceph-mon[77138]: 3.c deep-scrub starts
Nov 29 07:13:49 compute-2 ceph-mon[77138]: 3.c deep-scrub ok
Nov 29 07:13:49 compute-2 ceph-mon[77138]: pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 72 B/s, 2 objects/s recovering
Nov 29 07:13:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:13:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:13:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:49.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:13:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 07:13:49 compute-2 sshd-session[85783]: Connection closed by 192.168.122.30 port 55438
Nov 29 07:13:49 compute-2 sshd-session[85780]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:13:49 compute-2 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 07:13:49 compute-2 systemd[1]: session-33.scope: Consumed 9.553s CPU time.
Nov 29 07:13:49 compute-2 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Nov 29 07:13:49 compute-2 systemd-logind[787]: Removed session 33.
Nov 29 07:13:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:50 compute-2 ceph-mon[77138]: 3.f scrub starts
Nov 29 07:13:50 compute-2 ceph-mon[77138]: 3.f scrub ok
Nov 29 07:13:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:13:50 compute-2 ceph-mon[77138]: osdmap e71: 3 total, 3 up, 3 in
Nov 29 07:13:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 07:13:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:51.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:51 compute-2 ceph-mon[77138]: osdmap e72: 3 total, 3 up, 3 in
Nov 29 07:13:51 compute-2 ceph-mon[77138]: pgmap v204: 305 pgs: 4 unknown, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Nov 29 07:13:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 07:13:51 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 07:13:51 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 07:13:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:13:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:13:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:53.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:54.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:54 compute-2 sudo[87117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:54 compute-2 sudo[87117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:54 compute-2 sudo[87117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:54 compute-2 sudo[87142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:54 compute-2 sudo[87142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:54 compute-2 sudo[87142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:13:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:56.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:13:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:57.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:13:57 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 07:13:57 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 07:13:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 07:13:58 compute-2 ceph-mon[77138]: osdmap e73: 3 total, 3 up, 3 in
Nov 29 07:13:58 compute-2 ceph-mon[77138]: 7.1d scrub starts
Nov 29 07:13:58 compute-2 ceph-mon[77138]: 7.1d scrub ok
Nov 29 07:13:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:13:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:58.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:13:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 29 07:13:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 29 07:13:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:13:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:13:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:59.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:00.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:00 compute-2 sudo[87170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:00 compute-2 sudo[87170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:00 compute-2 sudo[87170]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:00 compute-2 sudo[87195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:14:00 compute-2 sudo[87195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:00 compute-2 sudo[87195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:01.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 07:14:01 compute-2 ceph-mon[77138]: pgmap v206: 305 pgs: 4 unknown, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 7.f scrub starts
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 7.f scrub ok
Nov 29 07:14:01 compute-2 ceph-mon[77138]: pgmap v207: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 7.1b scrub starts
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 3.10 scrub starts
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 3.10 scrub ok
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 7.1b scrub ok
Nov 29 07:14:01 compute-2 ceph-mon[77138]: pgmap v208: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 580 B/s wr, 52 op/s; 15 B/s, 1 objects/s recovering
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 4.14 scrub starts
Nov 29 07:14:01 compute-2 ceph-mon[77138]: 4.14 scrub ok
Nov 29 07:14:01 compute-2 ceph-mon[77138]: osdmap e74: 3 total, 3 up, 3 in
Nov 29 07:14:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:14:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:02.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Nov 29 07:14:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Nov 29 07:14:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:03.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:03 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 29 07:14:03 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 29 07:14:03 compute-2 ceph-mon[77138]: pgmap v210: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 46 op/s; 13 B/s, 1 objects/s recovering
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 4.1c scrub starts
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 4.1c scrub ok
Nov 29 07:14:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:14:03 compute-2 ceph-mon[77138]: pgmap v211: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 459 B/s wr, 41 op/s; 12 B/s, 1 objects/s recovering
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 5.10 scrub starts
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 5.10 scrub ok
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 5.16 scrub starts
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 5.16 scrub ok
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 7.b scrub starts
Nov 29 07:14:03 compute-2 ceph-mon[77138]: osdmap e75: 3 total, 3 up, 3 in
Nov 29 07:14:03 compute-2 ceph-mon[77138]: 7.b scrub ok
Nov 29 07:14:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:04 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 29 07:14:04 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 29 07:14:05 compute-2 ceph-mon[77138]: pgmap v213: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.9 deep-scrub starts
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.9 deep-scrub ok
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.19 scrub starts
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.19 scrub ok
Nov 29 07:14:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.15 scrub starts
Nov 29 07:14:05 compute-2 ceph-mon[77138]: 4.15 scrub ok
Nov 29 07:14:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:05.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:05 compute-2 sshd-session[87223]: Accepted publickey for zuul from 192.168.122.30 port 52306 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:14:05 compute-2 systemd-logind[787]: New session 34 of user zuul.
Nov 29 07:14:05 compute-2 systemd[1]: Started Session 34 of User zuul.
Nov 29 07:14:05 compute-2 sshd-session[87223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:14:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 07:14:05 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:05 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 07:14:06 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 77 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:06 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 77 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:06 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 77 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:06 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 77 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:06 compute-2 ceph-mon[77138]: pgmap v214: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 07:14:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:14:06 compute-2 ceph-mon[77138]: osdmap e76: 3 total, 3 up, 3 in
Nov 29 07:14:06 compute-2 ceph-mon[77138]: osdmap e77: 3 total, 3 up, 3 in
Nov 29 07:14:06 compute-2 python3.9[87376]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 07:14:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:06.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:06 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 29 07:14:06 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 29 07:14:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 07:14:07 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78) [2] r=0 lpr=78 pi=[56,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:07 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78) [2] r=0 lpr=78 pi=[56,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:07 compute-2 ceph-mon[77138]: pgmap v217: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:14:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:14:07 compute-2 ceph-mon[77138]: 4.1f scrub starts
Nov 29 07:14:07 compute-2 ceph-mon[77138]: 4.1f scrub ok
Nov 29 07:14:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:14:07 compute-2 ceph-mon[77138]: osdmap e78: 3 total, 3 up, 3 in
Nov 29 07:14:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:07.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:07 compute-2 python3.9[87551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:14:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 79 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:08 compute-2 sudo[87705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shtvawlkhcuhmupbwimbmgnoxmjzrama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400448.0854974-101-43398008395161/AnsiballZ_command.py'
Nov 29 07:14:08 compute-2 sudo[87705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:08 compute-2 python3.9[87707]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:08 compute-2 sudo[87705]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-2 ceph-mon[77138]: 7.8 scrub starts
Nov 29 07:14:09 compute-2 ceph-mon[77138]: 7.8 scrub ok
Nov 29 07:14:09 compute-2 ceph-mon[77138]: osdmap e79: 3 total, 3 up, 3 in
Nov 29 07:14:09 compute-2 ceph-mon[77138]: pgmap v220: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Nov 29 07:14:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:14:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 07:14:09 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 80 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 80 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=5 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:09.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:09 compute-2 sudo[87859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvlyaavexwxeexhyaylpebogozoshwyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400449.4126027-136-278950249420301/AnsiballZ_stat.py'
Nov 29 07:14:09 compute-2 sudo[87859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:10 compute-2 python3.9[87861]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:14:10 compute-2 sudo[87859]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:10 compute-2 ceph-mon[77138]: 7.13 scrub starts
Nov 29 07:14:10 compute-2 ceph-mon[77138]: 7.13 scrub ok
Nov 29 07:14:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:14:10 compute-2 ceph-mon[77138]: osdmap e80: 3 total, 3 up, 3 in
Nov 29 07:14:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 07:14:10 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 81 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:10 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 81 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:10 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 81 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:10 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 81 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:10.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 07:14:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 07:14:11 compute-2 sudo[88014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fihivcjmtomjfukzcudeocwzfjaiwcdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400450.5118291-169-170433252614789/AnsiballZ_file.py'
Nov 29 07:14:11 compute-2 sudo[88014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 07:14:11 compute-2 ceph-mon[77138]: osdmap e81: 3 total, 3 up, 3 in
Nov 29 07:14:11 compute-2 ceph-mon[77138]: pgmap v223: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Nov 29 07:14:11 compute-2 ceph-mon[77138]: 4.1d scrub starts
Nov 29 07:14:11 compute-2 ceph-mon[77138]: 4.1d scrub ok
Nov 29 07:14:11 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 82 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 82 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=6 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-2 python3.9[88016]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:14:11 compute-2 sudo[88014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:11.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:11 compute-2 sudo[88166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgvnvuiseoguhayqeekwdbatefwfhvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400451.5713933-196-19781778909129/AnsiballZ_file.py'
Nov 29 07:14:11 compute-2 sudo[88166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:12 compute-2 python3.9[88168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:14:12 compute-2 sudo[88166]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 07:14:12 compute-2 ceph-mon[77138]: osdmap e82: 3 total, 3 up, 3 in
Nov 29 07:14:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:12 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 07:14:12 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 07:14:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:13 compute-2 python3.9[88318]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:14:13 compute-2 network[88336]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:14:13 compute-2 network[88337]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:14:13 compute-2 network[88338]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:14:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 07:14:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:13.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:13 compute-2 ceph-mon[77138]: 7.18 scrub starts
Nov 29 07:14:13 compute-2 ceph-mon[77138]: 7.18 scrub ok
Nov 29 07:14:13 compute-2 ceph-mon[77138]: osdmap e83: 3 total, 3 up, 3 in
Nov 29 07:14:13 compute-2 ceph-mon[77138]: pgmap v226: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:13 compute-2 ceph-mon[77138]: 6.1 scrub starts
Nov 29 07:14:13 compute-2 ceph-mon[77138]: 6.1 scrub ok
Nov 29 07:14:13 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 29 07:14:13 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 29 07:14:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:14 compute-2 sudo[88372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:14 compute-2 sudo[88372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:14 compute-2 sudo[88372]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:14 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 07:14:14 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 07:14:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 07:14:14 compute-2 sudo[88401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:14 compute-2 sudo[88401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:14 compute-2 sudo[88401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 5.c scrub starts
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 5.c scrub ok
Nov 29 07:14:14 compute-2 ceph-mon[77138]: osdmap e84: 3 total, 3 up, 3 in
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 5.1e scrub starts
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 5.1e scrub ok
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 3.13 scrub starts
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 3.13 scrub ok
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 8.5 deep-scrub starts
Nov 29 07:14:14 compute-2 ceph-mon[77138]: 8.5 deep-scrub ok
Nov 29 07:14:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:14:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:15.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:15 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 07:14:15 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 07:14:15 compute-2 ceph-mon[77138]: pgmap v228: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 487 B/s wr, 44 op/s; 52 B/s, 3 objects/s recovering
Nov 29 07:14:15 compute-2 ceph-mon[77138]: 8.3 scrub starts
Nov 29 07:14:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:14:15 compute-2 ceph-mon[77138]: 8.3 scrub ok
Nov 29 07:14:15 compute-2 ceph-mon[77138]: osdmap e85: 3 total, 3 up, 3 in
Nov 29 07:14:15 compute-2 ceph-mon[77138]: 5.15 scrub starts
Nov 29 07:14:15 compute-2 ceph-mon[77138]: 5.15 scrub ok
Nov 29 07:14:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:16.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:17 compute-2 python3.9[88650]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:14:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 07:14:17 compute-2 ceph-mon[77138]: 8.a scrub starts
Nov 29 07:14:17 compute-2 ceph-mon[77138]: 8.a scrub ok
Nov 29 07:14:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:14:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:17.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 29 07:14:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 29 07:14:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:17 compute-2 python3.9[88800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:14:18 compute-2 ceph-mon[77138]: pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 34 op/s; 106 B/s, 4 objects/s recovering
Nov 29 07:14:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:14:18 compute-2 ceph-mon[77138]: osdmap e86: 3 total, 3 up, 3 in
Nov 29 07:14:18 compute-2 ceph-mon[77138]: 8.1f scrub starts
Nov 29 07:14:18 compute-2 ceph-mon[77138]: 8.1f scrub ok
Nov 29 07:14:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:18.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:19 compute-2 sshd-session[88830]: Connection closed by 45.148.10.240 port 39144
Nov 29 07:14:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 07:14:19 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 87 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87 pruub=8.927716255s) [1] r=-1 lpr=87 pi=[68,87)/1 crt=53'1137 mlcod 0'0 active pruub 129.409317017s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:19 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 87 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87 pruub=8.927647591s) [1] r=-1 lpr=87 pi=[68,87)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 129.409317017s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:19 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 87 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87 pruub=8.927662849s) [1] r=-1 lpr=87 pi=[68,87)/1 crt=53'1137 mlcod 0'0 active pruub 129.409393311s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:19 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 87 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87 pruub=8.927588463s) [1] r=-1 lpr=87 pi=[68,87)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 129.409393311s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:19 compute-2 ceph-mon[77138]: 5.14 scrub starts
Nov 29 07:14:19 compute-2 ceph-mon[77138]: 5.14 scrub ok
Nov 29 07:14:19 compute-2 ceph-mon[77138]: pgmap v232: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s; 91 B/s, 4 objects/s recovering
Nov 29 07:14:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:14:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:19 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 07:14:19 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 07:14:20 compute-2 python3.9[88956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:14:20 compute-2 ceph-mon[77138]: 3.12 scrub starts
Nov 29 07:14:20 compute-2 ceph-mon[77138]: 3.12 scrub ok
Nov 29 07:14:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:14:20 compute-2 ceph-mon[77138]: osdmap e87: 3 total, 3 up, 3 in
Nov 29 07:14:20 compute-2 ceph-mon[77138]: 8.f scrub starts
Nov 29 07:14:20 compute-2 ceph-mon[77138]: 8.f scrub ok
Nov 29 07:14:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:20.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 07:14:20 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 88 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:20 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 88 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:20 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 88 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:20 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 88 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:20 compute-2 sudo[89112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhaamxvkqmsscfsvbeyvgsiptiavldli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400460.5951798-340-204998026111338/AnsiballZ_setup.py'
Nov 29 07:14:20 compute-2 sudo[89112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:21 compute-2 python3.9[89114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:14:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:21.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:21 compute-2 sudo[89112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:21 compute-2 sudo[89197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szsitkbralztqxllimgasphunrncmcyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400460.5951798-340-204998026111338/AnsiballZ_dnf.py'
Nov 29 07:14:21 compute-2 sudo[89197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:22 compute-2 python3.9[89199]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:14:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:22 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 07:14:22 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 07:14:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:23 compute-2 ceph-mon[77138]: osdmap e88: 3 total, 3 up, 3 in
Nov 29 07:14:23 compute-2 ceph-mon[77138]: pgmap v235: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:24 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 29 07:14:24 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 29 07:14:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 07:14:25 compute-2 ceph-mon[77138]: pgmap v236: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:25 compute-2 ceph-mon[77138]: 3.16 scrub starts
Nov 29 07:14:25 compute-2 ceph-mon[77138]: 3.16 scrub ok
Nov 29 07:14:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:26.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:26 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 07:14:26 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 07:14:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:27 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 07:14:27 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 07:14:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:28.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:28 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 89 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] async=[1] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:28 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 89 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] async=[1] r=0 lpr=88 pi=[68,88)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 8.d scrub starts
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 8.d scrub ok
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 5.17 scrub starts
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 5.17 scrub ok
Nov 29 07:14:28 compute-2 ceph-mon[77138]: pgmap v237: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 8.11 scrub starts
Nov 29 07:14:28 compute-2 ceph-mon[77138]: 8.11 scrub ok
Nov 29 07:14:28 compute-2 ceph-mon[77138]: osdmap e89: 3 total, 3 up, 3 in
Nov 29 07:14:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:29.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:29 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 07:14:29 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: pgmap v239: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.9 scrub starts
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.9 scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 6.a scrub starts
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.2 scrub starts
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.2 scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: pgmap v240: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 6.a scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 5.6 scrub starts
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 5.6 scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.b scrub starts
Nov 29 07:14:29 compute-2 ceph-mon[77138]: 8.b scrub ok
Nov 29 07:14:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 07:14:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 90 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=6 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.552300453s) [1] async=[1] r=-1 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 53'1137 active pruub 145.702148438s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 90 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=5 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.941811562s) [1] async=[1] r=-1 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 53'1137 active pruub 146.091720581s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 90 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=6 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.552165031s) [1] r=-1 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 145.702148438s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 90 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=88/89 n=5 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.941685677s) [1] r=-1 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 146.091720581s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:30 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 07:14:30 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 07:14:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 07:14:31 compute-2 ceph-mon[77138]: 4.1a scrub starts
Nov 29 07:14:31 compute-2 ceph-mon[77138]: 4.1a scrub ok
Nov 29 07:14:31 compute-2 ceph-mon[77138]: osdmap e90: 3 total, 3 up, 3 in
Nov 29 07:14:31 compute-2 ceph-mon[77138]: 8.16 scrub starts
Nov 29 07:14:31 compute-2 ceph-mon[77138]: 8.16 scrub ok
Nov 29 07:14:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:31.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 07:14:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 07:14:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:32 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 07:14:32 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 07:14:32 compute-2 ceph-mon[77138]: pgmap v242: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:32 compute-2 ceph-mon[77138]: 4.1b scrub starts
Nov 29 07:14:32 compute-2 ceph-mon[77138]: 4.1b scrub ok
Nov 29 07:14:32 compute-2 ceph-mon[77138]: osdmap e91: 3 total, 3 up, 3 in
Nov 29 07:14:32 compute-2 ceph-mon[77138]: 8.1c scrub starts
Nov 29 07:14:32 compute-2 ceph-mon[77138]: 8.1c scrub ok
Nov 29 07:14:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:33.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:33 compute-2 ceph-mon[77138]: pgmap v244: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:33 compute-2 ceph-mon[77138]: 4.18 scrub starts
Nov 29 07:14:33 compute-2 ceph-mon[77138]: 4.18 scrub ok
Nov 29 07:14:33 compute-2 ceph-mon[77138]: 8.15 scrub starts
Nov 29 07:14:33 compute-2 ceph-mon[77138]: 8.15 scrub ok
Nov 29 07:14:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:34.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:34 compute-2 sudo[89279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:34 compute-2 sudo[89279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:34 compute-2 sudo[89279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:34 compute-2 sudo[89304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:34 compute-2 sudo[89304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:34 compute-2 sudo[89304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 07:14:35 compute-2 ceph-mon[77138]: 6.3 scrub starts
Nov 29 07:14:35 compute-2 ceph-mon[77138]: 6.3 scrub ok
Nov 29 07:14:35 compute-2 ceph-mon[77138]: 5.5 scrub starts
Nov 29 07:14:35 compute-2 ceph-mon[77138]: 5.5 scrub ok
Nov 29 07:14:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:14:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:35.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:36.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:36 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 29 07:14:36 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 29 07:14:37 compute-2 ceph-mon[77138]: pgmap v245: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 22 op/s; 82 B/s, 3 objects/s recovering
Nov 29 07:14:37 compute-2 ceph-mon[77138]: 6.d scrub starts
Nov 29 07:14:37 compute-2 ceph-mon[77138]: 6.d scrub ok
Nov 29 07:14:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:14:37 compute-2 ceph-mon[77138]: osdmap e92: 3 total, 3 up, 3 in
Nov 29 07:14:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 07:14:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 93 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=10.483815193s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'1137 mlcod 0'0 active pruub 149.177841187s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 93 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=10.483733177s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 149.177841187s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 93 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=10.482994080s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'1137 mlcod 0'0 active pruub 149.177764893s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 93 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=10.482872963s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 149.177764893s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:37 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Nov 29 07:14:37 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Nov 29 07:14:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 4.c scrub starts
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 4.c scrub ok
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 3.1e scrub starts
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 3.1e scrub ok
Nov 29 07:14:38 compute-2 ceph-mon[77138]: pgmap v247: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 101 B/s, 3 objects/s recovering
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 8.c scrub starts
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 8.c scrub ok
Nov 29 07:14:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 6.2 scrub starts
Nov 29 07:14:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 6.2 scrub ok
Nov 29 07:14:38 compute-2 ceph-mon[77138]: osdmap e93: 3 total, 3 up, 3 in
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 8.6 deep-scrub starts
Nov 29 07:14:38 compute-2 ceph-mon[77138]: 8.6 deep-scrub ok
Nov 29 07:14:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 07:14:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 94 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 94 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=6 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 94 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:38 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 94 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:38 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 07:14:38 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 07:14:39 compute-2 ceph-mon[77138]: pgmap v249: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s; 89 B/s, 3 objects/s recovering
Nov 29 07:14:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 6.5 scrub starts
Nov 29 07:14:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 6.5 scrub ok
Nov 29 07:14:39 compute-2 ceph-mon[77138]: osdmap e94: 3 total, 3 up, 3 in
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 3.17 scrub starts
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 3.17 scrub ok
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 10.12 scrub starts
Nov 29 07:14:39 compute-2 ceph-mon[77138]: 10.12 scrub ok
Nov 29 07:14:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 07:14:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 95 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=6 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] async=[1] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 95 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] async=[1] r=0 lpr=94 pi=[65,94)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:40.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 07:14:40 compute-2 ceph-mon[77138]: osdmap e95: 3 total, 3 up, 3 in
Nov 29 07:14:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 96 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=5 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.991864204s) [1] async=[1] r=-1 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 53'1137 active pruub 156.750488281s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 96 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=5 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.991784096s) [1] r=-1 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 156.750488281s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 96 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=6 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.990324020s) [1] async=[1] r=-1 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 53'1137 active pruub 156.750259399s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:40 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 96 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=94/95 n=6 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.990188599s) [1] r=-1 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 156.750259399s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 07:14:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 07:14:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 07:14:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:41 compute-2 ceph-mon[77138]: pgmap v252: 305 pgs: 2 active+remapped, 1 unknown, 302 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 40 B/s, 2 objects/s recovering
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 4.a deep-scrub starts
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 4.a deep-scrub ok
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 3.19 scrub starts
Nov 29 07:14:41 compute-2 ceph-mon[77138]: osdmap e96: 3 total, 3 up, 3 in
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 3.19 scrub ok
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 10.10 scrub starts
Nov 29 07:14:41 compute-2 ceph-mon[77138]: 10.10 scrub ok
Nov 29 07:14:41 compute-2 ceph-mon[77138]: osdmap e97: 3 total, 3 up, 3 in
Nov 29 07:14:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 07:14:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:43.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:43 compute-2 ceph-mon[77138]: osdmap e98: 3 total, 3 up, 3 in
Nov 29 07:14:43 compute-2 ceph-mon[77138]: pgmap v256: 305 pgs: 2 active+remapped, 1 unknown, 302 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 55 B/s, 3 objects/s recovering
Nov 29 07:14:44 compute-2 ceph-mon[77138]: 6.8 deep-scrub starts
Nov 29 07:14:44 compute-2 ceph-mon[77138]: 6.8 deep-scrub ok
Nov 29 07:14:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:44.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 07:14:45 compute-2 ceph-mon[77138]: pgmap v257: 305 pgs: 305 active+clean; 458 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 0 B/s wr, 18 op/s; 44 B/s, 2 objects/s recovering
Nov 29 07:14:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 07:14:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 07:14:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 07:14:46 compute-2 ceph-mon[77138]: osdmap e99: 3 total, 3 up, 3 in
Nov 29 07:14:46 compute-2 ceph-mon[77138]: osdmap e100: 3 total, 3 up, 3 in
Nov 29 07:14:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:46.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:46 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 29 07:14:46 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 29 07:14:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:47.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 6.e scrub starts
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 6.e scrub ok
Nov 29 07:14:47 compute-2 ceph-mon[77138]: pgmap v260: 305 pgs: 305 active+clean; 458 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 193 B/s wr, 17 op/s; 0 B/s, 0 objects/s recovering
Nov 29 07:14:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 5.1d scrub starts
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 5.1d scrub ok
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 10.1e scrub starts
Nov 29 07:14:47 compute-2 ceph-mon[77138]: 10.1e scrub ok
Nov 29 07:14:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:48.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 07:14:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:49.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:49 compute-2 ceph-mon[77138]: 4.13 scrub starts
Nov 29 07:14:49 compute-2 ceph-mon[77138]: 4.13 scrub ok
Nov 29 07:14:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 07:14:49 compute-2 ceph-mon[77138]: osdmap e101: 3 total, 3 up, 3 in
Nov 29 07:14:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 07:14:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 07:14:50 compute-2 ceph-mon[77138]: pgmap v262: 305 pgs: 1 active+remapped, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Nov 29 07:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 07:14:50 compute-2 ceph-mon[77138]: osdmap e102: 3 total, 3 up, 3 in
Nov 29 07:14:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 07:14:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:51.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 3.4 scrub starts
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 3.4 scrub ok
Nov 29 07:14:51 compute-2 ceph-mon[77138]: pgmap v264: 305 pgs: 1 peering, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 4.d scrub starts
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 4.d scrub ok
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 3.2 scrub starts
Nov 29 07:14:51 compute-2 ceph-mon[77138]: 3.2 scrub ok
Nov 29 07:14:51 compute-2 ceph-mon[77138]: osdmap e103: 3 total, 3 up, 3 in
Nov 29 07:14:51 compute-2 ceph-mon[77138]: osdmap e104: 3 total, 3 up, 3 in
Nov 29 07:14:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 07:14:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:52 compute-2 ceph-mon[77138]: 3.18 scrub starts
Nov 29 07:14:52 compute-2 ceph-mon[77138]: 3.18 scrub ok
Nov 29 07:14:52 compute-2 ceph-mon[77138]: osdmap e105: 3 total, 3 up, 3 in
Nov 29 07:14:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:53.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:53 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 29 07:14:53 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 29 07:14:54 compute-2 ceph-mon[77138]: pgmap v268: 305 pgs: 1 peering, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 3.1 deep-scrub starts
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 3.1 deep-scrub ok
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 4.e scrub starts
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 4.e scrub ok
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 10.4 scrub starts
Nov 29 07:14:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:54 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 29 07:14:54 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 29 07:14:54 compute-2 sudo[89410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:54 compute-2 sudo[89410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:54 compute-2 sudo[89410]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 07:14:54 compute-2 sudo[89435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:54 compute-2 sudo[89435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:54 compute-2 sudo[89435]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 3.7 scrub starts
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 3.7 scrub ok
Nov 29 07:14:54 compute-2 ceph-mon[77138]: 10.4 scrub ok
Nov 29 07:14:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 07:14:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:14:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:14:56 compute-2 ceph-mon[77138]: pgmap v269: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 19 B/s, 0 objects/s recovering
Nov 29 07:14:56 compute-2 ceph-mon[77138]: 10.3 scrub starts
Nov 29 07:14:56 compute-2 ceph-mon[77138]: 10.3 scrub ok
Nov 29 07:14:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 07:14:56 compute-2 ceph-mon[77138]: osdmap e106: 3 total, 3 up, 3 in
Nov 29 07:14:56 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 07:14:56 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 07:14:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 07:14:57 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 107 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=10.883352280s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=53'1137 mlcod 0'0 active pruub 169.408248901s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:57 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 107 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=10.882891655s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 169.408248901s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:57 compute-2 ceph-mon[77138]: 3.6 scrub starts
Nov 29 07:14:57 compute-2 ceph-mon[77138]: 3.6 scrub ok
Nov 29 07:14:57 compute-2 ceph-mon[77138]: pgmap v271: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:14:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 07:14:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:58 compute-2 ceph-mon[77138]: 5.3 scrub starts
Nov 29 07:14:58 compute-2 ceph-mon[77138]: 5.3 scrub ok
Nov 29 07:14:58 compute-2 ceph-mon[77138]: 10.f scrub starts
Nov 29 07:14:58 compute-2 ceph-mon[77138]: 10.f scrub ok
Nov 29 07:14:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 07:14:58 compute-2 ceph-mon[77138]: osdmap e107: 3 total, 3 up, 3 in
Nov 29 07:14:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 07:14:58 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 108 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:58 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 108 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=68/69 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:58.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 29 07:14:58 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 29 07:14:59 compute-2 ceph-mon[77138]: 3.b scrub starts
Nov 29 07:14:59 compute-2 ceph-mon[77138]: 3.b scrub ok
Nov 29 07:14:59 compute-2 ceph-mon[77138]: osdmap e108: 3 total, 3 up, 3 in
Nov 29 07:14:59 compute-2 ceph-mon[77138]: pgmap v274: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:14:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:14:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 07:14:59 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=109) [2] r=0 lpr=109 pi=[74,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:59 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 109 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=108/109 n=5 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:14:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:14:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:59.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:14:59 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 07:14:59 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 07:15:00 compute-2 ceph-mon[77138]: 10.1 scrub starts
Nov 29 07:15:00 compute-2 ceph-mon[77138]: 10.1 scrub ok
Nov 29 07:15:00 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:15:00 compute-2 ceph-mon[77138]: osdmap e109: 3 total, 3 up, 3 in
Nov 29 07:15:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 07:15:00 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 110 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=108/109 n=5 ec=56/47 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.986671448s) [1] async=[1] r=-1 lpr=110 pi=[68,110)/1 crt=53'1137 mlcod 53'1137 active pruub 176.609497070s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:00 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 110 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=108/109 n=5 ec=56/47 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.986556053s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 176.609497070s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:00 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=110) [2]/[1] r=-1 lpr=110 pi=[74,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:00 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=110) [2]/[1] r=-1 lpr=110 pi=[74,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:00 compute-2 sudo[89463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:00 compute-2 sudo[89463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:00 compute-2 sudo[89463]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:00 compute-2 sudo[89488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:00 compute-2 sudo[89488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:00 compute-2 sudo[89488]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:00 compute-2 sudo[89513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:00 compute-2 sudo[89513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:00 compute-2 sudo[89513]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:01 compute-2 sudo[89539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:15:01 compute-2 sudo[89539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:01 compute-2 ceph-mon[77138]: 10.11 scrub starts
Nov 29 07:15:01 compute-2 ceph-mon[77138]: 10.11 scrub ok
Nov 29 07:15:01 compute-2 ceph-mon[77138]: osdmap e110: 3 total, 3 up, 3 in
Nov 29 07:15:01 compute-2 ceph-mon[77138]: pgmap v277: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:01 compute-2 ceph-mon[77138]: 5.19 scrub starts
Nov 29 07:15:01 compute-2 ceph-mon[77138]: 5.19 scrub ok
Nov 29 07:15:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 07:15:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:01.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:01 compute-2 podman[89634]: 2025-11-29 07:15:01.501461267 +0000 UTC m=+0.060522168 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:15:01 compute-2 podman[89634]: 2025-11-29 07:15:01.679959932 +0000 UTC m=+0.239020813 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:15:01 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 07:15:01 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 07:15:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:02 compute-2 ceph-mon[77138]: osdmap e111: 3 total, 3 up, 3 in
Nov 29 07:15:02 compute-2 ceph-mon[77138]: 11.3 scrub starts
Nov 29 07:15:02 compute-2 ceph-mon[77138]: 11.3 scrub ok
Nov 29 07:15:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 07:15:02 compute-2 podman[89789]: 2025-11-29 07:15:02.621771683 +0000 UTC m=+0.387705065 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:15:02 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 112 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=110/74 les/c/f=111/75/0 sis=112) [2] r=0 lpr=112 pi=[74,112)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:02 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 112 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=110/74 les/c/f=111/75/0 sis=112) [2] r=0 lpr=112 pi=[74,112)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:02 compute-2 podman[89789]: 2025-11-29 07:15:02.656446996 +0000 UTC m=+0.422380348 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:15:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 07:15:02 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 07:15:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:02 compute-2 podman[89853]: 2025-11-29 07:15:02.928499531 +0000 UTC m=+0.052733848 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vendor=Red Hat, Inc., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2)
Nov 29 07:15:02 compute-2 podman[89853]: 2025-11-29 07:15:02.940727767 +0000 UTC m=+0.064962064 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Nov 29 07:15:02 compute-2 sudo[89539]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.163845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503163944, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7094, "num_deletes": 256, "total_data_size": 13045839, "memory_usage": 13317056, "flush_reason": "Manual Compaction"}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 29 07:15:03 compute-2 sudo[89885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:03 compute-2 sudo[89885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:03 compute-2 sudo[89885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503241839, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7806457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 258, "largest_seqno": 7099, "table_properties": {"data_size": 7778388, "index_size": 18354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81505, "raw_average_key_size": 23, "raw_value_size": 7711292, "raw_average_value_size": 2233, "num_data_blocks": 813, "num_entries": 3452, "num_filter_entries": 3452, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400292, "oldest_key_time": 1764400292, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 78472 microseconds, and 23448 cpu microseconds.
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.242289) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7806457 bytes OK
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.242461) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.248140) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.248189) EVENT_LOG_v1 {"time_micros": 1764400503248178, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.248229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13007949, prev total WAL file size 13027939, number of live WAL files 2.
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.251814) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7623KB) 8(1648B)]
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503251956, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7808105, "oldest_snapshot_seqno": -1}
Nov 29 07:15:03 compute-2 sudo[89911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:03 compute-2 sudo[89911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:03 compute-2 sudo[89911]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:03 compute-2 sudo[89936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:03 compute-2 sudo[89936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:03 compute-2 sudo[89936]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3199 keys, 7802674 bytes, temperature: kUnknown
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503321963, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7802674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7775295, "index_size": 18309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77289, "raw_average_key_size": 24, "raw_value_size": 7711350, "raw_average_value_size": 2410, "num_data_blocks": 813, "num_entries": 3199, "num_filter_entries": 3199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.322284) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7802674 bytes
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.323669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.4 rd, 111.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3457, records dropped: 258 output_compression: NoCompression
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.323713) EVENT_LOG_v1 {"time_micros": 1764400503323697, "job": 4, "event": "compaction_finished", "compaction_time_micros": 70112, "compaction_time_cpu_micros": 17570, "output_level": 6, "num_output_files": 1, "total_output_size": 7802674, "num_input_records": 3457, "num_output_records": 3199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503325253, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503325355, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 07:15:03 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:15:03.251692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:15:03 compute-2 sudo[89961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:15:03 compute-2 sudo[89961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:03.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:03 compute-2 ceph-mon[77138]: 6.7 scrub starts
Nov 29 07:15:03 compute-2 ceph-mon[77138]: 6.7 scrub ok
Nov 29 07:15:03 compute-2 ceph-mon[77138]: pgmap v279: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:03 compute-2 ceph-mon[77138]: osdmap e112: 3 total, 3 up, 3 in
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: 11.e scrub starts
Nov 29 07:15:03 compute-2 ceph-mon[77138]: 11.e scrub ok
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 07:15:03 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 113 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=112/113 n=5 ec=56/47 lis/c=110/74 les/c/f=111/75/0 sis=112) [2] r=0 lpr=112 pi=[74,112)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:03 compute-2 sudo[89961]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:03 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 29 07:15:03 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 29 07:15:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:04 compute-2 ceph-mon[77138]: osdmap e113: 3 total, 3 up, 3 in
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:15:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:15:04 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 07:15:04 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 07:15:05 compute-2 sudo[89197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:15:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:05.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:15:05 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 29 07:15:05 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 29 07:15:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:06 compute-2 ceph-mon[77138]: 11.8 deep-scrub starts
Nov 29 07:15:06 compute-2 ceph-mon[77138]: 11.8 deep-scrub ok
Nov 29 07:15:06 compute-2 ceph-mon[77138]: pgmap v282: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:06 compute-2 ceph-mon[77138]: 3.1f scrub starts
Nov 29 07:15:06 compute-2 ceph-mon[77138]: 3.1f scrub ok
Nov 29 07:15:06 compute-2 ceph-mon[77138]: 11.19 scrub starts
Nov 29 07:15:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 07:15:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:07 compute-2 ceph-mon[77138]: 11.19 scrub ok
Nov 29 07:15:07 compute-2 ceph-mon[77138]: 11.a scrub starts
Nov 29 07:15:07 compute-2 ceph-mon[77138]: 11.a scrub ok
Nov 29 07:15:07 compute-2 ceph-mon[77138]: pgmap v283: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 07:15:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 07:15:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 07:15:07 compute-2 ceph-mon[77138]: osdmap e114: 3 total, 3 up, 3 in
Nov 29 07:15:07 compute-2 ceph-mon[77138]: 10.6 scrub starts
Nov 29 07:15:07 compute-2 ceph-mon[77138]: 10.6 scrub ok
Nov 29 07:15:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:08.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:08 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 07:15:08 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 07:15:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 07:15:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:09 compute-2 ceph-mon[77138]: 10.7 scrub starts
Nov 29 07:15:09 compute-2 ceph-mon[77138]: 10.7 scrub ok
Nov 29 07:15:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 07:15:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:10.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 29 07:15:10 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 29 07:15:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 07:15:11 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 116 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=116 pruub=12.109648705s) [0] r=-1 lpr=116 pi=[81,116)/1 crt=53'1137 mlcod 0'0 active pruub 184.483367920s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:11 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 116 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=116 pruub=12.109577179s) [0] r=-1 lpr=116 pi=[81,116)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 184.483367920s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:11 compute-2 ceph-mon[77138]: pgmap v285: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 07:15:11 compute-2 ceph-mon[77138]: 11.16 scrub starts
Nov 29 07:15:11 compute-2 ceph-mon[77138]: 11.16 scrub ok
Nov 29 07:15:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 07:15:11 compute-2 ceph-mon[77138]: osdmap e115: 3 total, 3 up, 3 in
Nov 29 07:15:11 compute-2 ceph-mon[77138]: 10.9 scrub starts
Nov 29 07:15:11 compute-2 ceph-mon[77138]: 10.9 scrub ok
Nov 29 07:15:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 07:15:11 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 07:15:11 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 07:15:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 07:15:12 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 117 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=117) [0]/[2] r=0 lpr=117 pi=[81,117)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:12 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 117 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=117) [0]/[2] r=0 lpr=117 pi=[81,117)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:12 compute-2 ceph-mon[77138]: pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 150 B/s wr, 12 op/s; 32 B/s, 1 objects/s recovering
Nov 29 07:15:12 compute-2 ceph-mon[77138]: 11.13 scrub starts
Nov 29 07:15:12 compute-2 ceph-mon[77138]: 11.13 scrub ok
Nov 29 07:15:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 07:15:12 compute-2 ceph-mon[77138]: osdmap e116: 3 total, 3 up, 3 in
Nov 29 07:15:12 compute-2 ceph-mon[77138]: 11.17 scrub starts
Nov 29 07:15:12 compute-2 ceph-mon[77138]: 11.17 scrub ok
Nov 29 07:15:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:15:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:15:12 compute-2 sudo[90046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:12 compute-2 sudo[90046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:12 compute-2 sudo[90046]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:12 compute-2 sudo[90071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:15:12 compute-2 sudo[90071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:12 compute-2 sudo[90071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:13 compute-2 ceph-mon[77138]: 5.a scrub starts
Nov 29 07:15:13 compute-2 ceph-mon[77138]: 5.a scrub ok
Nov 29 07:15:13 compute-2 ceph-mon[77138]: osdmap e117: 3 total, 3 up, 3 in
Nov 29 07:15:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:15:13 compute-2 ceph-mon[77138]: pgmap v290: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:15:13 compute-2 ceph-mon[77138]: 8.1 scrub starts
Nov 29 07:15:13 compute-2 ceph-mon[77138]: 8.1 scrub ok
Nov 29 07:15:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 07:15:13 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 118 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=117/118 n=5 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[81,117)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:13 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 07:15:13 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 07:15:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:15:14 compute-2 ceph-mon[77138]: osdmap e118: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-2 ceph-mon[77138]: 9.17 scrub starts
Nov 29 07:15:14 compute-2 ceph-mon[77138]: 9.17 scrub ok
Nov 29 07:15:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 119 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=117/118 n=5 ec=56/47 lis/c=117/81 les/c/f=118/82/0 sis=119 pruub=14.853060722s) [0] async=[0] r=-1 lpr=119 pi=[81,119)/1 crt=53'1137 mlcod 53'1137 active pruub 190.600463867s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:14 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 119 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=117/118 n=5 ec=56/47 lis/c=117/81 les/c/f=118/82/0 sis=119 pruub=14.852942467s) [0] r=-1 lpr=119 pi=[81,119)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 190.600463867s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:15 compute-2 sudo[90098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:15 compute-2 sudo[90098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:15 compute-2 sudo[90098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:15 compute-2 sudo[90123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:15 compute-2 sudo[90123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:15 compute-2 sudo[90123]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:15.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 10.a scrub starts
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 10.a scrub ok
Nov 29 07:15:15 compute-2 ceph-mon[77138]: pgmap v292: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:15:15 compute-2 ceph-mon[77138]: osdmap e119: 3 total, 3 up, 3 in
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 8.7 deep-scrub starts
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 8.7 deep-scrub ok
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 10.b deep-scrub starts
Nov 29 07:15:15 compute-2 ceph-mon[77138]: 10.b deep-scrub ok
Nov 29 07:15:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 07:15:15 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 120 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=120 pruub=12.421783447s) [0] r=-1 lpr=120 pi=[65,120)/1 crt=53'1137 mlcod 0'0 active pruub 189.179138184s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:15 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 120 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=120 pruub=12.421654701s) [0] r=-1 lpr=120 pi=[65,120)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 189.179138184s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:15 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 07:15:15 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 07:15:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 121 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=121) [0]/[2] r=0 lpr=121 pi=[65,121)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:16 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 121 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=65/66 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=121) [0]/[2] r=0 lpr=121 pi=[65,121)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-2 sudo[90273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkpusyckqvggwprmnphxwoghnfettraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400516.1586993-377-186146388051392/AnsiballZ_command.py'
Nov 29 07:15:16 compute-2 sudo[90273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:15:16 compute-2 ceph-mon[77138]: osdmap e120: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-2 ceph-mon[77138]: 9.13 scrub starts
Nov 29 07:15:16 compute-2 ceph-mon[77138]: 9.13 scrub ok
Nov 29 07:15:16 compute-2 ceph-mon[77138]: osdmap e121: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:15:16 compute-2 python3.9[90275]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:15:16 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 29 07:15:16 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 29 07:15:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 07:15:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:17.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:17 compute-2 sudo[90273]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 29 07:15:17 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 29 07:15:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:18 compute-2 sudo[90561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqddkovojnfdwjxehtmfonprfjsmabm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400517.677386-401-247584254845351/AnsiballZ_selinux.py'
Nov 29 07:15:18 compute-2 sudo[90561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:18.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:18 compute-2 python3.9[90563]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:15:18 compute-2 sudo[90561]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:19 compute-2 sudo[90714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqopdulxukdmrdalgrvpaknobmgzwddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400519.1332932-434-202157165265855/AnsiballZ_command.py'
Nov 29 07:15:19 compute-2 sudo[90714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:19.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:19 compute-2 python3.9[90716]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:15:19 compute-2 sudo[90714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:20.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:21.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:22.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:22 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:15:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:23.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:26.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:26 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:15:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 252..780) lease_timeout -- calling new election
Nov 29 07:15:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:27.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:27 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:15:27 compute-2 ceph-mon[77138]: paxos.1).electionLogic(14) init, last seen epoch 14
Nov 29 07:15:27 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:15:27 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 07:15:27 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 07:15:27 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:15:27 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 122 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=121/122 n=5 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=121) [0]/[2] async=[0] r=0 lpr=121 pi=[65,121)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:28 compute-2 sudo[90870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvstecvkkqrogvrczklofreqydkeayyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400527.9608972-458-240731425104666/AnsiballZ_file.py'
Nov 29 07:15:28 compute-2 sudo[90870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:15:28 compute-2 python3.9[90872]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:15:28 compute-2 sudo[90870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:28.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:28 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 07:15:28 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 07:15:29 compute-2 sudo[91023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkrffhvskqvfyqxxfbludbqrdrfpxkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400528.6361809-482-6311077689430/AnsiballZ_mount.py'
Nov 29 07:15:29 compute-2 sudo[91023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:29 compute-2 python3.9[91025]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:15:29 compute-2 sudo[91023]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 9.3 scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 9.3 scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v298: 305 pgs: 1 activating, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 8.e scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.c scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.c scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v299: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.d deep-scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.e deep-scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v300: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v301: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v302: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 8.e scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 9.7 scrub starts
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 9.7 scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.d deep-scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: 10.e deep-scrub ok
Nov 29 07:15:29 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:15:29 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:15:29 compute-2 ceph-mon[77138]: osdmap e122: 3 total, 3 up, 3 in
Nov 29 07:15:29 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 6m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:15:29 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:15:29 compute-2 ceph-mon[77138]: pgmap v303: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 123 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=121/122 n=5 ec=56/47 lis/c=121/65 les/c/f=122/66/0 sis=123 pruub=14.543544769s) [0] async=[0] r=-1 lpr=123 pi=[65,123)/1 crt=53'1137 mlcod 53'1137 active pruub 205.195312500s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:29 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 123 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=121/122 n=5 ec=56/47 lis/c=121/65 les/c/f=122/66/0 sis=123 pruub=14.543232918s) [0] r=-1 lpr=123 pi=[65,123)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 205.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:29.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:30.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:30 compute-2 sudo[91175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuvghybfswvbidtghjlgjepujnpulyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400530.2926967-566-25877221080602/AnsiballZ_file.py'
Nov 29 07:15:30 compute-2 sudo[91175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 9.5 scrub starts
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 9.5 scrub ok
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 10.16 scrub starts
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 10.16 scrub ok
Nov 29 07:15:30 compute-2 ceph-mon[77138]: osdmap e123: 3 total, 3 up, 3 in
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 8.13 scrub starts
Nov 29 07:15:30 compute-2 ceph-mon[77138]: 8.13 scrub ok
Nov 29 07:15:30 compute-2 python3.9[91177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:15:30 compute-2 sudo[91175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:30 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 07:15:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 07:15:31 compute-2 sudo[91328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfccugbspmpdnprrcwmjhulmxcelicvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400530.9994757-590-132512764423645/AnsiballZ_stat.py'
Nov 29 07:15:31 compute-2 sudo[91328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:31 compute-2 python3.9[91330]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:15:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:31 compute-2 sudo[91328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:31 compute-2 ceph-mon[77138]: pgmap v305: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:31 compute-2 ceph-mon[77138]: osdmap e124: 3 total, 3 up, 3 in
Nov 29 07:15:31 compute-2 sudo[91406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fndvfqrggkonlciuivacroljclmqyfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400530.9994757-590-132512764423645/AnsiballZ_file.py'
Nov 29 07:15:31 compute-2 sudo[91406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:31 compute-2 python3.9[91408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:15:31 compute-2 sudo[91406]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 07:15:31 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 07:15:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:32.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:32 compute-2 ceph-mon[77138]: 9.18 scrub starts
Nov 29 07:15:32 compute-2 ceph-mon[77138]: 9.18 scrub ok
Nov 29 07:15:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:33 compute-2 sudo[91559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpbamcgdfkqwamzreoqmkwihwcwmxfth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400532.8156981-653-271128535331532/AnsiballZ_stat.py'
Nov 29 07:15:33 compute-2 sudo[91559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:33 compute-2 python3.9[91561]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:15:33 compute-2 sudo[91559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:33.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:33 compute-2 ceph-mon[77138]: 9.8 scrub starts
Nov 29 07:15:33 compute-2 ceph-mon[77138]: 9.8 scrub ok
Nov 29 07:15:33 compute-2 ceph-mon[77138]: pgmap v307: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 07:15:33 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 07:15:33 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 07:15:34 compute-2 sudo[91713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujxyykcnfhdslpvahpyxmrxsgwzxmokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400533.954553-693-165706951456946/AnsiballZ_getent.py'
Nov 29 07:15:34 compute-2 sudo[91713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:34.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:34 compute-2 python3.9[91715]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:15:34 compute-2 sudo[91713]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:34 compute-2 ceph-mon[77138]: 10.17 scrub starts
Nov 29 07:15:34 compute-2 ceph-mon[77138]: 10.17 scrub ok
Nov 29 07:15:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:15:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 07:15:34 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 125 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=125) [2] r=0 lpr=125 pi=[90,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 29 07:15:34 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 29 07:15:35 compute-2 sudo[91841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:35 compute-2 sudo[91841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:35 compute-2 sudo[91841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:35 compute-2 sudo[91891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qghowtmimcjnjqeqvfpejjohjeqwmbov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400534.9159403-722-226807106420058/AnsiballZ_getent.py'
Nov 29 07:15:35 compute-2 sudo[91891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:35 compute-2 sudo[91895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:35 compute-2 sudo[91895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:35 compute-2 sudo[91895]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:35 compute-2 python3.9[91894]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:15:35 compute-2 sudo[91891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:35 compute-2 ceph-mon[77138]: 9.9 scrub starts
Nov 29 07:15:35 compute-2 ceph-mon[77138]: 9.9 scrub ok
Nov 29 07:15:35 compute-2 ceph-mon[77138]: pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:15:35 compute-2 ceph-mon[77138]: osdmap e125: 3 total, 3 up, 3 in
Nov 29 07:15:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 07:15:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=126) [2]/[1] r=-1 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:35 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=126) [2]/[1] r=-1 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:36 compute-2 sudo[92070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwaqqvdnhdajefwezktdgmaggjhkvxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400535.605038-746-245353932575961/AnsiballZ_group.py'
Nov 29 07:15:36 compute-2 sudo[92070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:36 compute-2 python3.9[92072]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:15:36 compute-2 sudo[92070]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:36.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:36 compute-2 sudo[92222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxvpgbkbielaytqvsjtujbdnsniefdmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400536.5781598-773-78886605524092/AnsiballZ_file.py'
Nov 29 07:15:36 compute-2 sudo[92222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 07:15:36 compute-2 ceph-mon[77138]: 9.16 scrub starts
Nov 29 07:15:36 compute-2 ceph-mon[77138]: 9.16 scrub ok
Nov 29 07:15:36 compute-2 ceph-mon[77138]: 9.1 scrub starts
Nov 29 07:15:36 compute-2 ceph-mon[77138]: 9.1 scrub ok
Nov 29 07:15:36 compute-2 ceph-mon[77138]: osdmap e126: 3 total, 3 up, 3 in
Nov 29 07:15:37 compute-2 python3.9[92224]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:15:37 compute-2 sudo[92222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:37.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 07:15:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 128 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=126/90 les/c/f=127/91/0 sis=128) [2] r=0 lpr=128 pi=[90,128)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:37 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 128 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=126/90 les/c/f=127/91/0 sis=128) [2] r=0 lpr=128 pi=[90,128)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:38 compute-2 sudo[92375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhmogxyueedznhoojkfnhmdndlqmbmkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400537.6758738-805-249780112425010/AnsiballZ_dnf.py'
Nov 29 07:15:38 compute-2 sudo[92375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:38 compute-2 python3.9[92377]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:15:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:38 compute-2 ceph-mon[77138]: pgmap v311: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:38 compute-2 ceph-mon[77138]: osdmap e127: 3 total, 3 up, 3 in
Nov 29 07:15:38 compute-2 ceph-mon[77138]: osdmap e128: 3 total, 3 up, 3 in
Nov 29 07:15:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 07:15:39 compute-2 ceph-osd[79833]: osd.2 pg_epoch: 129 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=128/129 n=5 ec=56/47 lis/c=126/90 les/c/f=127/91/0 sis=128) [2] r=0 lpr=128 pi=[90,128)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:39.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:39 compute-2 sudo[92375]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:40 compute-2 ceph-mon[77138]: pgmap v314: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:40 compute-2 ceph-mon[77138]: 10.1a scrub starts
Nov 29 07:15:40 compute-2 ceph-mon[77138]: 10.1a scrub ok
Nov 29 07:15:40 compute-2 ceph-mon[77138]: osdmap e129: 3 total, 3 up, 3 in
Nov 29 07:15:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:40.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:40 compute-2 sudo[92529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyvcthywmieffnhfdjwmnnhujasejumy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400540.446374-830-141752494823355/AnsiballZ_file.py'
Nov 29 07:15:40 compute-2 sudo[92529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 29 07:15:40 compute-2 ceph-osd[79833]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 29 07:15:40 compute-2 python3.9[92531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:15:40 compute-2 sudo[92529]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:15:41 compute-2 ceph-mon[77138]: 9.1d deep-scrub starts
Nov 29 07:15:41 compute-2 ceph-mon[77138]: 9.1d deep-scrub ok
Nov 29 07:15:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 07:15:41 compute-2 sudo[92682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efaradxducmmqnesnthsgazccnrarvie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400541.1707673-853-147690590677998/AnsiballZ_stat.py'
Nov 29 07:15:41 compute-2 sudo[92682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:41 compute-2 python3.9[92684]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:15:41 compute-2 sudo[92682]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:41 compute-2 sudo[92760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdkyvyytamxdmfjrloyugeoutjrvzovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400541.1707673-853-147690590677998/AnsiballZ_file.py'
Nov 29 07:15:41 compute-2 sudo[92760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:42 compute-2 python3.9[92762]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:15:42 compute-2 sudo[92760]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:42 compute-2 ceph-mon[77138]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 141 B/s, 3 objects/s recovering
Nov 29 07:15:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:15:42 compute-2 ceph-mon[77138]: osdmap e130: 3 total, 3 up, 3 in
Nov 29 07:15:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 07:15:42 compute-2 sudo[92912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrqdfkggeuunmvrxjsexhcusyocrwvvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400542.384802-893-139772490444371/AnsiballZ_stat.py'
Nov 29 07:15:42 compute-2 sudo[92912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:42 compute-2 python3.9[92914]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:15:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:42 compute-2 sudo[92912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:43 compute-2 sudo[92991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdeyhmomkxapieeodhmcrgdrimynmkoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400542.384802-893-139772490444371/AnsiballZ_file.py'
Nov 29 07:15:43 compute-2 sudo[92991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:43 compute-2 python3.9[92993]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:15:43 compute-2 sudo[92991]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:43 compute-2 ceph-mon[77138]: 8.1a scrub starts
Nov 29 07:15:43 compute-2 ceph-mon[77138]: 8.1a scrub ok
Nov 29 07:15:43 compute-2 ceph-mon[77138]: pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 119 B/s, 2 objects/s recovering
Nov 29 07:15:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:43 compute-2 ceph-mon[77138]: osdmap e131: 3 total, 3 up, 3 in
Nov 29 07:15:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 07:15:44 compute-2 sudo[93143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxnfpmepoxkyaxfbcpoplcxlalpeqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400544.0140116-937-125429563578318/AnsiballZ_dnf.py'
Nov 29 07:15:44 compute-2 sudo[93143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:44.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:44 compute-2 python3.9[93145]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:15:44 compute-2 ceph-mon[77138]: 9.2 scrub starts
Nov 29 07:15:44 compute-2 ceph-mon[77138]: 9.2 scrub ok
Nov 29 07:15:44 compute-2 ceph-mon[77138]: 10.1c scrub starts
Nov 29 07:15:44 compute-2 ceph-mon[77138]: 10.1c scrub ok
Nov 29 07:15:44 compute-2 ceph-mon[77138]: osdmap e132: 3 total, 3 up, 3 in
Nov 29 07:15:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 07:15:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:45 compute-2 ceph-mon[77138]: pgmap v321: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 122 B/s, 1 objects/s recovering
Nov 29 07:15:45 compute-2 ceph-mon[77138]: osdmap e133: 3 total, 3 up, 3 in
Nov 29 07:15:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 07:15:45 compute-2 sudo[93143]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:46.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:46 compute-2 python3.9[93297]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:15:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:47 compute-2 ceph-mon[77138]: osdmap e134: 3 total, 3 up, 3 in
Nov 29 07:15:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:15:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 07:15:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:48 compute-2 python3.9[93450]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:15:48 compute-2 ceph-mon[77138]: pgmap v324: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:48 compute-2 ceph-mon[77138]: 8.1d scrub starts
Nov 29 07:15:48 compute-2 ceph-mon[77138]: 8.1d scrub ok
Nov 29 07:15:48 compute-2 ceph-mon[77138]: 8.1e scrub starts
Nov 29 07:15:48 compute-2 ceph-mon[77138]: 8.1e scrub ok
Nov 29 07:15:48 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:15:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:49 compute-2 python3.9[93601]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:15:50 compute-2 ceph-mon[77138]: pgmap v326: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:50 compute-2 ceph-mon[77138]: 9.4 scrub starts
Nov 29 07:15:50 compute-2 ceph-mon[77138]: 9.4 scrub ok
Nov 29 07:15:50 compute-2 ceph-mon[77138]: 10.1d scrub starts
Nov 29 07:15:50 compute-2 ceph-mon[77138]: 10.1d scrub ok
Nov 29 07:15:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:50 compute-2 sudo[93751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqysbuoslwabecbrwzplhhxxhsluxfmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400550.299767-1061-201371794916839/AnsiballZ_systemd.py'
Nov 29 07:15:50 compute-2 sudo[93751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:51 compute-2 python3.9[93753]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:15:51 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:15:51 compute-2 ceph-mon[77138]: pgmap v327: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:51 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:15:51 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:15:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:51.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:51 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:15:51 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:15:51 compute-2 sudo[93751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:52 compute-2 python3.9[93916]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:15:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:52 compute-2 ceph-mon[77138]: 9.c deep-scrub starts
Nov 29 07:15:52 compute-2 ceph-mon[77138]: 9.c deep-scrub ok
Nov 29 07:15:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:15:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:53.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:53 compute-2 ceph-mon[77138]: 10.1f deep-scrub starts
Nov 29 07:15:53 compute-2 ceph-mon[77138]: 10.1f deep-scrub ok
Nov 29 07:15:53 compute-2 ceph-mon[77138]: pgmap v328: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:15:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:54 compute-2 ceph-mon[77138]: 8.4 scrub starts
Nov 29 07:15:54 compute-2 ceph-mon[77138]: 8.4 scrub ok
Nov 29 07:15:55 compute-2 sudo[93943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:55 compute-2 sudo[93943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:55 compute-2 sudo[93943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:55 compute-2 sudo[93968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:55 compute-2 sudo[93968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:55 compute-2 sudo[93968]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:55 compute-2 ceph-mon[77138]: pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 1 objects/s recovering
Nov 29 07:15:55 compute-2 ceph-mon[77138]: 9.14 deep-scrub starts
Nov 29 07:15:55 compute-2 ceph-mon[77138]: 9.14 deep-scrub ok
Nov 29 07:15:55 compute-2 sudo[94118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtbyavleqqspdthhxnzxsigprslvwmlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400555.6208553-1232-237667678211096/AnsiballZ_systemd.py'
Nov 29 07:15:55 compute-2 sudo[94118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:56 compute-2 python3.9[94120]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:15:56 compute-2 sudo[94118]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:56 compute-2 ceph-mon[77138]: 8.14 deep-scrub starts
Nov 29 07:15:56 compute-2 ceph-mon[77138]: 8.14 deep-scrub ok
Nov 29 07:15:56 compute-2 sudo[94272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wufaijgjbzngpmzymyrgvsvrxkcwbutv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400556.4056823-1232-107696797885477/AnsiballZ_systemd.py'
Nov 29 07:15:56 compute-2 sudo[94272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:57 compute-2 python3.9[94274]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:15:57 compute-2 sudo[94272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:57 compute-2 systemd[72591]: Created slice User Background Tasks Slice.
Nov 29 07:15:57 compute-2 systemd[72591]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 07:15:57 compute-2 systemd[72591]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 07:15:57 compute-2 sshd-session[87226]: Connection closed by 192.168.122.30 port 52306
Nov 29 07:15:57 compute-2 sshd-session[87223]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:15:57 compute-2 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 07:15:57 compute-2 systemd[1]: session-34.scope: Consumed 1min 4.577s CPU time.
Nov 29 07:15:57 compute-2 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Nov 29 07:15:57 compute-2 systemd-logind[787]: Removed session 34.
Nov 29 07:15:57 compute-2 ceph-mon[77138]: pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 07:15:57 compute-2 ceph-mon[77138]: 9.1c scrub starts
Nov 29 07:15:57 compute-2 ceph-mon[77138]: 9.1c scrub ok
Nov 29 07:15:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:15:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:15:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:58.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:15:58 compute-2 ceph-mon[77138]: 8.17 deep-scrub starts
Nov 29 07:15:58 compute-2 ceph-mon[77138]: 8.17 deep-scrub ok
Nov 29 07:15:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:15:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:15:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:15:59 compute-2 ceph-mon[77138]: pgmap v331: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:16:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:00.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:16:00 compute-2 ceph-mon[77138]: 8.1b scrub starts
Nov 29 07:16:00 compute-2 ceph-mon[77138]: 8.1b scrub ok
Nov 29 07:16:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:01 compute-2 ceph-mon[77138]: pgmap v332: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:01 compute-2 ceph-mon[77138]: 11.2 deep-scrub starts
Nov 29 07:16:01 compute-2 ceph-mon[77138]: 11.2 deep-scrub ok
Nov 29 07:16:01 compute-2 ceph-mon[77138]: 8.12 scrub starts
Nov 29 07:16:01 compute-2 ceph-mon[77138]: 8.12 scrub ok
Nov 29 07:16:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:04 compute-2 ceph-mon[77138]: pgmap v333: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:04 compute-2 sshd-session[94306]: Accepted publickey for zuul from 192.168.122.30 port 34934 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:16:04 compute-2 systemd-logind[787]: New session 35 of user zuul.
Nov 29 07:16:04 compute-2 systemd[1]: Started Session 35 of User zuul.
Nov 29 07:16:04 compute-2 sshd-session[94306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:16:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:16:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:16:05 compute-2 python3.9[94459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:05 compute-2 ceph-mon[77138]: pgmap v334: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:05.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:06 compute-2 ceph-mon[77138]: 8.18 scrub starts
Nov 29 07:16:06 compute-2 ceph-mon[77138]: 8.18 scrub ok
Nov 29 07:16:06 compute-2 sudo[94614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezeiewkpsqparljeijioypkogjxegwen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400565.9571326-75-176454217980995/AnsiballZ_getent.py'
Nov 29 07:16:06 compute-2 sudo[94614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:06 compute-2 python3.9[94616]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:16:06 compute-2 sudo[94614]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:07 compute-2 ceph-mon[77138]: pgmap v335: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:07.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:07 compute-2 sudo[94768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzwetepabgqsgkxbucbkhjflwddwwyem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400567.195803-111-109650528653704/AnsiballZ_setup.py'
Nov 29 07:16:07 compute-2 sudo[94768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:07 compute-2 python3.9[94770]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:16:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:08 compute-2 sudo[94768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:08 compute-2 sudo[94852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtkbhwkcuefbvniybeoeldallywyutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400567.195803-111-109650528653704/AnsiballZ_dnf.py'
Nov 29 07:16:08 compute-2 sudo[94852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:08.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:08 compute-2 python3.9[94854]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:16:08 compute-2 ceph-mon[77138]: 11.6 scrub starts
Nov 29 07:16:08 compute-2 ceph-mon[77138]: 11.6 scrub ok
Nov 29 07:16:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:09 compute-2 ceph-mon[77138]: 8.19 deep-scrub starts
Nov 29 07:16:09 compute-2 ceph-mon[77138]: 8.19 deep-scrub ok
Nov 29 07:16:09 compute-2 ceph-mon[77138]: pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:09 compute-2 ceph-mon[77138]: 11.9 scrub starts
Nov 29 07:16:09 compute-2 ceph-mon[77138]: 11.9 scrub ok
Nov 29 07:16:10 compute-2 sudo[94852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:10 compute-2 sudo[95006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwwihjoyrdxuiaawhydvcgelpwvusjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400570.626887-153-209419762007400/AnsiballZ_dnf.py'
Nov 29 07:16:10 compute-2 sudo[95006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:11 compute-2 python3.9[95008]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:16:11 compute-2 ceph-mon[77138]: 8.8 scrub starts
Nov 29 07:16:11 compute-2 ceph-mon[77138]: 8.8 scrub ok
Nov 29 07:16:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:11.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:12 compute-2 ceph-mon[77138]: pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:12 compute-2 sudo[95006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:12 compute-2 sudo[95035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:12 compute-2 sudo[95035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:12 compute-2 sudo[95035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:12 compute-2 sudo[95060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:16:12 compute-2 sudo[95060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:12 compute-2 sudo[95060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:12 compute-2 sudo[95085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:12 compute-2 sudo[95085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:12 compute-2 sudo[95085]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:12 compute-2 sudo[95110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:16:12 compute-2 sudo[95110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:13 compute-2 ceph-mon[77138]: pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:13 compute-2 podman[95206]: 2025-11-29 07:16:13.461291327 +0000 UTC m=+0.076236881 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:16:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:13.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:13 compute-2 podman[95206]: 2025-11-29 07:16:13.594185798 +0000 UTC m=+0.209131342 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:16:14 compute-2 sudo[95487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckyunuuvtbdyidxbwgmqnmubpevefrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400573.512816-177-2496160852303/AnsiballZ_systemd.py'
Nov 29 07:16:14 compute-2 sudo[95487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:14 compute-2 podman[95486]: 2025-11-29 07:16:14.202219736 +0000 UTC m=+0.055498049 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:16:14 compute-2 podman[95486]: 2025-11-29 07:16:14.210920526 +0000 UTC m=+0.064198819 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:16:14 compute-2 ceph-mon[77138]: 11.b scrub starts
Nov 29 07:16:14 compute-2 ceph-mon[77138]: 11.b scrub ok
Nov 29 07:16:14 compute-2 podman[95556]: 2025-11-29 07:16:14.42887344 +0000 UTC m=+0.052402665 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 07:16:14 compute-2 podman[95556]: 2025-11-29 07:16:14.438436863 +0000 UTC m=+0.061966078 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived)
Nov 29 07:16:14 compute-2 python3.9[95495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:16:14 compute-2 sudo[95110]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:14 compute-2 sudo[95487]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:15 compute-2 sudo[95742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:15 compute-2 sudo[95742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:15 compute-2 sudo[95742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:15 compute-2 sudo[95748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:15 compute-2 sudo[95748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:15 compute-2 sudo[95748]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:15 compute-2 sudo[95791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:16:15 compute-2 sudo[95791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:15 compute-2 sudo[95791]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:15 compute-2 sudo[95799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:15 compute-2 sudo[95799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:15 compute-2 sudo[95799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:15 compute-2 ceph-mon[77138]: 8.10 scrub starts
Nov 29 07:16:15 compute-2 ceph-mon[77138]: 8.10 scrub ok
Nov 29 07:16:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:15 compute-2 ceph-mon[77138]: pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:15 compute-2 ceph-mon[77138]: 11.c scrub starts
Nov 29 07:16:15 compute-2 ceph-mon[77138]: 11.c scrub ok
Nov 29 07:16:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:15 compute-2 sudo[95841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:15 compute-2 sudo[95841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:15 compute-2 sudo[95841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:15 compute-2 python3.9[95741]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:15 compute-2 sudo[95867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:16:15 compute-2 sudo[95867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:16 compute-2 sudo[95867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:16 compute-2 sudo[96072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxenrsgqjnwroefvogrbjvipiaimblqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400576.078693-231-243723830802773/AnsiballZ_sefcontext.py'
Nov 29 07:16:16 compute-2 sudo[96072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:16 compute-2 python3.9[96074]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.12 scrub starts
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.12 scrub ok
Nov 29 07:16:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.d scrub starts
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.d scrub ok
Nov 29 07:16:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.1c scrub starts
Nov 29 07:16:16 compute-2 ceph-mon[77138]: 11.1c scrub ok
Nov 29 07:16:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:16:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:16:16 compute-2 sudo[96072]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:17 compute-2 ceph-mon[77138]: pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:16:17 compute-2 ceph-mon[77138]: 11.4 scrub starts
Nov 29 07:16:17 compute-2 ceph-mon[77138]: 11.4 scrub ok
Nov 29 07:16:18 compute-2 python3.9[96225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:18.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:18 compute-2 sudo[96381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxryvnrcthdfkhsntnzlmzqekvppbss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400578.5879793-285-134992618188955/AnsiballZ_dnf.py'
Nov 29 07:16:18 compute-2 sudo[96381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:19 compute-2 python3.9[96383]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:16:19 compute-2 ceph-mon[77138]: pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:20 compute-2 sshd-session[96386]: Invalid user sol from 45.148.10.240 port 55878
Nov 29 07:16:20 compute-2 sshd-session[96386]: Connection closed by invalid user sol 45.148.10.240 port 55878 [preauth]
Nov 29 07:16:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:20 compute-2 sudo[96381]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:20 compute-2 ceph-mon[77138]: 11.7 scrub starts
Nov 29 07:16:20 compute-2 ceph-mon[77138]: 11.7 scrub ok
Nov 29 07:16:20 compute-2 ceph-mon[77138]: 11.10 scrub starts
Nov 29 07:16:20 compute-2 ceph-mon[77138]: 11.10 scrub ok
Nov 29 07:16:21 compute-2 sudo[96538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrwyyrkxuimtzvphuyvgufvwymoqogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400580.989446-309-96815636981102/AnsiballZ_command.py'
Nov 29 07:16:21 compute-2 sudo[96538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:21 compute-2 python3.9[96540]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:16:22 compute-2 ceph-mon[77138]: pgmap v342: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:22 compute-2 ceph-mon[77138]: 11.14 scrub starts
Nov 29 07:16:22 compute-2 ceph-mon[77138]: 11.14 scrub ok
Nov 29 07:16:22 compute-2 sudo[96538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:23 compute-2 sudo[96826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzravpaycoodvhdowoojfufxiuzihkvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400582.7146943-334-10411869426257/AnsiballZ_file.py'
Nov 29 07:16:23 compute-2 sudo[96826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:23 compute-2 python3.9[96828]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:16:23 compute-2 sudo[96826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:24 compute-2 python3.9[96978]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:16:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:24 compute-2 ceph-mon[77138]: 11.f scrub starts
Nov 29 07:16:24 compute-2 ceph-mon[77138]: 11.f scrub ok
Nov 29 07:16:25 compute-2 sudo[97131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbjkgaewvtlyvwdrhlceryccqwyxtczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400584.7022886-381-73996981643826/AnsiballZ_dnf.py'
Nov 29 07:16:25 compute-2 sudo[97131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:25 compute-2 python3.9[97133]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:16:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:25 compute-2 ceph-mon[77138]: pgmap v343: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:25 compute-2 ceph-mon[77138]: 11.1 scrub starts
Nov 29 07:16:25 compute-2 ceph-mon[77138]: 11.1 scrub ok
Nov 29 07:16:25 compute-2 ceph-mon[77138]: pgmap v344: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:26.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:26 compute-2 sudo[97131]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:27 compute-2 sudo[97285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgkwroeqleichbucxcqqqzhihfwewgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400587.0624042-408-58915389157779/AnsiballZ_dnf.py'
Nov 29 07:16:27 compute-2 sudo[97285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:27 compute-2 python3.9[97287]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:16:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:28 compute-2 ceph-mon[77138]: pgmap v345: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:28 compute-2 ceph-mon[77138]: 11.1b scrub starts
Nov 29 07:16:28 compute-2 ceph-mon[77138]: 11.1b scrub ok
Nov 29 07:16:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:28.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:28 compute-2 sudo[97289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:28 compute-2 sudo[97289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:28 compute-2 sudo[97289]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:28 compute-2 sudo[97314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:16:28 compute-2 sudo[97314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:28 compute-2 sudo[97314]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:29 compute-2 sudo[97285]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:16:29 compute-2 ceph-mon[77138]: pgmap v346: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:30 compute-2 sudo[97489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtqbeiiaykamyzbuyyynaigoyirifck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400590.4489343-444-97466784330794/AnsiballZ_stat.py'
Nov 29 07:16:30 compute-2 sudo[97489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:30 compute-2 ceph-mon[77138]: 11.1d scrub starts
Nov 29 07:16:30 compute-2 ceph-mon[77138]: 11.1d scrub ok
Nov 29 07:16:30 compute-2 ceph-mon[77138]: 11.1a scrub starts
Nov 29 07:16:30 compute-2 ceph-mon[77138]: 11.1a scrub ok
Nov 29 07:16:30 compute-2 python3.9[97491]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:16:31 compute-2 sudo[97489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:31 compute-2 sudo[97644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livkmvxgijsxgtaeoqrbgpurnftwntmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400591.2399397-468-52276772034696/AnsiballZ_slurp.py'
Nov 29 07:16:31 compute-2 sudo[97644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:31 compute-2 python3.9[97646]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 07:16:31 compute-2 ceph-mon[77138]: pgmap v347: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:31 compute-2 sudo[97644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:33 compute-2 sshd-session[94309]: Connection closed by 192.168.122.30 port 34934
Nov 29 07:16:33 compute-2 sshd-session[94306]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:16:33 compute-2 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 07:16:33 compute-2 systemd[1]: session-35.scope: Consumed 19.160s CPU time.
Nov 29 07:16:33 compute-2 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Nov 29 07:16:33 compute-2 systemd-logind[787]: Removed session 35.
Nov 29 07:16:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:35 compute-2 sudo[97673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:35 compute-2 sudo[97673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:35 compute-2 sudo[97673]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:35 compute-2 sudo[97698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:35 compute-2 sudo[97698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:35 compute-2 sudo[97698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:16:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:38.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:39 compute-2 ceph-mon[77138]: pgmap v348: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.11 scrub starts
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.11 scrub ok
Nov 29 07:16:40 compute-2 ceph-mon[77138]: pgmap v349: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.15 scrub starts
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.15 scrub ok
Nov 29 07:16:40 compute-2 ceph-mon[77138]: pgmap v350: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:40 compute-2 ceph-mon[77138]: pgmap v351: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.5 scrub starts
Nov 29 07:16:40 compute-2 ceph-mon[77138]: 11.5 scrub ok
Nov 29 07:16:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 07:16:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 07:16:42 compute-2 ceph-mon[77138]: 11.18 scrub starts
Nov 29 07:16:42 compute-2 ceph-mon[77138]: 11.18 scrub ok
Nov 29 07:16:42 compute-2 ceph-mon[77138]: pgmap v352: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:42 compute-2 ceph-mon[77138]: 11.1e deep-scrub starts
Nov 29 07:16:42 compute-2 ceph-mon[77138]: 11.1e deep-scrub ok
Nov 29 07:16:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:43.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:43 compute-2 ceph-mon[77138]: 9.e scrub starts
Nov 29 07:16:43 compute-2 ceph-mon[77138]: 9.e scrub ok
Nov 29 07:16:43 compute-2 ceph-mon[77138]: pgmap v353: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 07:16:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:45.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 07:16:46 compute-2 ceph-mon[77138]: 9.6 scrub starts
Nov 29 07:16:46 compute-2 ceph-mon[77138]: 9.6 scrub ok
Nov 29 07:16:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:47.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:48 compute-2 ceph-mon[77138]: 11.1f scrub starts
Nov 29 07:16:48 compute-2 ceph-mon[77138]: 11.1f scrub ok
Nov 29 07:16:48 compute-2 ceph-mon[77138]: pgmap v354: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:48 compute-2 ceph-mon[77138]: 9.a scrub starts
Nov 29 07:16:48 compute-2 ceph-mon[77138]: 9.a scrub ok
Nov 29 07:16:48 compute-2 ceph-mon[77138]: pgmap v355: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:49.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:50 compute-2 ceph-mon[77138]: pgmap v356: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:50 compute-2 sshd-session[97730]: Accepted publickey for zuul from 192.168.122.30 port 42184 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:16:50 compute-2 systemd-logind[787]: New session 36 of user zuul.
Nov 29 07:16:50 compute-2 systemd[1]: Started Session 36 of User zuul.
Nov 29 07:16:50 compute-2 sshd-session[97730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:16:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:50.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:51.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:51 compute-2 python3.9[97884]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:52 compute-2 ceph-mon[77138]: pgmap v357: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:52 compute-2 python3.9[98038]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:16:53 compute-2 ceph-mon[77138]: 9.d scrub starts
Nov 29 07:16:53 compute-2 ceph-mon[77138]: 9.d scrub ok
Nov 29 07:16:53 compute-2 ceph-mon[77138]: pgmap v358: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:53.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:54 compute-2 python3.9[98232]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:16:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:54.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:55 compute-2 sshd-session[97733]: Connection closed by 192.168.122.30 port 42184
Nov 29 07:16:55 compute-2 sshd-session[97730]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:16:55 compute-2 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Nov 29 07:16:55 compute-2 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 07:16:55 compute-2 systemd[1]: session-36.scope: Consumed 2.342s CPU time.
Nov 29 07:16:55 compute-2 systemd-logind[787]: Removed session 36.
Nov 29 07:16:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:55.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:55 compute-2 sudo[98259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:55 compute-2 sudo[98259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:55 compute-2 sudo[98259]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:55 compute-2 sudo[98284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:55 compute-2 sudo[98284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:55 compute-2 sudo[98284]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:55 compute-2 ceph-mon[77138]: pgmap v359: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:55 compute-2 ceph-mon[77138]: 10.8 deep-scrub starts
Nov 29 07:16:55 compute-2 ceph-mon[77138]: 10.8 deep-scrub ok
Nov 29 07:16:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:16:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:16:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:57.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:16:58 compute-2 ceph-mon[77138]: pgmap v360: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:16:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:16:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:16:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:16:59 compute-2 ceph-mon[77138]: pgmap v361: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:01.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:01 compute-2 ceph-mon[77138]: pgmap v362: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 07:17:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 07:17:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:02 compute-2 sshd-session[98312]: Accepted publickey for zuul from 192.168.122.30 port 58948 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:17:02 compute-2 systemd-logind[787]: New session 37 of user zuul.
Nov 29 07:17:02 compute-2 systemd[1]: Started Session 37 of User zuul.
Nov 29 07:17:02 compute-2 sshd-session[98312]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:17:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:04 compute-2 python3.9[98466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:05.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:05 compute-2 python3.9[98621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:05 compute-2 ceph-mon[77138]: pgmap v363: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:06 compute-2 sudo[98775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkzivmrulgqyylhypxqogrxarxemiuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400626.3310897-87-146018452688109/AnsiballZ_setup.py'
Nov 29 07:17:06 compute-2 sudo[98775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:06.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:06 compute-2 python3.9[98777]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:17:07 compute-2 sudo[98775]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:07.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:07 compute-2 sudo[98860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntlvbgdscsancztjzrvdlprfguacshsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400626.3310897-87-146018452688109/AnsiballZ_dnf.py'
Nov 29 07:17:07 compute-2 sudo[98860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:07 compute-2 python3.9[98862]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:17:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:08 compute-2 ceph-mon[77138]: 9.f scrub starts
Nov 29 07:17:08 compute-2 ceph-mon[77138]: 9.f scrub ok
Nov 29 07:17:08 compute-2 ceph-mon[77138]: pgmap v364: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:08 compute-2 ceph-mon[77138]: 9.10 scrub starts
Nov 29 07:17:08 compute-2 ceph-mon[77138]: 9.10 scrub ok
Nov 29 07:17:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 07:17:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:08.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 07:17:09 compute-2 sudo[98860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 07:17:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:09.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 07:17:10 compute-2 ceph-mon[77138]: pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:10 compute-2 ceph-mon[77138]: 9.11 scrub starts
Nov 29 07:17:10 compute-2 ceph-mon[77138]: 9.11 scrub ok
Nov 29 07:17:10 compute-2 ceph-mon[77138]: 9.12 scrub starts
Nov 29 07:17:10 compute-2 ceph-mon[77138]: 9.12 scrub ok
Nov 29 07:17:10 compute-2 ceph-mon[77138]: pgmap v366: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:10.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:10 compute-2 sudo[99014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfipcxnbqlwujvdigngavnjhapivlhnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400630.2254667-123-114732432836876/AnsiballZ_setup.py'
Nov 29 07:17:10 compute-2 sudo[99014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:10 compute-2 python3.9[99016]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:17:11 compute-2 sudo[99014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:12 compute-2 sudo[99210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmnbrpucsyrcbzkjttsforyixdeduela ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400631.7080967-157-134908254610725/AnsiballZ_file.py'
Nov 29 07:17:12 compute-2 sudo[99210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:12 compute-2 ceph-mon[77138]: pgmap v367: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:12 compute-2 python3.9[99212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:12 compute-2 sudo[99210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:12.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 07:17:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:13.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 07:17:13 compute-2 sudo[99363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlburtsizcxfwuimauniuwbeeyqovwde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400633.2237785-183-247630416888028/AnsiballZ_command.py'
Nov 29 07:17:13 compute-2 sudo[99363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:13 compute-2 ceph-mon[77138]: 9.15 deep-scrub starts
Nov 29 07:17:13 compute-2 ceph-mon[77138]: 9.15 deep-scrub ok
Nov 29 07:17:13 compute-2 ceph-mon[77138]: pgmap v368: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:13 compute-2 python3.9[99365]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:17:14 compute-2 sudo[99363]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:14.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:15 compute-2 sudo[99529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlskcfjajpqrpiovlowltaxzjvipxipd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400634.7060328-204-10522895663277/AnsiballZ_stat.py'
Nov 29 07:17:15 compute-2 sudo[99529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:15 compute-2 python3.9[99531]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:17:15 compute-2 sudo[99529]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:15 compute-2 sudo[99581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:15 compute-2 sudo[99581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:15 compute-2 sudo[99581]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:15 compute-2 sudo[99632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubvutbribfumnjoivsqshlhqwohnhfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400634.7060328-204-10522895663277/AnsiballZ_file.py'
Nov 29 07:17:15 compute-2 sudo[99632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:15 compute-2 sudo[99633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:15 compute-2 sudo[99633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:15 compute-2 sudo[99633]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:16 compute-2 ceph-mon[77138]: 10.19 scrub starts
Nov 29 07:17:16 compute-2 ceph-mon[77138]: 10.19 scrub ok
Nov 29 07:17:16 compute-2 ceph-mon[77138]: 10.1b deep-scrub starts
Nov 29 07:17:16 compute-2 python3.9[99641]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:16 compute-2 sudo[99632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:16.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:17 compute-2 sudo[99810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbqdqmvqbvnoeeabyjxelvhxyraithoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400636.687056-241-232810738256453/AnsiballZ_stat.py'
Nov 29 07:17:17 compute-2 sudo[99810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:17 compute-2 python3.9[99812]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:17:17 compute-2 sudo[99810]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:17 compute-2 sudo[99888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxuqcoysgzdjypkmdmhfcfglwnqzkci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400636.687056-241-232810738256453/AnsiballZ_file.py'
Nov 29 07:17:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:17 compute-2 sudo[99888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:18 compute-2 python3.9[99890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:18 compute-2 sudo[99888]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:19 compute-2 sudo[100041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmkojpikayaatnevqqtrbjpwkznoggnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400638.736713-280-68232225724695/AnsiballZ_ini_file.py'
Nov 29 07:17:19 compute-2 sudo[100041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:19 compute-2 python3.9[100043]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:19 compute-2 sudo[100041]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:19 compute-2 ceph-mon[77138]: 10.1b deep-scrub ok
Nov 29 07:17:19 compute-2 ceph-mon[77138]: pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:19 compute-2 ceph-mon[77138]: 10.15 scrub starts
Nov 29 07:17:19 compute-2 ceph-mon[77138]: 10.15 scrub ok
Nov 29 07:17:19 compute-2 ceph-mon[77138]: 10.2 scrub starts
Nov 29 07:17:19 compute-2 ceph-mon[77138]: 10.2 scrub ok
Nov 29 07:17:19 compute-2 ceph-mon[77138]: pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:20 compute-2 sudo[100193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjgcddnozlnvyoajdjxhrzxmiojhmmxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400639.6920931-280-252329853867361/AnsiballZ_ini_file.py'
Nov 29 07:17:20 compute-2 sudo[100193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:20 compute-2 python3.9[100195]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:20 compute-2 sudo[100193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:20.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:20 compute-2 sudo[100345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzxcblacahkfidrozmlloisbyytouck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400640.5297904-280-142216189915291/AnsiballZ_ini_file.py'
Nov 29 07:17:20 compute-2 sudo[100345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:21 compute-2 python3.9[100347]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:21 compute-2 sudo[100345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:21 compute-2 ceph-mon[77138]: pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:21 compute-2 ceph-mon[77138]: 10.13 scrub starts
Nov 29 07:17:21 compute-2 ceph-mon[77138]: 10.13 scrub ok
Nov 29 07:17:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:21.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:21 compute-2 sudo[100498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsdzfgsyeqfsckhuenhqdwmoaxtquzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400641.3077917-280-118203151439925/AnsiballZ_ini_file.py'
Nov 29 07:17:21 compute-2 sudo[100498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:21 compute-2 python3.9[100500]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:21 compute-2 sudo[100498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:22 compute-2 ceph-mon[77138]: pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:22.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:23 compute-2 sudo[100651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpqratepmjigcnprrbyoojjijssbtheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400643.0250692-372-117508310945724/AnsiballZ_dnf.py'
Nov 29 07:17:23 compute-2 sudo[100651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:23.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:23 compute-2 python3.9[100653]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:17:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:24.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:25 compute-2 sudo[100651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:28 compute-2 sudo[100681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:28 compute-2 sudo[100681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:28 compute-2 sudo[100681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:28 compute-2 sudo[100706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:17:28 compute-2 sudo[100706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:28 compute-2 sudo[100706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:29 compute-2 sudo[100732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:29 compute-2 sudo[100732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:29 compute-2 sudo[100732]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:29 compute-2 sudo[100757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:17:29 compute-2 sudo[100757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:29 compute-2 podman[100855]: 2025-11-29 07:17:29.538254081 +0000 UTC m=+0.058850121 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:17:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:29.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:29 compute-2 podman[100855]: 2025-11-29 07:17:29.643772088 +0000 UTC m=+0.164368118 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:17:30 compute-2 podman[101010]: 2025-11-29 07:17:30.215073893 +0000 UTC m=+0.052070955 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:17:30 compute-2 podman[101010]: 2025-11-29 07:17:30.227695098 +0000 UTC m=+0.064692150 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:17:30 compute-2 podman[101074]: 2025-11-29 07:17:30.413069578 +0000 UTC m=+0.048450603 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, name=keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Nov 29 07:17:30 compute-2 podman[101074]: 2025-11-29 07:17:30.426647614 +0000 UTC m=+0.062028619 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, distribution-scope=public)
Nov 29 07:17:30 compute-2 sudo[100757]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:30 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:17:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:31.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 252..921) lease_timeout -- calling new election
Nov 29 07:17:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:33.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:17:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:35.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:35 compute-2 sudo[101111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:35 compute-2 sudo[101111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:35 compute-2 sudo[101111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:36 compute-2 sudo[101136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:36 compute-2 sudo[101136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:36 compute-2 sudo[101136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:37.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:17:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:39.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:40 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc MDS connection to Monitors appears to be laggy; 17.9366s since last acked beacon
Nov 29 07:17:40 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:17:40 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:17:40 compute-2 ceph-mon[77138]: paxos.1).electionLogic(16) init, last seen epoch 16
Nov 29 07:17:40 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:17:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:40.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:41.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:42 compute-2 sudo[101289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhlqrsblniqdypxokgcrxpftnroyeqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400661.9329522-405-207196417292578/AnsiballZ_setup.py'
Nov 29 07:17:42 compute-2 sudo[101289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:42 compute-2 python3.9[101291]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:42 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:17:42 compute-2 sudo[101289]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:43 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:17:43 compute-2 sudo[101444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nklashaayqjfoafrouizgfnztcoueson ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400663.2652934-430-191378557502080/AnsiballZ_stat.py'
Nov 29 07:17:43 compute-2 sudo[101444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:43.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:43 compute-2 python3.9[101446]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:17:43 compute-2 sudo[101444]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:44 compute-2 ceph-mon[77138]: pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:17:44 compute-2 sudo[101608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibrhxklopmiafrxmnnzejbbieinmmawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400664.2032044-457-271589660346476/AnsiballZ_stat.py'
Nov 29 07:17:44 compute-2 sudo[101608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:44 compute-2 sudo[101585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:44 compute-2 sudo[101585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:44 compute-2 sudo[101585]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:44 compute-2 sudo[101624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:17:44 compute-2 sudo[101624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:44 compute-2 sudo[101624]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:44 compute-2 sudo[101649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:44 compute-2 sudo[101649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:44 compute-2 sudo[101649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:44 compute-2 python3.9[101621]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:17:44 compute-2 sudo[101674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:17:44 compute-2 sudo[101674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:44 compute-2 sudo[101608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:44.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:45 compute-2 sudo[101674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.18 scrub starts
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.18 scrub ok
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.14 scrub starts
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.5 scrub starts
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 9.19 scrub starts
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.5 scrub ok
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 9.19 scrub ok
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 10.14 scrub ok
Nov 29 07:17:45 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 9.1a scrub starts
Nov 29 07:17:45 compute-2 ceph-mon[77138]: pgmap v383: 305 pgs: 3 active+clean+scrubbing, 302 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:45 compute-2 ceph-mon[77138]: 9.1a scrub ok
Nov 29 07:17:45 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:17:45 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:17:45 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:17:45 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:17:45 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 8m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:17:45 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:17:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:17:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:17:45 compute-2 sudo[101880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mifdffurxxtvehemfwfnqvuwmrflroyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400665.0941336-486-225617017075550/AnsiballZ_command.py'
Nov 29 07:17:45 compute-2 sudo[101880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:45 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:17:45 compute-2 python3.9[101882]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:17:45 compute-2 sudo[101880]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:45.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:46 compute-2 sudo[102033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnrrkldhckuaguoadcrvflwhzhnmjrzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400665.9520354-517-157328944029925/AnsiballZ_service_facts.py'
Nov 29 07:17:46 compute-2 sudo[102033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:17:46 compute-2 ceph-mon[77138]: pgmap v384: 305 pgs: 3 active+clean+scrubbing, 302 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:17:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:17:46 compute-2 python3.9[102035]: ansible-service_facts Invoked
Nov 29 07:17:46 compute-2 network[102052]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:17:46 compute-2 network[102053]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:17:46 compute-2 network[102054]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:17:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc  MDS is no longer laggy
Nov 29 07:17:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:48 compute-2 ceph-mon[77138]: 9.1b scrub starts
Nov 29 07:17:48 compute-2 ceph-mon[77138]: 9.1b scrub ok
Nov 29 07:17:48 compute-2 ceph-mon[77138]: pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:48 compute-2 ceph-mon[77138]: 9.1e scrub starts
Nov 29 07:17:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:49 compute-2 ceph-mon[77138]: 9.1e scrub ok
Nov 29 07:17:49 compute-2 ceph-mon[77138]: 9.1f scrub starts
Nov 29 07:17:49 compute-2 ceph-mon[77138]: 9.1f scrub ok
Nov 29 07:17:49 compute-2 ceph-mon[77138]: pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:50 compute-2 sudo[102033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:50.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:51 compute-2 sudo[102341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaatscwuqtexzinfuvnuqzqymaizcful ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764400671.3376427-562-76483585338736/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764400671.3376427-562-76483585338736/args'
Nov 29 07:17:51 compute-2 sudo[102341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:51 compute-2 sudo[102341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:52 compute-2 ceph-mon[77138]: pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:52 compute-2 sudo[102508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fydsmnybblathqivbrcjouwoumxqkcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400672.2489488-595-47834712619269/AnsiballZ_dnf.py'
Nov 29 07:17:52 compute-2 sudo[102508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:52.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:52 compute-2 python3.9[102510]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:17:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:53 compute-2 ceph-mon[77138]: pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:54 compute-2 sudo[102508]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:54.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:55 compute-2 sudo[102590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:55 compute-2 sudo[102590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:55 compute-2 sudo[102590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:55 compute-2 sudo[102615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:17:55 compute-2 sudo[102615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:55 compute-2 sudo[102615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:55.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:17:55 compute-2 sudo[102713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wllggaleraxnywycahfonooirlyimkmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400675.026539-634-6409590346674/AnsiballZ_package_facts.py'
Nov 29 07:17:55 compute-2 sudo[102713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:55 compute-2 ceph-mon[77138]: pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:17:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:17:55 compute-2 python3.9[102715]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:17:56 compute-2 sudo[102716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:56 compute-2 sudo[102716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:56 compute-2 sudo[102716]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:56 compute-2 sudo[102741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:56 compute-2 sudo[102741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:56 compute-2 sudo[102741]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:56 compute-2 sudo[102713]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:57 compute-2 ceph-mon[77138]: pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:17:57 compute-2 sudo[102917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffkosxbwhsnlqwngxzjgryfgqjmqysz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400677.1038074-664-176957491402969/AnsiballZ_stat.py'
Nov 29 07:17:57 compute-2 sudo[102917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:57.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:57 compute-2 python3.9[102919]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:17:57 compute-2 sudo[102917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:57 compute-2 sudo[102995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yullbubedebvhcfwtwweypajobbzfnnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400677.1038074-664-176957491402969/AnsiballZ_file.py'
Nov 29 07:17:57 compute-2 sudo[102995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:17:58 compute-2 python3.9[102997]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:58 compute-2 sudo[102995]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:17:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:17:58 compute-2 sudo[103147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgcuafiflhmdgfsuhzrnpzqmhplvpvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400678.4910178-700-46176617654995/AnsiballZ_stat.py'
Nov 29 07:17:58 compute-2 sudo[103147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:59 compute-2 python3.9[103149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:17:59 compute-2 sudo[103147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:59 compute-2 sudo[103226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokcjcgkrkmysrjsygezsemxxatazlln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400678.4910178-700-46176617654995/AnsiballZ_file.py'
Nov 29 07:17:59 compute-2 sudo[103226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:59 compute-2 python3.9[103228]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:59 compute-2 sudo[103226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:17:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:17:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:18:00 compute-2 ceph-mon[77138]: pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:00.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:01 compute-2 sudo[103379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hduktybnlmfhxvqndpvyixbewwuxggfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400680.6160617-755-249848592421761/AnsiballZ_lineinfile.py'
Nov 29 07:18:01 compute-2 sudo[103379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:01 compute-2 python3.9[103381]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:01 compute-2 sudo[103379]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:02 compute-2 ceph-mon[77138]: pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:02.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:02 compute-2 sudo[103531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elocfiecibsvfvtzlndmtstflefulcdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400682.6361604-800-3632487464324/AnsiballZ_setup.py'
Nov 29 07:18:02 compute-2 sudo[103531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:03 compute-2 python3.9[103534]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:18:03 compute-2 ceph-mon[77138]: pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:03 compute-2 sudo[103531]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:03 compute-2 sudo[103616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opwzhrityoljwmgcegnslwfrzqwsgaoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400682.6361604-800-3632487464324/AnsiballZ_systemd.py'
Nov 29 07:18:03 compute-2 sudo[103616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:04 compute-2 python3.9[103618]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:04 compute-2 sudo[103616]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:05 compute-2 sshd-session[98316]: Connection closed by 192.168.122.30 port 58948
Nov 29 07:18:05 compute-2 sshd-session[98312]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:18:05 compute-2 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 07:18:05 compute-2 systemd[1]: session-37.scope: Consumed 24.313s CPU time.
Nov 29 07:18:05 compute-2 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Nov 29 07:18:05 compute-2 systemd-logind[787]: Removed session 37.
Nov 29 07:18:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:05 compute-2 ceph-mon[77138]: pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:18:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:18:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:08 compute-2 ceph-mon[77138]: pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:09.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:09 compute-2 ceph-mon[77138]: pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:10.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:11 compute-2 sshd-session[103649]: Accepted publickey for zuul from 192.168.122.30 port 48802 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:18:11 compute-2 systemd-logind[787]: New session 38 of user zuul.
Nov 29 07:18:11 compute-2 systemd[1]: Started Session 38 of User zuul.
Nov 29 07:18:11 compute-2 sshd-session[103649]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:18:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:18:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:18:11 compute-2 sudo[103802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goilowxnnoqxxkdtvdvdmchemxncfoet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400691.2558136-33-269495727330621/AnsiballZ_file.py'
Nov 29 07:18:11 compute-2 sudo[103802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:11 compute-2 python3.9[103804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:11 compute-2 sudo[103802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:12 compute-2 sudo[103954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjwhnuzyrreifavpcviijkgglumyxbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400692.202521-70-276182194598715/AnsiballZ_stat.py'
Nov 29 07:18:12 compute-2 sudo[103954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:12.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:12 compute-2 python3.9[103956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:12 compute-2 sudo[103954]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:12 compute-2 ceph-mon[77138]: pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:13 compute-2 sudo[104033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrnfmoqauwxuvvwgtxbjdjzcsmyaumv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400692.202521-70-276182194598715/AnsiballZ_file.py'
Nov 29 07:18:13 compute-2 sudo[104033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:13 compute-2 python3.9[104035]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:13 compute-2 sudo[104033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:13.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:13 compute-2 sshd-session[104042]: Invalid user solana from 45.148.10.240 port 46604
Nov 29 07:18:13 compute-2 sshd-session[103652]: Connection closed by 192.168.122.30 port 48802
Nov 29 07:18:13 compute-2 sshd-session[103649]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:18:13 compute-2 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 07:18:13 compute-2 systemd[1]: session-38.scope: Consumed 1.633s CPU time.
Nov 29 07:18:13 compute-2 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Nov 29 07:18:13 compute-2 systemd-logind[787]: Removed session 38.
Nov 29 07:18:13 compute-2 sshd-session[104042]: Connection closed by invalid user solana 45.148.10.240 port 46604 [preauth]
Nov 29 07:18:14 compute-2 ceph-mon[77138]: pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:14.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:15 compute-2 ceph-mon[77138]: pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:15.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:16 compute-2 sudo[104063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:16 compute-2 sudo[104063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:16 compute-2 sudo[104063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:16 compute-2 sudo[104088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:16 compute-2 sudo[104088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:16 compute-2 sudo[104088]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:17 compute-2 ceph-mon[77138]: pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:20 compute-2 ceph-mon[77138]: pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:20 compute-2 sshd-session[104115]: Accepted publickey for zuul from 192.168.122.30 port 46098 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:18:20 compute-2 systemd-logind[787]: New session 39 of user zuul.
Nov 29 07:18:20 compute-2 systemd[1]: Started Session 39 of User zuul.
Nov 29 07:18:20 compute-2 sshd-session[104115]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:18:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:21 compute-2 python3.9[104269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:18:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:22 compute-2 sudo[104423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ialiyhdhvgoxaaplcudgfdpilnzlzjgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400701.8235285-66-214932854903161/AnsiballZ_file.py'
Nov 29 07:18:22 compute-2 sudo[104423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:22 compute-2 ceph-mon[77138]: pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:22 compute-2 python3.9[104425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:22 compute-2 sudo[104423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:23 compute-2 sudo[104599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwnjovlslomajcwtzojllwgcgjpqpje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400702.753556-90-52916349712813/AnsiballZ_stat.py'
Nov 29 07:18:23 compute-2 sudo[104599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:23 compute-2 python3.9[104601]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:23 compute-2 sudo[104599]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:23 compute-2 sudo[104677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdnxomvebtmgddoseuuqpbpjhmcjlhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400702.753556-90-52916349712813/AnsiballZ_file.py'
Nov 29 07:18:23 compute-2 sudo[104677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:23.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:23 compute-2 python3.9[104679]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8eleu_x0 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:23 compute-2 sudo[104677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:23 compute-2 ceph-mon[77138]: pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:24 compute-2 sudo[104829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pditcuhjyuyvycqwfkxxlvupmrdqmhki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400704.51035-150-16901920639919/AnsiballZ_stat.py'
Nov 29 07:18:24 compute-2 sudo[104829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:24 compute-2 python3.9[104831]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:25 compute-2 sudo[104829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:25 compute-2 sudo[104908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grccokvqujgqratsmixbkzxhrmblcxju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400704.51035-150-16901920639919/AnsiballZ_file.py'
Nov 29 07:18:25 compute-2 sudo[104908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:25 compute-2 python3.9[104910]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._xwjle44 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:25 compute-2 sudo[104908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:25 compute-2 sudo[105060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgpqiwhnbjguqswvptzoadtossraatfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400705.6928973-189-127552923553315/AnsiballZ_file.py'
Nov 29 07:18:26 compute-2 sudo[105060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:26 compute-2 python3.9[105062]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:26 compute-2 sudo[105060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:26 compute-2 sudo[105212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzxnqytqlpmmjdptafynwcjqzrasyznh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400706.3964765-213-67563116197192/AnsiballZ_stat.py'
Nov 29 07:18:26 compute-2 sudo[105212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:26 compute-2 python3.9[105214]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:26 compute-2 sudo[105212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:27 compute-2 sudo[105291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-choyazskswxohhhjlnezpqdcyoveqtac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400706.3964765-213-67563116197192/AnsiballZ_file.py'
Nov 29 07:18:27 compute-2 sudo[105291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:27 compute-2 ceph-mon[77138]: pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:27 compute-2 python3.9[105293]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:27 compute-2 sudo[105291]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:27 compute-2 sudo[105443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cysvncfrhwfzypjsvqennszwqqbcywcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400707.531771-213-11042714698188/AnsiballZ_stat.py'
Nov 29 07:18:27 compute-2 sudo[105443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:28 compute-2 python3.9[105445]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:28 compute-2 sudo[105443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:28 compute-2 sudo[105521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizgbomryrmluudxyzzevgvqmpojgnuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400707.531771-213-11042714698188/AnsiballZ_file.py'
Nov 29 07:18:28 compute-2 sudo[105521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:28 compute-2 ceph-mon[77138]: pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:28 compute-2 python3.9[105523]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:28 compute-2 sudo[105521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:28.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:29 compute-2 sudo[105674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypptgoudkbwyfijnuxvmljuyxgbxtglx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400708.7883112-282-72001623341255/AnsiballZ_file.py'
Nov 29 07:18:29 compute-2 sudo[105674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:29 compute-2 python3.9[105676]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:29 compute-2 sudo[105674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:29 compute-2 sudo[105826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzqtjinkjjbrxcmnpxewisccdwewbdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400709.562356-306-152476251954152/AnsiballZ_stat.py'
Nov 29 07:18:29 compute-2 sudo[105826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:30 compute-2 python3.9[105828]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:30 compute-2 sudo[105826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:30 compute-2 sudo[105904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakuguzkgbihlfxezjkzsgcswxtimzbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400709.562356-306-152476251954152/AnsiballZ_file.py'
Nov 29 07:18:30 compute-2 sudo[105904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:30 compute-2 ceph-mon[77138]: pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:30 compute-2 python3.9[105906]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:30 compute-2 sudo[105904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:31 compute-2 sudo[106057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccygqgnluljytmehpkyufbnsuhnqibgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400710.934897-342-201176808069037/AnsiballZ_stat.py'
Nov 29 07:18:31 compute-2 sudo[106057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:31 compute-2 python3.9[106059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:31 compute-2 sudo[106057]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:31 compute-2 sudo[106135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkiyapafsjsullasschxcighsgqvxnxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400710.934897-342-201176808069037/AnsiballZ_file.py'
Nov 29 07:18:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:31 compute-2 sudo[106135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:31 compute-2 python3.9[106137]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:31 compute-2 sudo[106135]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:32 compute-2 sudo[106287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbjoezefrqkgiyhdxvoqodzbqrxpnio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400712.218816-378-136459403479146/AnsiballZ_systemd.py'
Nov 29 07:18:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:32.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:32 compute-2 sudo[106287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:33 compute-2 python3.9[106289]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:33 compute-2 systemd[1]: Reloading.
Nov 29 07:18:33 compute-2 ceph-mon[77138]: pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:33 compute-2 systemd-rc-local-generator[106314]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:18:33 compute-2 systemd-sysv-generator[106317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:18:33 compute-2 sudo[106287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:34.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:35.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:36 compute-2 sudo[106354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:36 compute-2 sudo[106354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:36 compute-2 sudo[106354]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:36 compute-2 sudo[106379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:36 compute-2 sudo[106379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:36 compute-2 sudo[106379]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:36.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:37 compute-2 ceph-mon[77138]: pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:37 compute-2 sudo[106530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwdatqxksehtbypmthvphiwrrivyjva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400717.5630267-402-194850678539927/AnsiballZ_stat.py'
Nov 29 07:18:37 compute-2 sudo[106530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:38 compute-2 python3.9[106532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:38 compute-2 sudo[106530]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:38 compute-2 sudo[106608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsrprufavtmtehcuuhplupdqnevdtiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400717.5630267-402-194850678539927/AnsiballZ_file.py'
Nov 29 07:18:38 compute-2 sudo[106608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:38 compute-2 python3.9[106610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:38 compute-2 sudo[106608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:38.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:39 compute-2 sudo[106761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjlfphubgeztzhkzgxseezgzcqlujxpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400719.1268628-438-256045182515437/AnsiballZ_stat.py'
Nov 29 07:18:39 compute-2 sudo[106761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:39 compute-2 ceph-mon[77138]: pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:39 compute-2 ceph-mon[77138]: pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:39 compute-2 python3.9[106763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:39 compute-2 sudo[106761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:39 compute-2 sudo[106839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbwtkrwdvjosifcowgryfoajlgcczcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400719.1268628-438-256045182515437/AnsiballZ_file.py'
Nov 29 07:18:39 compute-2 sudo[106839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:40 compute-2 python3.9[106841]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:40 compute-2 sudo[106839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:40 compute-2 ceph-mon[77138]: pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:40 compute-2 sudo[106991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brbqhdofgkcwwqpoiloppohlbmklqtnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400720.464692-475-187755378227955/AnsiballZ_systemd.py'
Nov 29 07:18:40 compute-2 sudo[106991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:41 compute-2 python3.9[106993]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:41 compute-2 systemd[1]: Reloading.
Nov 29 07:18:41 compute-2 systemd-sysv-generator[107025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:18:41 compute-2 systemd-rc-local-generator[107020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:18:41 compute-2 systemd[1]: Starting Create netns directory...
Nov 29 07:18:41 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:18:41 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:18:41 compute-2 systemd[1]: Finished Create netns directory.
Nov 29 07:18:41 compute-2 sudo[106991]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:42 compute-2 ceph-mon[77138]: pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:42 compute-2 python3.9[107184]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:18:42 compute-2 network[107201]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:18:42 compute-2 network[107202]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:18:42 compute-2 network[107203]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:18:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:42.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:43 compute-2 ceph-mon[77138]: pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:44.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:45 compute-2 ceph-mon[77138]: pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:47 compute-2 sudo[107466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwyfghzcmlilpnpyjsshnrpqtekpmxma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400727.4743483-553-192525489443520/AnsiballZ_stat.py'
Nov 29 07:18:47 compute-2 sudo[107466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:48 compute-2 python3.9[107468]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:48 compute-2 sudo[107466]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:48 compute-2 sudo[107544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slterwohezdnvuuiffmhabvwilwdacyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400727.4743483-553-192525489443520/AnsiballZ_file.py'
Nov 29 07:18:48 compute-2 sudo[107544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:48 compute-2 python3.9[107546]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:48 compute-2 sudo[107544]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:48.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:49 compute-2 sudo[107697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krggbctzdwtagnnsdpiidlpuzukrwowr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400729.3672826-592-256692700200213/AnsiballZ_file.py'
Nov 29 07:18:49 compute-2 sudo[107697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:49 compute-2 python3.9[107699]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:49 compute-2 sudo[107697]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:50 compute-2 sudo[107849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znshgpxveuzyhfcffbdhonbniqazuzru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400730.260118-616-251305074201856/AnsiballZ_stat.py'
Nov 29 07:18:50 compute-2 sudo[107849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:50 compute-2 python3.9[107851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:50 compute-2 sudo[107849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-2 sudo[107928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfebrlyhafgnkdbzlvfljfyjuhmibdjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400730.260118-616-251305074201856/AnsiballZ_file.py'
Nov 29 07:18:51 compute-2 sudo[107928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:51 compute-2 python3.9[107930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:51 compute-2 sudo[107928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:51.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:52 compute-2 sudo[108080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhvxfmfulwghzzewluydwkajuxcvvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400732.0344126-660-56054582722173/AnsiballZ_timezone.py'
Nov 29 07:18:52 compute-2 sudo[108080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:52 compute-2 python3.9[108082]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:18:52 compute-2 systemd[1]: Starting Time & Date Service...
Nov 29 07:18:52 compute-2 systemd[1]: Started Time & Date Service.
Nov 29 07:18:52 compute-2 sudo[108080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:52.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:53 compute-2 ceph-mon[77138]: pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:53 compute-2 ceph-mon[77138]: pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:53 compute-2 ceph-mon[77138]: pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:53 compute-2 sudo[108238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngidmhwqtwzcpklvzypnvhmzxqahlfin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400733.3250356-688-194897872006464/AnsiballZ_file.py'
Nov 29 07:18:53 compute-2 sudo[108238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:53 compute-2 python3.9[108240]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:53 compute-2 sudo[108238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:54 compute-2 sudo[108390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccfueodorcyktmljnmgnfdyzfjdohzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400734.214907-713-26952597345373/AnsiballZ_stat.py'
Nov 29 07:18:54 compute-2 sudo[108390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:54 compute-2 python3.9[108392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:54 compute-2 sudo[108390]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:18:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:54.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:18:55 compute-2 sudo[108420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:55 compute-2 sudo[108420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:55 compute-2 sudo[108420]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:55 compute-2 sudo[108471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:18:55 compute-2 sudo[108471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:55 compute-2 sudo[108471]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:55 compute-2 ceph-mon[77138]: pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:55 compute-2 sudo[108534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpddpnltymdjaueacpqrlloiuuffcler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400734.214907-713-26952597345373/AnsiballZ_file.py'
Nov 29 07:18:55 compute-2 sudo[108534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:55 compute-2 sudo[108506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:55 compute-2 sudo[108506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:55 compute-2 sudo[108506]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:55 compute-2 sudo[108547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:18:55 compute-2 sudo[108547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:55.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:55 compute-2 python3.9[108544]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:55 compute-2 sudo[108534]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:56 compute-2 podman[108690]: 2025-11-29 07:18:56.108495962 +0000 UTC m=+0.125569892 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:18:56 compute-2 podman[108690]: 2025-11-29 07:18:56.222848143 +0000 UTC m=+0.239922053 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:18:56 compute-2 sudo[108821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhqqytjzyzvyipnayrrifpmrtoniidr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400735.9826472-747-220022514071082/AnsiballZ_stat.py'
Nov 29 07:18:56 compute-2 sudo[108821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:56 compute-2 python3.9[108827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:56 compute-2 sudo[108821]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:56 compute-2 sudo[108861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:56 compute-2 sudo[108861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:56 compute-2 sudo[108861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:56 compute-2 ceph-mon[77138]: pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:56 compute-2 sudo[108917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:56 compute-2 sudo[108917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:56 compute-2 sudo[108917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:56 compute-2 sudo[109012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpjgfgtxnfkecycpxqwzexcsombbhtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400735.9826472-747-220022514071082/AnsiballZ_file.py'
Nov 29 07:18:56 compute-2 sudo[109012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:56.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:56 compute-2 python3.9[109017]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xqa9wuav recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:56 compute-2 sudo[109012]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:57 compute-2 podman[109081]: 2025-11-29 07:18:57.488626314 +0000 UTC m=+0.483936149 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:18:57 compute-2 podman[109081]: 2025-11-29 07:18:57.657518574 +0000 UTC m=+0.652828389 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:18:57 compute-2 sudo[109259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqcnuhrmfomcxfckyyoeyopyorgznxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400737.2638428-784-177882647890088/AnsiballZ_stat.py'
Nov 29 07:18:57 compute-2 sudo[109259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:18:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:57.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:18:57 compute-2 ceph-mon[77138]: pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:18:57 compute-2 python3.9[109265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:57 compute-2 sudo[109259]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 sudo[109385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzwsnwnhcugzpiceafsysfvagfgwqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400737.2638428-784-177882647890088/AnsiballZ_file.py'
Nov 29 07:18:58 compute-2 sudo[109385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:58 compute-2 podman[109345]: 2025-11-29 07:18:58.164388019 +0000 UTC m=+0.072476642 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 07:18:58 compute-2 podman[109345]: 2025-11-29 07:18:58.180668956 +0000 UTC m=+0.088757559 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, release=1793)
Nov 29 07:18:58 compute-2 sudo[108547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 python3.9[109391]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:58 compute-2 sudo[109385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 sudo[109406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:58 compute-2 sudo[109406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:58 compute-2 sudo[109406]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 sudo[109431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:18:58 compute-2 sudo[109431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:58 compute-2 sudo[109431]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 sudo[109480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:58 compute-2 sudo[109480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:58 compute-2 sudo[109480]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:18:58 compute-2 sudo[109505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:18:58 compute-2 sudo[109505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:58.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:18:58 compute-2 sudo[109505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:59 compute-2 sudo[109688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktcwxljexvwpeczwyjlxmvtfiztbedsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400738.7476783-823-175085417954707/AnsiballZ_command.py'
Nov 29 07:18:59 compute-2 sudo[109688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:59 compute-2 python3.9[109690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:18:59 compute-2 sudo[109688]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:18:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:18:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:00 compute-2 sudo[109841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvaolkatpmudzqdckupyithlhasknul ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400739.7203681-846-200952525098077/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:19:00 compute-2 sudo[109841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:00 compute-2 python3[109843]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:19:00 compute-2 sudo[109841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:00.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:01 compute-2 sudo[109994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yunmudrxpbgvnbhzmuqqxbingmsksxpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400740.719232-871-52726846733873/AnsiballZ_stat.py'
Nov 29 07:19:01 compute-2 sudo[109994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:01 compute-2 python3.9[109996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:01 compute-2 sudo[109994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:01 compute-2 sudo[110072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkwiwdqxawtpvddtntnxjjokemlazcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400740.719232-871-52726846733873/AnsiballZ_file.py'
Nov 29 07:19:01 compute-2 sudo[110072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:01 compute-2 python3.9[110074]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:01 compute-2 sudo[110072]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:02 compute-2 sudo[110224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frmfywrpdswvlkkjvrxllisykjzbdwrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400742.1048114-907-170427051245433/AnsiballZ_stat.py'
Nov 29 07:19:02 compute-2 sudo[110224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:02 compute-2 python3.9[110226]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:02 compute-2 sudo[110224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:02.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:02 compute-2 sudo[110302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulcysetqscqyznjwwcamywswgcubmbud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400742.1048114-907-170427051245433/AnsiballZ_file.py'
Nov 29 07:19:02 compute-2 sudo[110302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:03 compute-2 python3.9[110304]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:03 compute-2 sudo[110302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:03.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:04 compute-2 sudo[110455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkjqucnnjtzdmmyqwooodmxjctzpomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400743.6219294-943-240676051846397/AnsiballZ_stat.py'
Nov 29 07:19:04 compute-2 sudo[110455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:04 compute-2 python3.9[110457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:04 compute-2 sudo[110455]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:04 compute-2 sudo[110533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsuhpfpjcyephvfanxxocbbazitxttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400743.6219294-943-240676051846397/AnsiballZ_file.py'
Nov 29 07:19:04 compute-2 sudo[110533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:04 compute-2 python3.9[110535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:04 compute-2 sudo[110533]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:04.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:19:04 compute-2 ceph-mon[77138]: pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:19:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:19:05 compute-2 sudo[110686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonwkuepfohdfkrnrbltfrjlwfimgors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400745.2007632-979-94177023090340/AnsiballZ_stat.py'
Nov 29 07:19:05 compute-2 sudo[110686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:05 compute-2 python3.9[110688]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:05 compute-2 sudo[110686]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:05 compute-2 sudo[110764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuonxnbotgvmlpmsguhuuxpyperimcve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400745.2007632-979-94177023090340/AnsiballZ_file.py'
Nov 29 07:19:05 compute-2 sudo[110764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:06 compute-2 python3.9[110766]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:06 compute-2 sudo[110764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:06 compute-2 ceph-mon[77138]: pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:06 compute-2 ceph-mon[77138]: pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:06 compute-2 ceph-mon[77138]: pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:06.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:06 compute-2 sudo[110917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgzpjwnxofnkqmbliwrlzjmnysxijkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400746.5520222-1015-112816606503975/AnsiballZ_stat.py'
Nov 29 07:19:06 compute-2 sudo[110917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:07 compute-2 python3.9[110919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:07 compute-2 sudo[110917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:07 compute-2 ceph-mon[77138]: pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.306331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747306482, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2731, "num_deletes": 251, "total_data_size": 6144199, "memory_usage": 6231248, "flush_reason": "Manual Compaction"}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747337296, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3969675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7104, "largest_seqno": 9830, "table_properties": {"data_size": 3958555, "index_size": 6910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26743, "raw_average_key_size": 21, "raw_value_size": 3934543, "raw_average_value_size": 3167, "num_data_blocks": 306, "num_entries": 1242, "num_filter_entries": 1242, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400503, "oldest_key_time": 1764400503, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 30993 microseconds, and 10251 cpu microseconds.
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.337380) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3969675 bytes OK
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.337407) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.338796) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.338819) EVENT_LOG_v1 {"time_micros": 1764400747338804, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.338841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6131731, prev total WAL file size 6131731, number of live WAL files 2.
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.340478) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3876KB)], [15(7619KB)]
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747340550, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11772349, "oldest_snapshot_seqno": -1}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3915 keys, 10208771 bytes, temperature: kUnknown
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747420727, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10208771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10175900, "index_size": 22009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94658, "raw_average_key_size": 24, "raw_value_size": 10098537, "raw_average_value_size": 2579, "num_data_blocks": 963, "num_entries": 3915, "num_filter_entries": 3915, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.421056) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10208771 bytes
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.424801) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(5.5) write-amplify(2.6) OK, records in: 4441, records dropped: 526 output_compression: NoCompression
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.424828) EVENT_LOG_v1 {"time_micros": 1764400747424815, "job": 6, "event": "compaction_finished", "compaction_time_micros": 80326, "compaction_time_cpu_micros": 24520, "output_level": 6, "num_output_files": 1, "total_output_size": 10208771, "num_input_records": 4441, "num_output_records": 3915, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747425586, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747426850, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.340401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.426882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.426888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.426890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.426891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:07.426893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:07 compute-2 sudo[110995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrfdhwrvrvhtnatzyfomjonbufngxwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400746.5520222-1015-112816606503975/AnsiballZ_file.py'
Nov 29 07:19:07 compute-2 sudo[110995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:07 compute-2 python3.9[110997]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:07 compute-2 sudo[110995]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:08 compute-2 sudo[111147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usqeztrzxqtljpdzhmptaysnlphggtbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400748.179848-1055-82516784340217/AnsiballZ_command.py'
Nov 29 07:19:08 compute-2 sudo[111147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:08 compute-2 python3.9[111149]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:19:08 compute-2 sudo[111147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:08.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:10.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:11 compute-2 sudo[111304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudrzfafjnwukeybqzedixnlvksfzfpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400750.5928893-1078-59370186330780/AnsiballZ_blockinfile.py'
Nov 29 07:19:11 compute-2 sudo[111304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:11 compute-2 python3.9[111306]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:11 compute-2 sudo[111304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:11.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:11 compute-2 sudo[111456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arvmyuqyjsxazyzmjuityuzmfgnanfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400751.6771367-1105-257853724613588/AnsiballZ_file.py'
Nov 29 07:19:11 compute-2 sudo[111456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:12 compute-2 python3.9[111458]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:12 compute-2 sudo[111456]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:12 compute-2 ceph-mon[77138]: pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:12.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:13 compute-2 ceph-mon[77138]: pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:13 compute-2 ceph-mon[77138]: pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:13 compute-2 sudo[111609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfhvfwfepqeerzhqkcifmfhhnappezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400753.3290973-1105-180554238578773/AnsiballZ_file.py'
Nov 29 07:19:13 compute-2 sudo[111609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:13 compute-2 python3.9[111611]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:13 compute-2 sudo[111609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:14 compute-2 sudo[111761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbxufvtepjbovvyfspcdkshbwmllcji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400754.1943657-1150-24194657640342/AnsiballZ_mount.py'
Nov 29 07:19:14 compute-2 sudo[111761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:14.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:15 compute-2 python3.9[111763]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:19:15 compute-2 sudo[111761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:15 compute-2 sudo[111914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sywruajeuxgurrqekvlbcbzuemqgxnxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400755.2036054-1150-274563031699482/AnsiballZ_mount.py'
Nov 29 07:19:15 compute-2 sudo[111914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:15 compute-2 python3.9[111916]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:19:15 compute-2 sudo[111914]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:15 compute-2 ceph-mon[77138]: pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.357255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756357399, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 315, "num_deletes": 250, "total_data_size": 213772, "memory_usage": 220664, "flush_reason": "Manual Compaction"}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756378850, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 140562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9836, "largest_seqno": 10145, "table_properties": {"data_size": 138546, "index_size": 244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5246, "raw_average_key_size": 18, "raw_value_size": 134612, "raw_average_value_size": 485, "num_data_blocks": 11, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400747, "oldest_key_time": 1764400747, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 21652 microseconds, and 1874 cpu microseconds.
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.378968) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 140562 bytes OK
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.379001) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.382240) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.382271) EVENT_LOG_v1 {"time_micros": 1764400756382263, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.382294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 211533, prev total WAL file size 211533, number of live WAL files 2.
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.382921) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(137KB)], [18(9969KB)]
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756382997, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10349333, "oldest_snapshot_seqno": -1}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3684 keys, 7523846 bytes, temperature: kUnknown
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756514125, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7523846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7496143, "index_size": 17417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 90400, "raw_average_key_size": 24, "raw_value_size": 7426337, "raw_average_value_size": 2015, "num_data_blocks": 761, "num_entries": 3684, "num_filter_entries": 3684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.514427) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7523846 bytes
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.517882) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.9 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(127.2) write-amplify(53.5) OK, records in: 4192, records dropped: 508 output_compression: NoCompression
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.517907) EVENT_LOG_v1 {"time_micros": 1764400756517895, "job": 8, "event": "compaction_finished", "compaction_time_micros": 131202, "compaction_time_cpu_micros": 24495, "output_level": 6, "num_output_files": 1, "total_output_size": 7523846, "num_input_records": 4192, "num_output_records": 3684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756518098, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756520585, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.382829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.520682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.520688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.520689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.520691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:19:16.520692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:19:16 compute-2 sshd-session[104118]: Connection closed by 192.168.122.30 port 46098
Nov 29 07:19:16 compute-2 sshd-session[104115]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:19:16 compute-2 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 07:19:16 compute-2 systemd[1]: session-39.scope: Consumed 30.179s CPU time.
Nov 29 07:19:16 compute-2 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Nov 29 07:19:16 compute-2 systemd-logind[787]: Removed session 39.
Nov 29 07:19:16 compute-2 sudo[111941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:16 compute-2 sudo[111941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:16 compute-2 sudo[111941]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:16 compute-2 sudo[111966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:16 compute-2 sudo[111966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:16 compute-2 sudo[111966]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:16.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:17 compute-2 ceph-mon[77138]: pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:18 compute-2 sudo[111992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:18 compute-2 sudo[111992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:18 compute-2 sudo[111992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:18 compute-2 sudo[112017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:19:18 compute-2 sudo[112017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:18 compute-2 sudo[112017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:18.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:19:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:19:19 compute-2 ceph-mon[77138]: pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:20.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:21 compute-2 ceph-mon[77138]: pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:22 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:19:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:22.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:23 compute-2 ceph-mon[77138]: pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:24.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:25.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:26 compute-2 ceph-mon[77138]: pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:26 compute-2 sshd-session[112050]: Accepted publickey for zuul from 192.168.122.30 port 44638 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:19:26 compute-2 systemd-logind[787]: New session 40 of user zuul.
Nov 29 07:19:26 compute-2 systemd[1]: Started Session 40 of User zuul.
Nov 29 07:19:26 compute-2 sshd-session[112050]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:19:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:26.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:27 compute-2 sudo[112204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkolrertdndrrhlxkdxtqfnhuwcvtsms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400766.550989-25-63199657371035/AnsiballZ_tempfile.py'
Nov 29 07:19:27 compute-2 sudo[112204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:27 compute-2 python3.9[112206]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:19:27 compute-2 sudo[112204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:27.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:28 compute-2 sudo[112356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqhmkhsqabaycdtxlokwgpdsvjupuwfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400767.9520442-61-264159875603999/AnsiballZ_stat.py'
Nov 29 07:19:28 compute-2 sudo[112356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:28 compute-2 python3.9[112358]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:19:28 compute-2 sudo[112356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:28.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:28 compute-2 ceph-mon[77138]: pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:29 compute-2 sudo[112511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sntoratrpeeidxtcwpuxieackjnfistq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400768.8739042-85-279564730598485/AnsiballZ_slurp.py'
Nov 29 07:19:29 compute-2 sudo[112511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:29 compute-2 python3.9[112513]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 07:19:29 compute-2 sudo[112511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:29.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:29 compute-2 ceph-mon[77138]: pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:29 compute-2 sudo[112663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlaxujjgnfakgbjskmxqrysvfuqwpzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400769.739763-109-110761589196264/AnsiballZ_stat.py'
Nov 29 07:19:29 compute-2 sudo[112663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:30 compute-2 python3.9[112665]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.nlyfxcvq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:30 compute-2 sudo[112663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:30 compute-2 sudo[112788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfmmaggqhoplbwwvplnidzxkwuoxdpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400769.739763-109-110761589196264/AnsiballZ_copy.py'
Nov 29 07:19:30 compute-2 sudo[112788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:30 compute-2 python3.9[112790]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.nlyfxcvq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400769.739763-109-110761589196264/.source.nlyfxcvq _original_basename=.swp7ic2k follow=False checksum=425f0dec1542497d25012cf56c23eb3f3f1d2c45 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:30 compute-2 sudo[112788]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:31.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:31 compute-2 sudo[112941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgrhfpyvpmzefmjqdozehnpfqcsignw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400771.1940663-154-147580080695103/AnsiballZ_setup.py'
Nov 29 07:19:31 compute-2 sudo[112941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:32 compute-2 python3.9[112943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:32 compute-2 sudo[112941]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:32.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:32 compute-2 sudo[113094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tncgzuwylrgerchbetotossmkvlqdhnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400772.5431561-179-88343834799497/AnsiballZ_blockinfile.py'
Nov 29 07:19:32 compute-2 sudo[113094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:33 compute-2 python3.9[113096]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=
                                              create=True mode=0644 path=/tmp/ansible.nlyfxcvq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:33 compute-2 sudo[113094]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:33.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:33 compute-2 sudo[113246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhdrsrqulxvsngjiobwgsgofvpwjlffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400773.4214892-203-134488247590200/AnsiballZ_command.py'
Nov 29 07:19:33 compute-2 sudo[113246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:34 compute-2 python3.9[113248]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.nlyfxcvq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:19:34 compute-2 sudo[113246]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:34 compute-2 sudo[113400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvadnbewnwhlexaezduxoyqggrrrgnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400774.39581-228-66728699425546/AnsiballZ_file.py'
Nov 29 07:19:34 compute-2 sudo[113400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:35 compute-2 python3.9[113403]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.nlyfxcvq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:35 compute-2 sudo[113400]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:35 compute-2 ceph-mon[77138]: pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:35 compute-2 sshd-session[112053]: Connection closed by 192.168.122.30 port 44638
Nov 29 07:19:35 compute-2 sshd-session[112050]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:19:35 compute-2 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 07:19:35 compute-2 systemd[1]: session-40.scope: Consumed 4.997s CPU time.
Nov 29 07:19:35 compute-2 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Nov 29 07:19:35 compute-2 systemd-logind[787]: Removed session 40.
Nov 29 07:19:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:19:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:19:36 compute-2 ceph-mon[77138]: pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:36 compute-2 ceph-mon[77138]: pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:36 compute-2 sudo[113429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:36 compute-2 sudo[113429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:36 compute-2 sudo[113429]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:36 compute-2 sudo[113454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:36 compute-2 sudo[113454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:36 compute-2 sudo[113454]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:39.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:40 compute-2 ceph-mon[77138]: pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:40 compute-2 sshd-session[113481]: Accepted publickey for zuul from 192.168.122.30 port 57842 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:19:40 compute-2 systemd-logind[787]: New session 41 of user zuul.
Nov 29 07:19:40 compute-2 systemd[1]: Started Session 41 of User zuul.
Nov 29 07:19:40 compute-2 sshd-session[113481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:19:41 compute-2 ceph-mon[77138]: pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:41 compute-2 ceph-mon[77138]: pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:42 compute-2 python3.9[113635]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:43 compute-2 sudo[113790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbojqjkgsokdngjfocgjfxacuuxddlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400782.4618692-63-7291401369331/AnsiballZ_systemd.py'
Nov 29 07:19:43 compute-2 sudo[113790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:43 compute-2 python3.9[113792]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:19:43 compute-2 sudo[113790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:44 compute-2 sudo[113944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzrslfnuvnrcuucuhsmtvxjcfcivcyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400783.7494833-87-196905524566267/AnsiballZ_systemd.py'
Nov 29 07:19:44 compute-2 sudo[113944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:44 compute-2 python3.9[113946]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:19:44 compute-2 sudo[113944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:44 compute-2 ceph-mon[77138]: pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:45 compute-2 sudo[114098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epwmqrayqhvdkmktcjsnhwelekzhncjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400784.7473285-114-259224273859180/AnsiballZ_command.py'
Nov 29 07:19:45 compute-2 sudo[114098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:45 compute-2 python3.9[114100]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:19:45 compute-2 sudo[114098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:45.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:46 compute-2 ceph-mon[77138]: pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:46 compute-2 sudo[114251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqsiccjxulxeomlnavhhrwpupvjifnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400785.8023772-138-56563954376634/AnsiballZ_stat.py'
Nov 29 07:19:46 compute-2 sudo[114251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:46 compute-2 python3.9[114253]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:19:46 compute-2 sudo[114251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:47 compute-2 sudo[114404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizxtgyejsxhqjdelftefgpikcgeuwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400786.7244098-165-84709984746713/AnsiballZ_file.py'
Nov 29 07:19:47 compute-2 sudo[114404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:47 compute-2 ceph-mon[77138]: pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:47 compute-2 python3.9[114406]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:47 compute-2 sudo[114404]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:47 compute-2 sshd-session[113485]: Connection closed by 192.168.122.30 port 57842
Nov 29 07:19:47 compute-2 sshd-session[113481]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:19:47 compute-2 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 07:19:47 compute-2 systemd[1]: session-41.scope: Consumed 3.765s CPU time.
Nov 29 07:19:47 compute-2 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Nov 29 07:19:47 compute-2 systemd-logind[787]: Removed session 41.
Nov 29 07:19:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:48.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:49 compute-2 ceph-mon[77138]: pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:52 compute-2 ceph-mon[77138]: pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:53 compute-2 sshd-session[114434]: Accepted publickey for zuul from 192.168.122.30 port 37858 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:19:53 compute-2 systemd-logind[787]: New session 42 of user zuul.
Nov 29 07:19:53 compute-2 systemd[1]: Started Session 42 of User zuul.
Nov 29 07:19:53 compute-2 sshd-session[114434]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:19:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:53 compute-2 ceph-mon[77138]: pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:54 compute-2 python3.9[114588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:55 compute-2 sudo[114743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuedmmchnswdxwtjiuojgohfljszlzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400794.8302221-69-266386231564301/AnsiballZ_setup.py'
Nov 29 07:19:55 compute-2 sudo[114743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:55 compute-2 python3.9[114745]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:19:55 compute-2 sudo[114743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:56 compute-2 sudo[114827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxcdjkwtpmldievnwktgpxxceceijwdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400794.8302221-69-266386231564301/AnsiballZ_dnf.py'
Nov 29 07:19:56 compute-2 sudo[114827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:56 compute-2 python3.9[114829]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:19:56 compute-2 ceph-mon[77138]: pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:56.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:57 compute-2 sudo[114832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:57 compute-2 sudo[114832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:57 compute-2 sudo[114832]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:57 compute-2 sudo[114857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:57 compute-2 sudo[114857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:57 compute-2 sudo[114857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:57 compute-2 sudo[114827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:19:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:19:58 compute-2 ceph-mon[77138]: pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:19:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:19:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:58.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:19:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:19:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:19:59 compute-2 python3.9[115032]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:20:00 compute-2 ceph-mon[77138]: pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:01 compute-2 python3.9[115184]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:20:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:20:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:20:02 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:20:02 compute-2 ceph-mon[77138]: pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:02 compute-2 python3.9[115334]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:02 compute-2 python3.9[115484]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:03 compute-2 ceph-mon[77138]: pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:03 compute-2 sshd-session[114438]: Connection closed by 192.168.122.30 port 37858
Nov 29 07:20:03 compute-2 sshd-session[114434]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:03 compute-2 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 07:20:03 compute-2 systemd[1]: session-42.scope: Consumed 6.235s CPU time.
Nov 29 07:20:03 compute-2 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Nov 29 07:20:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:03 compute-2 systemd-logind[787]: Removed session 42.
Nov 29 07:20:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:05.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:05 compute-2 ceph-mon[77138]: pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:07.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:08.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:09 compute-2 ceph-mon[77138]: pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:09 compute-2 sshd-session[115513]: Accepted publickey for zuul from 192.168.122.30 port 35738 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:20:09 compute-2 systemd-logind[787]: New session 43 of user zuul.
Nov 29 07:20:09 compute-2 systemd[1]: Started Session 43 of User zuul.
Nov 29 07:20:09 compute-2 sshd-session[115513]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:20:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:10 compute-2 python3.9[115666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:20:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:12 compute-2 sudo[115821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gerijanfpzljupbvevlyznebfzanttvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400811.7956483-115-99170986373241/AnsiballZ_file.py'
Nov 29 07:20:12 compute-2 sudo[115821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:12 compute-2 python3.9[115823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:12 compute-2 sudo[115821]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:12.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:12 compute-2 sudo[115973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuyoadhjtjennlfsaelfjgomwwnkgxki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400812.6743293-115-274876555866656/AnsiballZ_file.py'
Nov 29 07:20:12 compute-2 sudo[115973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:13 compute-2 python3.9[115975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:13 compute-2 sudo[115973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:13 compute-2 ceph-mon[77138]: pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:13 compute-2 sudo[116126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haehcxqsjclpustssjvjjpcxstetvwsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400813.3293464-158-143937460422132/AnsiballZ_stat.py'
Nov 29 07:20:13 compute-2 sudo[116126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:14 compute-2 python3.9[116128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:14 compute-2 sudo[116126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:14 compute-2 sshd-session[116129]: Invalid user sol from 45.148.10.240 port 35436
Nov 29 07:20:14 compute-2 sshd-session[116129]: Connection closed by invalid user sol 45.148.10.240 port 35436 [preauth]
Nov 29 07:20:14 compute-2 sudo[116251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiqqbhgnkzoyhswhhtflrhywvbssvvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400813.3293464-158-143937460422132/AnsiballZ_copy.py'
Nov 29 07:20:14 compute-2 sudo[116251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:14.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:14 compute-2 python3.9[116253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400813.3293464-158-143937460422132/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=bb8b6703070ad397c78cfbb8db08c2e28738037a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:15 compute-2 sudo[116251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:15 compute-2 ceph-mon[77138]: pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:15 compute-2 ceph-mon[77138]: pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:15 compute-2 sudo[116404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mksacnfcwwdyjqtwxstxipsdlvkehmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400815.1389117-158-81353110781704/AnsiballZ_stat.py'
Nov 29 07:20:15 compute-2 sudo[116404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:15 compute-2 python3.9[116406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:15 compute-2 sudo[116404]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:16 compute-2 sudo[116527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvttqumigpdwkshjvulwzusmvkmfigrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400815.1389117-158-81353110781704/AnsiballZ_copy.py'
Nov 29 07:20:16 compute-2 sudo[116527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:16 compute-2 python3.9[116529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400815.1389117-158-81353110781704/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=cd8ab8ed4fdf501d1b4ce95ba4f398e005279fa9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:16 compute-2 sudo[116527]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:16 compute-2 ceph-mon[77138]: pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:16 compute-2 sudo[116679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iontqxlkoenpwxbrtjkhfbxuzrxwzfdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400816.3975189-158-263024164139820/AnsiballZ_stat.py'
Nov 29 07:20:16 compute-2 sudo[116679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:16 compute-2 python3.9[116681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:16 compute-2 sudo[116679]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:20:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:16.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:20:17 compute-2 sudo[116776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:17 compute-2 sudo[116776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:17 compute-2 sudo[116776]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:17 compute-2 sudo[116828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geubibesvshkqvstmiukxyyscvybsmmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400816.3975189-158-263024164139820/AnsiballZ_copy.py'
Nov 29 07:20:17 compute-2 sudo[116828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:17 compute-2 sudo[116829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:17 compute-2 sudo[116829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:17 compute-2 sudo[116829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:17 compute-2 python3.9[116838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400816.3975189-158-263024164139820/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=5eb18c56100c7bc77ee64426a64e7637b23bdc6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:17 compute-2 sudo[116828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:17 compute-2 ceph-mon[77138]: pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:18 compute-2 sudo[117005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqjcbsqnohsqetqeuczspxbshwhryds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400817.6881711-287-38138809532020/AnsiballZ_file.py'
Nov 29 07:20:18 compute-2 sudo[117005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:18 compute-2 python3.9[117007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:18 compute-2 sudo[117005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:18 compute-2 sudo[117120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:18 compute-2 sudo[117120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:18 compute-2 sudo[117120]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:18 compute-2 sudo[117185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saclnjdppexhrymgfgngavxgxoyekwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400818.348275-287-119879133590249/AnsiballZ_file.py'
Nov 29 07:20:18 compute-2 sudo[117185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:18 compute-2 sudo[117182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:18 compute-2 sudo[117182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:18 compute-2 sudo[117182]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:18 compute-2 sudo[117210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:18 compute-2 sudo[117210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:18 compute-2 sudo[117210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:18 compute-2 sudo[117235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:20:18 compute-2 sudo[117235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:19 compute-2 python3.9[117201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:19 compute-2 sudo[117185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-2 podman[117433]: 2025-11-29 07:20:19.456215043 +0000 UTC m=+0.066220888 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:20:19 compute-2 sudo[117503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhfojnvkuqtvupjcqkpwlkrwlllnvbmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400819.2469332-331-140707322306474/AnsiballZ_stat.py'
Nov 29 07:20:19 compute-2 sudo[117503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:19 compute-2 podman[117433]: 2025-11-29 07:20:19.580702101 +0000 UTC m=+0.190707936 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:20:19 compute-2 python3.9[117505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:19 compute-2 sudo[117503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:20 compute-2 sudo[117728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocbhukigdlwapdkugxjoxkhichcsrekk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400819.2469332-331-140707322306474/AnsiballZ_copy.py'
Nov 29 07:20:20 compute-2 sudo[117728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:20 compute-2 ceph-mon[77138]: pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:20 compute-2 python3.9[117734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400819.2469332-331-140707322306474/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=536ab7959ce43a2b8c4b802f53192a5bf34dae55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:20 compute-2 podman[117761]: 2025-11-29 07:20:20.241964653 +0000 UTC m=+0.082715941 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:20:20 compute-2 sudo[117728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:20 compute-2 podman[117761]: 2025-11-29 07:20:20.278848168 +0000 UTC m=+0.119599456 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:20:20 compute-2 podman[117901]: 2025-11-29 07:20:20.570373178 +0000 UTC m=+0.060223607 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 07:20:20 compute-2 podman[117901]: 2025-11-29 07:20:20.585665791 +0000 UTC m=+0.075516180 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64)
Nov 29 07:20:20 compute-2 sudo[117235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:20 compute-2 sudo[118009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywxgflawyflxqvpsjsewqezemlscrlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400820.4258657-331-80510907936888/AnsiballZ_stat.py'
Nov 29 07:20:20 compute-2 sudo[118009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:20 compute-2 python3.9[118011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:20:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:20:20 compute-2 sudo[118009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 sudo[118036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:21 compute-2 sudo[118036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:21 compute-2 sudo[118036]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 sudo[118085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:21 compute-2 sudo[118085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:21 compute-2 sudo[118085]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 sudo[118133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:21 compute-2 sudo[118133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:21 compute-2 sudo[118133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 sudo[118158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:20:21 compute-2 sudo[118158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:21 compute-2 sudo[118235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmgetoeqeqdwewmzwcowqgynfuhummjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400820.4258657-331-80510907936888/AnsiballZ_copy.py'
Nov 29 07:20:21 compute-2 sudo[118235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:21 compute-2 python3.9[118242]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400820.4258657-331-80510907936888/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:21 compute-2 sudo[118235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 sudo[118158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:20:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:20:22 compute-2 ceph-mon[77138]: pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:22 compute-2 sudo[118416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzujnuqkrngtntzmsbaceevkfpibkzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400821.847654-331-74872480602811/AnsiballZ_stat.py'
Nov 29 07:20:22 compute-2 sudo[118416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:22 compute-2 python3.9[118418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:22 compute-2 sudo[118416]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:22 compute-2 sudo[118539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowvvbstggpaffsvrjgzuzrcfkljthjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400821.847654-331-74872480602811/AnsiballZ_copy.py'
Nov 29 07:20:22 compute-2 sudo[118539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:22 compute-2 python3.9[118541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400821.847654-331-74872480602811/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=4788ae34b554c55af4433cdd645eda822b542751 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:22 compute-2 sudo[118539]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:20:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:23 compute-2 ceph-mon[77138]: pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:23 compute-2 sudo[118692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kehlzgyaqbbpvzkycjsiskeqcuoxqcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400823.2146533-459-260808341510631/AnsiballZ_file.py'
Nov 29 07:20:23 compute-2 sudo[118692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:23 compute-2 python3.9[118694]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:23.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:23 compute-2 sudo[118692]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:24 compute-2 sudo[118844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzoevppocnvfibqauvkobgqcevnluss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400823.9706192-459-211710607696645/AnsiballZ_file.py'
Nov 29 07:20:24 compute-2 sudo[118844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:24 compute-2 python3.9[118846]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:24 compute-2 sudo[118844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:24 compute-2 sudo[118996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tldttnevehwcbmclzxbchzutfgqzkurw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400824.6489384-507-95049653027477/AnsiballZ_stat.py'
Nov 29 07:20:24 compute-2 sudo[118996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:25 compute-2 python3.9[118998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:25 compute-2 sudo[118996]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:25 compute-2 sudo[119120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dffolfhmauekwmhrrlmyladawiixdspi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400824.6489384-507-95049653027477/AnsiballZ_copy.py'
Nov 29 07:20:25 compute-2 sudo[119120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:25 compute-2 ceph-mon[77138]: pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:25 compute-2 python3.9[119122]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400824.6489384-507-95049653027477/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=110b4334b207e87be0bb32a47f9c85a46c489956 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:25 compute-2 sudo[119120]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:26 compute-2 sudo[119272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egazqmosdxvthvdwnvoamgvljjesznmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400825.9821088-507-205330598513459/AnsiballZ_stat.py'
Nov 29 07:20:26 compute-2 sudo[119272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:26 compute-2 python3.9[119274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:26 compute-2 sudo[119272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:26 compute-2 sudo[119395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffropzoiikfnacajrmraybqnhwyumsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400825.9821088-507-205330598513459/AnsiballZ_copy.py'
Nov 29 07:20:26 compute-2 sudo[119395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:27 compute-2 python3.9[119397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400825.9821088-507-205330598513459/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:27 compute-2 sudo[119395]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:27 compute-2 sudo[119548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xadmoowuevwziiiszyqqeixgocsbfglx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400827.2607615-507-205670356625624/AnsiballZ_stat.py'
Nov 29 07:20:27 compute-2 sudo[119548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:27 compute-2 ceph-mon[77138]: pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:27 compute-2 python3.9[119550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:27 compute-2 sudo[119548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:28 compute-2 sudo[119671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlfxrkjntdyluuucyxstwzfkuzxqsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400827.2607615-507-205670356625624/AnsiballZ_copy.py'
Nov 29 07:20:28 compute-2 sudo[119671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:28 compute-2 python3.9[119673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400827.2607615-507-205670356625624/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=e33f38ea2c28f84a1f25f733d99d357e1cd91675 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:28 compute-2 sudo[119671]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:29 compute-2 sudo[119699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:29 compute-2 sudo[119699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:29 compute-2 sudo[119699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:29 compute-2 sudo[119724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:20:29 compute-2 sudo[119724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:29 compute-2 sudo[119724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:29 compute-2 ceph-mon[77138]: pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:20:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:29 compute-2 sudo[119874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxvwygxjafvwcwmgzyfozjdgirkhwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400829.6532354-697-9349362157896/AnsiballZ_file.py'
Nov 29 07:20:29 compute-2 sudo[119874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:30 compute-2 python3.9[119876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:30 compute-2 sudo[119874]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:30 compute-2 sudo[120026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtlwhubucxadsrgeiqseqdpjmhvwuyoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400830.4264932-730-272396004031314/AnsiballZ_stat.py'
Nov 29 07:20:30 compute-2 sudo[120026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:30 compute-2 python3.9[120028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:31 compute-2 sudo[120026]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:31 compute-2 sudo[120150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrebdoxarjzxxrjddhhwxmgwzcawlki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400830.4264932-730-272396004031314/AnsiballZ_copy.py'
Nov 29 07:20:31 compute-2 sudo[120150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:31 compute-2 python3.9[120152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400830.4264932-730-272396004031314/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:31 compute-2 sudo[120150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:31 compute-2 ceph-mon[77138]: pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:32 compute-2 sudo[120302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkvmdwnbirxewxafvxsaocjvlbdcmhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400832.0087984-779-213937711852079/AnsiballZ_file.py'
Nov 29 07:20:32 compute-2 sudo[120302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:32 compute-2 python3.9[120304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:32 compute-2 sudo[120302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:33 compute-2 sudo[120455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqkdhduxlbwqnpcqiiircpczurqtyqpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400832.7611036-809-222690162618441/AnsiballZ_stat.py'
Nov 29 07:20:33 compute-2 sudo[120455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:33 compute-2 python3.9[120457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:33 compute-2 sudo[120455]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:33 compute-2 sudo[120578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mopjfyvuqnwcjytzvmofpxihzeiifmvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400832.7611036-809-222690162618441/AnsiballZ_copy.py'
Nov 29 07:20:33 compute-2 sudo[120578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:33 compute-2 python3.9[120580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400832.7611036-809-222690162618441/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:33 compute-2 sudo[120578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:34 compute-2 ceph-mon[77138]: pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:34 compute-2 sudo[120730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmsegjxgtibdqadlsgnujaezwptmjam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400834.0369482-858-31931516690514/AnsiballZ_file.py'
Nov 29 07:20:34 compute-2 sudo[120730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:34 compute-2 python3.9[120732]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:34 compute-2 sudo[120730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:34.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:35 compute-2 sudo[120883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqvljpkqwnyoazwlnavxvjcnhnlpbqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400834.8663101-878-44428598625443/AnsiballZ_stat.py'
Nov 29 07:20:35 compute-2 sudo[120883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:35 compute-2 python3.9[120885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:35 compute-2 sudo[120883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:35 compute-2 sudo[121006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpinppijbhjxqrtypusdjxkbelyqvyma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400834.8663101-878-44428598625443/AnsiballZ_copy.py'
Nov 29 07:20:35 compute-2 sudo[121006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:35 compute-2 python3.9[121008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400834.8663101-878-44428598625443/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:35 compute-2 sudo[121006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:36 compute-2 ceph-mon[77138]: pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:36 compute-2 sudo[121158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgwvkfehhheojhwpdkpepwrrdujpups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400836.159616-924-79970351034814/AnsiballZ_file.py'
Nov 29 07:20:36 compute-2 sudo[121158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:36 compute-2 python3.9[121160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:36 compute-2 sudo[121158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:37 compute-2 sudo[121311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpkpuhpmuygkydytpznpqchawfqvccvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400836.803267-947-213819610422862/AnsiballZ_stat.py'
Nov 29 07:20:37 compute-2 sudo[121311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:37 compute-2 python3.9[121313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:37 compute-2 ceph-mon[77138]: pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:37 compute-2 sudo[121311]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:37 compute-2 sudo[121337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:37 compute-2 sudo[121337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:37 compute-2 sudo[121337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:37 compute-2 sudo[121366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:37 compute-2 sudo[121366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:37 compute-2 sudo[121366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:37 compute-2 sudo[121484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkessgvxmocyvmcwonuesiyrzlbnunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400836.803267-947-213819610422862/AnsiballZ_copy.py'
Nov 29 07:20:37 compute-2 sudo[121484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:37 compute-2 python3.9[121486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400836.803267-947-213819610422862/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:37.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:37 compute-2 sudo[121484]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:38 compute-2 sudo[121636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhuxzslbknggapjimhxobhpokacteesz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400838.0724864-995-246493449875437/AnsiballZ_file.py'
Nov 29 07:20:38 compute-2 sudo[121636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:38 compute-2 python3.9[121638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:38 compute-2 sudo[121636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:39 compute-2 sudo[121789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxsmspmairxiepdhlmgyuojxztyuwuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400838.832552-1023-189070647576555/AnsiballZ_stat.py'
Nov 29 07:20:39 compute-2 sudo[121789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:39 compute-2 python3.9[121791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:39 compute-2 sudo[121789]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:39 compute-2 sudo[121912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulxalgwfiimvfgcbzhwrmqyvnabcnpya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400838.832552-1023-189070647576555/AnsiballZ_copy.py'
Nov 29 07:20:39 compute-2 sudo[121912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:40 compute-2 python3.9[121914]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400838.832552-1023-189070647576555/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:40 compute-2 sudo[121912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:40 compute-2 ceph-mon[77138]: pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:40 compute-2 sudo[122064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfuayehujurxvlyiumopzjidfppuxqol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400840.3025677-1072-158241858768264/AnsiballZ_file.py'
Nov 29 07:20:40 compute-2 sudo[122064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:40 compute-2 python3.9[122066]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:40 compute-2 sudo[122064]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:41 compute-2 sudo[122217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilsbqxoghejaoknvmbdruzeuavrprtom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400841.170761-1091-227307122729320/AnsiballZ_stat.py'
Nov 29 07:20:41 compute-2 sudo[122217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:41 compute-2 ceph-mon[77138]: pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:41 compute-2 python3.9[122219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:41 compute-2 sudo[122217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:41.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:42 compute-2 sudo[122340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-visiabtmzkjvgandhxednahwnhtkvatu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400841.170761-1091-227307122729320/AnsiballZ_copy.py'
Nov 29 07:20:42 compute-2 sudo[122340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:42 compute-2 python3.9[122342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400841.170761-1091-227307122729320/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:42 compute-2 sudo[122340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:43 compute-2 sshd-session[115516]: Connection closed by 192.168.122.30 port 35738
Nov 29 07:20:43 compute-2 sshd-session[115513]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:43 compute-2 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 07:20:43 compute-2 systemd[1]: session-43.scope: Consumed 24.818s CPU time.
Nov 29 07:20:43 compute-2 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Nov 29 07:20:43 compute-2 systemd-logind[787]: Removed session 43.
Nov 29 07:20:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:45.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:46 compute-2 ceph-mon[77138]: pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:46.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:47 compute-2 ceph-mon[77138]: pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:47 compute-2 ceph-mon[77138]: pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:49 compute-2 sshd-session[122370]: Accepted publickey for zuul from 192.168.122.30 port 40964 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:20:49 compute-2 systemd-logind[787]: New session 44 of user zuul.
Nov 29 07:20:49 compute-2 systemd[1]: Started Session 44 of User zuul.
Nov 29 07:20:49 compute-2 sshd-session[122370]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:20:49 compute-2 sudo[122524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcztegocbxepmdjcwedkzkkkqjiqdtyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400849.1658227-33-271521906913383/AnsiballZ_file.py'
Nov 29 07:20:49 compute-2 sudo[122524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:49 compute-2 python3.9[122526]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:49 compute-2 sudo[122524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:50 compute-2 ceph-mon[77138]: pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:50 compute-2 sudo[122676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnyvflbzzfltzkqaodwysamxqbaytfsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400850.0901113-69-54366041841741/AnsiballZ_stat.py'
Nov 29 07:20:50 compute-2 sudo[122676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:50 compute-2 python3.9[122678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:50 compute-2 sudo[122676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:50.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:51 compute-2 sudo[122800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxnifrikeyrosicoshwzxfxsipyquxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400850.0901113-69-54366041841741/AnsiballZ_copy.py'
Nov 29 07:20:51 compute-2 sudo[122800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:51 compute-2 python3.9[122802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400850.0901113-69-54366041841741/.source.conf _original_basename=ceph.conf follow=False checksum=c098df1eed8765439af66fe3d0de96ae0e466ab0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:51 compute-2 sudo[122800]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:51.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:51 compute-2 sudo[122952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylnznakdoswxcufkevsdsadnwxgcpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400851.5798495-69-252530652551931/AnsiballZ_stat.py'
Nov 29 07:20:51 compute-2 sudo[122952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:52 compute-2 ceph-mon[77138]: pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:52 compute-2 python3.9[122954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:52 compute-2 sudo[122952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:52 compute-2 sudo[123075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfsovsbmjzkfqrzzryupdbsxvhuykmab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400851.5798495-69-252530652551931/AnsiballZ_copy.py'
Nov 29 07:20:52 compute-2 sudo[123075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:52 compute-2 python3.9[123077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400851.5798495-69-252530652551931/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=b1c127dd74be8d747654d0d3f00b29a32faa6866 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:52 compute-2 sudo[123075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:53 compute-2 sshd-session[122374]: Connection closed by 192.168.122.30 port 40964
Nov 29 07:20:53 compute-2 sshd-session[122370]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:53 compute-2 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 07:20:53 compute-2 systemd[1]: session-44.scope: Consumed 2.975s CPU time.
Nov 29 07:20:53 compute-2 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Nov 29 07:20:53 compute-2 systemd-logind[787]: Removed session 44.
Nov 29 07:20:53 compute-2 ceph-mon[77138]: pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:53.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:57 compute-2 ceph-mon[77138]: pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:57 compute-2 sudo[123106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:57 compute-2 sudo[123106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:57 compute-2 sudo[123106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:57 compute-2 sudo[123131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:57 compute-2 sudo[123131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:57 compute-2 sudo[123131]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:20:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:20:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:20:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:59 compute-2 ceph-mon[77138]: pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:20:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:20:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:20:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:59.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:20:59 compute-2 sshd-session[123157]: Accepted publickey for zuul from 192.168.122.30 port 45764 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:20:59 compute-2 systemd-logind[787]: New session 45 of user zuul.
Nov 29 07:20:59 compute-2 systemd[1]: Started Session 45 of User zuul.
Nov 29 07:20:59 compute-2 sshd-session[123157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:21:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:01.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:01 compute-2 python3.9[123310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:21:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:01.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:21:01 compute-2 sudo[123465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyobmcuzwynminagrpsjkwphyvioenlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400861.5312808-69-90070113718287/AnsiballZ_file.py'
Nov 29 07:21:01 compute-2 sudo[123465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:02 compute-2 python3.9[123467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:21:02 compute-2 sudo[123465]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:02 compute-2 ceph-mon[77138]: pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:02 compute-2 sudo[123617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjnqgbybgpupgwgllggjlucgqmbokfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400862.3145823-69-263375064623982/AnsiballZ_file.py'
Nov 29 07:21:02 compute-2 sudo[123617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:02 compute-2 python3.9[123619]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:21:02 compute-2 sudo[123617]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:03 compute-2 python3.9[123770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:03.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:03 compute-2 ceph-mon[77138]: pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:03 compute-2 ceph-mon[77138]: pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:04 compute-2 sudo[123920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpyxfwmkagzfvismzdwpsatknkstlwln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400863.9638221-139-93541042827640/AnsiballZ_seboolean.py'
Nov 29 07:21:04 compute-2 sudo[123920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:04 compute-2 python3.9[123922]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 07:21:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:05.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:05 compute-2 sudo[123920]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:07.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:08 compute-2 ceph-mon[77138]: pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:09 compute-2 sudo[124079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlgbrhyarcorvcwiusskyrxbweemexx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400869.0507407-168-10884845902888/AnsiballZ_setup.py'
Nov 29 07:21:09 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 07:21:09 compute-2 sudo[124079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:09 compute-2 python3.9[124081]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:21:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:09.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:09 compute-2 sudo[124079]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:10 compute-2 sudo[124163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovyyjtvbvhysqphizrbwwcdkxpdnzuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400869.0507407-168-10884845902888/AnsiballZ_dnf.py'
Nov 29 07:21:10 compute-2 sudo[124163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:10 compute-2 python3.9[124165]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:21:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:11.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:12 compute-2 sudo[124163]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:12 compute-2 ceph-mon[77138]: pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:12 compute-2 ceph-mon[77138]: pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:13 compute-2 sudo[124318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbapiidrxkwomcpcbkiowtsxjuqtgnfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400872.8052375-204-111875395240692/AnsiballZ_systemd.py'
Nov 29 07:21:13 compute-2 sudo[124318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:13 compute-2 python3.9[124320]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:21:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:13 compute-2 sudo[124318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:14 compute-2 sudo[124473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkfbfurplbtxlsuydevmvgcxxqbcnnuc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400874.0649033-229-22856005627323/AnsiballZ_edpm_nftables_snippet.py'
Nov 29 07:21:14 compute-2 sudo[124473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:14 compute-2 python3[124475]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 07:21:14 compute-2 sudo[124473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:15 compute-2 sudo[124626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdmbokqgyozezankutjcvzsxwunueeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400875.242945-255-155578955556887/AnsiballZ_file.py'
Nov 29 07:21:15 compute-2 sudo[124626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:15 compute-2 python3.9[124628]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:15 compute-2 sudo[124626]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:16 compute-2 ceph-mon[77138]: pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:16 compute-2 ceph-mon[77138]: pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:16 compute-2 sudo[124778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavjmagysxhwgrkjodgigcsbhxseryje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400876.27241-279-240043365497588/AnsiballZ_stat.py'
Nov 29 07:21:16 compute-2 sudo[124778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:16 compute-2 python3.9[124780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:16 compute-2 sudo[124778]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:17 compute-2 sudo[124857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpientofnycwodnoeuikcmblazjymsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400876.27241-279-240043365497588/AnsiballZ_file.py'
Nov 29 07:21:17 compute-2 sudo[124857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:17 compute-2 python3.9[124859]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:17 compute-2 sudo[124857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-2 sudo[124884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:17 compute-2 sudo[124884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:17 compute-2 sudo[124884]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-2 sudo[124912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:17 compute-2 sudo[124912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:17 compute-2 sudo[124912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:17 compute-2 sudo[125059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfqtzugzyozjipbnmsdzdfdljxptzbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400877.6995153-316-39809965283472/AnsiballZ_stat.py'
Nov 29 07:21:17 compute-2 sudo[125059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:18 compute-2 python3.9[125061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:18 compute-2 sudo[125059]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:18 compute-2 sudo[125137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzegprkzybtltjqladiwjzjnmutahds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400877.6995153-316-39809965283472/AnsiballZ_file.py'
Nov 29 07:21:18 compute-2 sudo[125137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:18 compute-2 python3.9[125139]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o_er3bks recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:18 compute-2 sudo[125137]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:19 compute-2 ceph-mon[77138]: pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:19 compute-2 ceph-mon[77138]: pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:19 compute-2 sudo[125290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhomekajhdahbbgspwinwnwejnpzmkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400878.8925123-352-171635153099245/AnsiballZ_stat.py'
Nov 29 07:21:19 compute-2 sudo[125290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:19 compute-2 python3.9[125292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:19 compute-2 sudo[125290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:19 compute-2 sudo[125368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcckfobeuohpkvoadugvdgyoizpnnusf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400878.8925123-352-171635153099245/AnsiballZ_file.py'
Nov 29 07:21:19 compute-2 sudo[125368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:19.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:19 compute-2 python3.9[125370]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:19 compute-2 sudo[125368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:20 compute-2 ceph-mon[77138]: pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:20 compute-2 sudo[125520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfppowvoifjpdqzodlqlswocumwapbyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400880.2417254-391-25237133283844/AnsiballZ_command.py'
Nov 29 07:21:20 compute-2 sudo[125520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:20 compute-2 python3.9[125522]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:20 compute-2 sudo[125520]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:21 compute-2 sudo[125674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqiaxhrsyijfpzgxpjehtyyjndhvribp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400881.2587037-415-218178959718634/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:21:21 compute-2 sudo[125674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:21 compute-2 python3[125676]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:21:21 compute-2 sudo[125674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:22 compute-2 sudo[125826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgngyemmomhesaniabgljmeehmgtpvfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400882.2313466-439-26841543795441/AnsiballZ_stat.py'
Nov 29 07:21:22 compute-2 sudo[125826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:22 compute-2 python3.9[125828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:22 compute-2 sudo[125826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:23.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:23 compute-2 sudo[125952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jktdxzkkvemrrupbevpvscbwidohcicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400882.2313466-439-26841543795441/AnsiballZ_copy.py'
Nov 29 07:21:23 compute-2 sudo[125952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:23 compute-2 python3.9[125954]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400882.2313466-439-26841543795441/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:23 compute-2 sudo[125952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:23 compute-2 ceph-mon[77138]: pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:24 compute-2 sudo[126104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rppwhvfhfnufqgrwkyetulxwdnwunfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400883.8223014-484-41981498728165/AnsiballZ_stat.py'
Nov 29 07:21:24 compute-2 sudo[126104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:24 compute-2 python3.9[126106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:24 compute-2 sudo[126104]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:24 compute-2 sudo[126229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpgqasjasmixmggtqqduzkwlljdefhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400883.8223014-484-41981498728165/AnsiballZ_copy.py'
Nov 29 07:21:24 compute-2 sudo[126229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:25 compute-2 python3.9[126231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400883.8223014-484-41981498728165/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:21:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:21:25 compute-2 sudo[126229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:26 compute-2 sudo[126382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmljvvwlmutfhvzhjmydkcksktaqzmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400886.0100677-528-146892457978075/AnsiballZ_stat.py'
Nov 29 07:21:26 compute-2 sudo[126382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:26 compute-2 python3.9[126384]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:26 compute-2 sudo[126382]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:27 compute-2 sudo[126508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkowvbctntrtgpkhkcbknslkolhommnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400886.0100677-528-146892457978075/AnsiballZ_copy.py'
Nov 29 07:21:27 compute-2 sudo[126508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:27 compute-2 python3.9[126510]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400886.0100677-528-146892457978075/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:27 compute-2 sudo[126508]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:21:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:21:27 compute-2 sudo[126660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuevkhbocbgitfpupkedzdzwxjqeocvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400887.6405-573-261605552450808/AnsiballZ_stat.py'
Nov 29 07:21:27 compute-2 sudo[126660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:28 compute-2 python3.9[126662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:28 compute-2 sudo[126660]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:28 compute-2 sudo[126785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urzdmvzsbgieasqzhbecyhayyjyphden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400887.6405-573-261605552450808/AnsiballZ_copy.py'
Nov 29 07:21:28 compute-2 sudo[126785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:28 compute-2 python3.9[126787]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400887.6405-573-261605552450808/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:28 compute-2 sudo[126785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:21:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:21:29 compute-2 sudo[126813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:29 compute-2 sudo[126813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:29 compute-2 sudo[126813]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-2 ceph-mon[77138]: pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:29 compute-2 ceph-mon[77138]: pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:29 compute-2 sudo[126861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:21:29 compute-2 sudo[126861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:29 compute-2 sudo[126861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-2 sudo[126915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:29 compute-2 sudo[126915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:29 compute-2 sudo[126915]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-2 sudo[126940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:21:29 compute-2 sudo[126940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:29 compute-2 sudo[126940]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-2 sudo[127059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhdgrvbnjhvnewwfptvvhtfzlqwnasry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400889.2631726-619-115763533673338/AnsiballZ_stat.py'
Nov 29 07:21:29 compute-2 sudo[127059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:29.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:29 compute-2 python3.9[127061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:29 compute-2 sudo[127059]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:30 compute-2 sudo[127184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkiyuxasdjgygphkjzzetxmnaagwjcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400889.2631726-619-115763533673338/AnsiballZ_copy.py'
Nov 29 07:21:30 compute-2 sudo[127184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:30 compute-2 python3.9[127186]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400889.2631726-619-115763533673338/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:30 compute-2 sudo[127184]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:31.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 1875 writes, 11K keys, 1875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 1875 writes, 1875 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1875 writes, 11K keys, 1875 commit groups, 1.0 writes per commit group, ingest: 21.49 MB, 0.04 MB/s
                                           Interval WAL: 1875 writes, 1875 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     52.5      0.22              0.04         4    0.054       0      0       0.0       0.0
                                             L6      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    101.3     86.5      0.28              0.07         3    0.094     12K   1292       0.0       0.0
                                            Sum      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     57.3     71.7      0.50              0.10         7    0.071     12K   1292       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     69.2     86.5      0.41              0.10         6    0.069     12K   1292       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    101.3     86.5      0.28              0.07         3    0.094     12K   1292       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     86.7      0.13              0.04         3    0.044       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 1.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.00011 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(58,1.29 MB,0.422784%) FilterBlock(7,41.30 KB,0.0132661%) IndexBlock(7,92.83 KB,0.0298199%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:21:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:33.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:21:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:37.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:37 compute-2 sudo[127215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:37 compute-2 sudo[127215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:37 compute-2 sudo[127215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:37 compute-2 sudo[127240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:37 compute-2 sudo[127240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:37 compute-2 sudo[127240]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:37.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:21:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos active c 503..1138) lease_timeout -- calling new election
Nov 29 07:21:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:41.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:42 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:21:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:43.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:43.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:45.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:45 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc MDS connection to Monitors appears to be laggy; 18.9256s since last acked beacon
Nov 29 07:21:45 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:21:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:45 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:21:45 compute-2 ceph-mon[77138]: paxos.1).electionLogic(20) init, last seen epoch 20
Nov 29 07:21:46 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:21:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:21:46 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:21:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:47 compute-2 sudo[127395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuiwmfptgixvrwihsfoewvngcrucwfdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400907.1279225-663-132331715653429/AnsiballZ_file.py'
Nov 29 07:21:47 compute-2 sudo[127395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:47 compute-2 python3.9[127397]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:47 compute-2 sudo[127395]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:48 compute-2 sudo[127547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdhrtlqqdvenimopiqnzrnmxfksgrbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400907.913621-687-112849317945331/AnsiballZ_command.py'
Nov 29 07:21:48 compute-2 sudo[127547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:48 compute-2 python3.9[127549]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:48 compute-2 sudo[127547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:49.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:21:49 compute-2 sudo[127703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqdoaukyyhajyywxciyqtzchlvwvajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400908.717866-712-213817234993729/AnsiballZ_blockinfile.py'
Nov 29 07:21:49 compute-2 sudo[127703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:49 compute-2 python3.9[127705]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:49 compute-2 sudo[127703]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:49 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc  MDS is no longer laggy
Nov 29 07:21:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:49.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:50 compute-2 sudo[127855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxxumnxzsdnthcxnhakpldmqyoyozhyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400910.1620402-739-61522729210309/AnsiballZ_command.py'
Nov 29 07:21:50 compute-2 sudo[127855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:50 compute-2 python3.9[127857]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:50 compute-2 sudo[127855]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:51 compute-2 sudo[128009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwqrberjpostqimehvlqwinxknfomxit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400911.305445-763-108298784321540/AnsiballZ_stat.py'
Nov 29 07:21:51 compute-2 sudo[128009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:51 compute-2 python3.9[128011]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:21:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:21:51 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:21:51 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:21:51 compute-2 ceph-mon[77138]: pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:51 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:21:51 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:21:51 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:21:51 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 12m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:21:51 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:21:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:51 compute-2 sudo[128009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:52 compute-2 sudo[128183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbnigklllbyqaubdmwqupysiuedkjnfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400912.1368675-787-64633723286695/AnsiballZ_command.py'
Nov 29 07:21:52 compute-2 sudo[128183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:52 compute-2 sudo[128144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:52 compute-2 sudo[128144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:52 compute-2 sudo[128144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:52 compute-2 sudo[128191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:21:52 compute-2 sudo[128191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:52 compute-2 sudo[128191]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:52 compute-2 sudo[128216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:52 compute-2 sudo[128216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:52 compute-2 sudo[128216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:52 compute-2 python3.9[128189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:52 compute-2 sudo[128241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:21:52 compute-2 sudo[128241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:52 compute-2 sudo[128183]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:53 compute-2 sudo[128241]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:53 compute-2 sudo[128451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domidhmfykjxlltfzkuhwxarqzerwcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400912.9741347-810-95187755557792/AnsiballZ_file.py'
Nov 29 07:21:53 compute-2 sudo[128451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:53 compute-2 python3.9[128453]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:53 compute-2 sudo[128451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:53.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:55 compute-2 ceph-mon[77138]: pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:21:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 07:21:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:21:55 compute-2 python3.9[128603]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:56 compute-2 sudo[128755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppwsvssuqdntvuvymooetxfpfnimhvvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400915.9135294-930-266478613652404/AnsiballZ_command.py'
Nov 29 07:21:56 compute-2 sudo[128755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:56 compute-2 python3.9[128757]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:56 compute-2 ovs-vsctl[128758]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 07:21:56 compute-2 sudo[128755]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:57 compute-2 sudo[128909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-errdzajknqwizpceaukuresbqyjallfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400917.3737729-957-145918653609455/AnsiballZ_command.py'
Nov 29 07:21:57 compute-2 sudo[128909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:57.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:21:57 compute-2 sudo[128912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:57 compute-2 sudo[128912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:57 compute-2 sudo[128912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:57 compute-2 python3.9[128911]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:58 compute-2 sudo[128909]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:58 compute-2 sudo[128937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:58 compute-2 sudo[128937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:58 compute-2 sudo[128937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:58 compute-2 sudo[129114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwahdbzoxvnbhrylqxwpxxmfeurweoxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400918.2390258-981-9917131921503/AnsiballZ_command.py'
Nov 29 07:21:58 compute-2 sudo[129114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:58 compute-2 python3.9[129116]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:58 compute-2 ovs-vsctl[129117]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 07:21:58 compute-2 sudo[129114]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:21:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:21:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:21:59 compute-2 ceph-mon[77138]: pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:21:59 compute-2 ceph-mon[77138]: pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:21:59 compute-2 python3.9[129268]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:21:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:21:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:21:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:00 compute-2 sudo[129420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kumqtwnygqgkshpzscmlhiknstokzrmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400920.2129705-1033-82873574572079/AnsiballZ_file.py'
Nov 29 07:22:00 compute-2 sudo[129420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:00 compute-2 python3.9[129422]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:00 compute-2 sudo[129420]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:01 compute-2 sudo[129573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjinycmexixcgfzdcslgvpffikbkktxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400921.0905933-1057-225577765041053/AnsiballZ_stat.py'
Nov 29 07:22:01 compute-2 sudo[129573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:01 compute-2 python3.9[129575]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:01 compute-2 sudo[129573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:01 compute-2 sudo[129651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuegggyvtctsggnwtlrhtqikvgkceuqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400921.0905933-1057-225577765041053/AnsiballZ_file.py'
Nov 29 07:22:01 compute-2 sudo[129651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:02 compute-2 python3.9[129653]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:02 compute-2 sudo[129651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:02 compute-2 sudo[129803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxunivlopdimhrnnphypffohgdwfyrbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400922.2443683-1057-74410184453666/AnsiballZ_stat.py'
Nov 29 07:22:02 compute-2 sudo[129803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:02 compute-2 python3.9[129805]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:02 compute-2 sudo[129803]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:02 compute-2 ceph-mon[77138]: pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:22:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:22:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:22:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:22:02 compute-2 ceph-mon[77138]: pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:03 compute-2 sudo[129882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffgjfabaqfobehnxasxrmjohhnjzeqbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400922.2443683-1057-74410184453666/AnsiballZ_file.py'
Nov 29 07:22:03 compute-2 sudo[129882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:22:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:22:03 compute-2 python3.9[129884]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:03 compute-2 sudo[129882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:04 compute-2 sudo[130034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyjfugsnfoskviqeuebadbrmatzrbrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400923.988074-1126-38492628750422/AnsiballZ_file.py'
Nov 29 07:22:04 compute-2 sudo[130034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:04 compute-2 python3.9[130036]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:04 compute-2 sudo[130034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:04 compute-2 ceph-mon[77138]: pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:04 compute-2 ceph-mon[77138]: pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:05 compute-2 sudo[130187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpstwkwrkjzotbxgrebireoxzjiejhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400924.7899632-1150-263430457953850/AnsiballZ_stat.py'
Nov 29 07:22:05 compute-2 sudo[130187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:05 compute-2 python3.9[130189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:05 compute-2 sudo[130187]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:05 compute-2 sudo[130265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsbhbkolaohnatexmskwzxsargufdmnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400924.7899632-1150-263430457953850/AnsiballZ_file.py'
Nov 29 07:22:05 compute-2 sudo[130265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:05 compute-2 python3.9[130267]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:05 compute-2 sudo[130265]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000069s ======
Nov 29 07:22:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000069s
Nov 29 07:22:06 compute-2 ceph-mon[77138]: pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:06 compute-2 sudo[130417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzibmbvcgvnikvxlialwymdtaclcalo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400926.1667209-1185-220397024745479/AnsiballZ_stat.py'
Nov 29 07:22:06 compute-2 sudo[130417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:06 compute-2 python3.9[130419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:06 compute-2 sudo[130417]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:06 compute-2 sudo[130495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimcnrhxhbcdtosuebdklnqkjzurhasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400926.1667209-1185-220397024745479/AnsiballZ_file.py'
Nov 29 07:22:06 compute-2 sudo[130495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:22:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:22:07 compute-2 python3.9[130498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:07 compute-2 sudo[130495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:07 compute-2 sudo[130648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlceaxuywrgxeokdcrmxnwxyhsrqoaqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400927.4407954-1221-76649749029874/AnsiballZ_systemd.py'
Nov 29 07:22:07 compute-2 sudo[130648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:07 compute-2 ceph-mon[77138]: pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:08 compute-2 python3.9[130650]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:22:08 compute-2 systemd[1]: Reloading.
Nov 29 07:22:08 compute-2 systemd-rc-local-generator[130671]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:08 compute-2 systemd-sysv-generator[130680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:08 compute-2 sudo[130648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:09 compute-2 sudo[130841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hztmcluojwwmgohokzduvnsmdqttbjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400928.7045946-1246-44102286094055/AnsiballZ_stat.py'
Nov 29 07:22:09 compute-2 sudo[130841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:09 compute-2 sshd-session[130737]: Invalid user ubuntu from 45.148.10.240 port 37768
Nov 29 07:22:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:09 compute-2 sshd-session[130737]: Connection closed by invalid user ubuntu 45.148.10.240 port 37768 [preauth]
Nov 29 07:22:09 compute-2 python3.9[130843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:09 compute-2 sudo[130841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-2 sudo[130919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnjpbkdxjwofdpwgiryqgvydgeaidnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400928.7045946-1246-44102286094055/AnsiballZ_file.py'
Nov 29 07:22:09 compute-2 sudo[130919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:09 compute-2 python3.9[130921]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:09 compute-2 sudo[130919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:22:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:22:09 compute-2 ceph-mon[77138]: pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:10 compute-2 sudo[131071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mceimnjzqzloksqbgzaeoxtwpbfbkbhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400929.889028-1281-79443035865249/AnsiballZ_stat.py'
Nov 29 07:22:10 compute-2 sudo[131071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:10 compute-2 python3.9[131073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:10 compute-2 sudo[131071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:10 compute-2 sudo[131149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyadfyajlomgocdbkzvfeeosfzregsxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400929.889028-1281-79443035865249/AnsiballZ_file.py'
Nov 29 07:22:10 compute-2 sudo[131149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:10 compute-2 python3.9[131151]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:10 compute-2 sudo[131149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:11 compute-2 sudo[131302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhkylomdxgzpbpbdivrsvsxsszljvwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400931.1947665-1318-60005457715254/AnsiballZ_systemd.py'
Nov 29 07:22:11 compute-2 sudo[131302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:11 compute-2 python3.9[131304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:22:11 compute-2 systemd[1]: Reloading.
Nov 29 07:22:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:11.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:11 compute-2 ceph-mon[77138]: pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:12 compute-2 systemd-rc-local-generator[131335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:12 compute-2 systemd-sysv-generator[131338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:12 compute-2 systemd[1]: Starting Create netns directory...
Nov 29 07:22:12 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:22:12 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:22:12 compute-2 systemd[1]: Finished Create netns directory.
Nov 29 07:22:12 compute-2 sudo[131302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:13 compute-2 sudo[131497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunajfqsvysmknxaoprowjvwrezltkhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400932.9497442-1347-64064716708868/AnsiballZ_file.py'
Nov 29 07:22:13 compute-2 sudo[131497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:13 compute-2 python3.9[131499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:13 compute-2 sudo[131497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:13 compute-2 sudo[131649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcusldttfsrzzblhuzopcqcddckdcgfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400933.704677-1372-159847261591801/AnsiballZ_stat.py'
Nov 29 07:22:13 compute-2 sudo[131649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:14 compute-2 python3.9[131651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:14 compute-2 ceph-mon[77138]: pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:14 compute-2 sudo[131649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:14 compute-2 sudo[131772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvaraxoibdysgfctvfewrxaqvrjhotxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400933.704677-1372-159847261591801/AnsiballZ_copy.py'
Nov 29 07:22:14 compute-2 sudo[131772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:14 compute-2 python3.9[131774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400933.704677-1372-159847261591801/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:14 compute-2 sudo[131772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:15 compute-2 ceph-mon[77138]: pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:15 compute-2 sudo[131899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:15 compute-2 sudo[131899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:15 compute-2 sudo[131899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:15 compute-2 sudo[131949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skkconoojojifzvseccobuhskfyrwrpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400935.3939059-1423-14754911947532/AnsiballZ_file.py'
Nov 29 07:22:15 compute-2 sudo[131949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:15 compute-2 sudo[131952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:22:15 compute-2 sudo[131952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:15 compute-2 sudo[131952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:16 compute-2 python3.9[131955]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:16 compute-2 sudo[131949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:22:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:22:16 compute-2 sudo[132127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmhfanrbxgbsuakqxhfvpetwlnukoghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400936.2982972-1446-133131825511259/AnsiballZ_stat.py'
Nov 29 07:22:16 compute-2 sudo[132127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:16 compute-2 python3.9[132129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:16 compute-2 sudo[132127]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-2 sudo[132251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-honrrdtnfrgkpudbfghciuakrlmcbuai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400936.2982972-1446-133131825511259/AnsiballZ_copy.py'
Nov 29 07:22:17 compute-2 sudo[132251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:17 compute-2 python3.9[132253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400936.2982972-1446-133131825511259/.source.json _original_basename=.btrlxnws follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:17 compute-2 sudo[132251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-2 ceph-mon[77138]: pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:17 compute-2 sudo[132403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imysygvhvkmtcxxxliihontfnwhfingm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400937.5252302-1492-221349458220024/AnsiballZ_file.py'
Nov 29 07:22:17 compute-2 sudo[132403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:18 compute-2 python3.9[132405]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:18 compute-2 sudo[132403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-2 sudo[132406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:18 compute-2 sudo[132406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:18 compute-2 sudo[132406]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-2 sudo[132439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:18 compute-2 sudo[132439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:18 compute-2 sudo[132439]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-2 sudo[132605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtuhmzmepmzytzojcphgndqbfjlxino ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400938.3182695-1515-71597278937103/AnsiballZ_stat.py'
Nov 29 07:22:18 compute-2 sudo[132605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5351 writes, 23K keys, 5351 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5351 writes, 711 syncs, 7.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5351 writes, 23K keys, 5351 commit groups, 1.0 writes per commit group, ingest: 18.59 MB, 0.03 MB/s
                                           Interval WAL: 5351 writes, 711 syncs, 7.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:22:18 compute-2 sudo[132605]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:19.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:19 compute-2 sudo[132729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqpadultpniwuddjompcmalzgybobce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400938.3182695-1515-71597278937103/AnsiballZ_copy.py'
Nov 29 07:22:19 compute-2 sudo[132729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:19 compute-2 sudo[132729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:19.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:20 compute-2 ceph-mon[77138]: pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:20 compute-2 sudo[132881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhywzsxqzdzzsgzfcyrzfhmxbemzwkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.8929064-1567-39659537893328/AnsiballZ_container_config_data.py'
Nov 29 07:22:20 compute-2 sudo[132881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:20 compute-2 python3.9[132883]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 07:22:20 compute-2 sudo[132881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:21.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:21 compute-2 sudo[133034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oazzarxdlsqpamamjkjgtlkvakpbqqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400940.83062-1593-99789620478768/AnsiballZ_container_config_hash.py'
Nov 29 07:22:21 compute-2 sudo[133034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:21 compute-2 python3.9[133036]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:22:21 compute-2 sudo[133034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:21.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:22 compute-2 ceph-mon[77138]: pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:22 compute-2 sudo[133186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxthelelgseqcooayftejwvezoakipmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400941.898947-1620-133905946699458/AnsiballZ_podman_container_info.py'
Nov 29 07:22:22 compute-2 sudo[133186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:22 compute-2 python3.9[133188]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:22:22 compute-2 sudo[133186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000034s ======
Nov 29 07:22:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Nov 29 07:22:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:24 compute-2 ceph-mon[77138]: pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:24 compute-2 sudo[133365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilsgorjanplczvxmseqrphtlaagnogjy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.8281221-1660-111415489064848/AnsiballZ_edpm_container_manage.py'
Nov 29 07:22:24 compute-2 sudo[133365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:24 compute-2 python3[133367]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:22:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:22:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:25.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:22:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:26 compute-2 ceph-mon[77138]: pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:27 compute-2 ceph-mon[77138]: pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:27.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000035s ======
Nov 29 07:22:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:29.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Nov 29 07:22:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:29 compute-2 ceph-mon[77138]: pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:29.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:31.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:31 compute-2 podman[133382]: 2025-11-29 07:22:31.578169537 +0000 UTC m=+6.657701337 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:22:31 compute-2 ceph-mon[77138]: pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:31 compute-2 podman[133504]: 2025-11-29 07:22:31.769814786 +0000 UTC m=+0.058778256 container create d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:22:31 compute-2 podman[133504]: 2025-11-29 07:22:31.738527519 +0000 UTC m=+0.027490989 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:22:31 compute-2 python3[133367]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:22:31 compute-2 sudo[133365]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:33 compute-2 sudo[133694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhaosnmxfzrhmmiyjkgthwsxhdvpkzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400952.6886904-1683-239032054333180/AnsiballZ_stat.py'
Nov 29 07:22:33 compute-2 sudo[133694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:33 compute-2 python3.9[133696]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:22:33 compute-2 sudo[133694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:22:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:33.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:22:34 compute-2 sudo[133848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbpmztisbezlwhpujyqjitjgkahjkai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400953.585741-1710-92948417939004/AnsiballZ_file.py'
Nov 29 07:22:34 compute-2 sudo[133848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:34 compute-2 python3.9[133850]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:34 compute-2 sudo[133848]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:34 compute-2 ceph-mon[77138]: pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:34 compute-2 sudo[133924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecrnxgmakbtgqtcbcwuhldjfrtmoihli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400953.585741-1710-92948417939004/AnsiballZ_stat.py'
Nov 29 07:22:34 compute-2 sudo[133924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:34 compute-2 python3.9[133926]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:22:34 compute-2 sudo[133924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:35 compute-2 sudo[134076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqikzxlfsbqxoohmxhnabsmsjxmgkxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400954.9067008-1710-280389792202268/AnsiballZ_copy.py'
Nov 29 07:22:35 compute-2 sudo[134076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:36 compute-2 python3.9[134078]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400954.9067008-1710-280389792202268/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:36 compute-2 sudo[134076]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:36 compute-2 sudo[134152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jamebpzhbxevaadjedeouqvanifpemxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400954.9067008-1710-280389792202268/AnsiballZ_systemd.py'
Nov 29 07:22:36 compute-2 sudo[134152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:37 compute-2 ceph-mon[77138]: pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:37 compute-2 python3.9[134154]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:22:37 compute-2 systemd[1]: Reloading.
Nov 29 07:22:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:22:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:22:37 compute-2 systemd-rc-local-generator[134183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:37 compute-2 systemd-sysv-generator[134186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:37 compute-2 sudo[134152]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:22:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:22:37 compute-2 sudo[134264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmmwadclonqantswnvlhqynscofgvgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400954.9067008-1710-280389792202268/AnsiballZ_systemd.py'
Nov 29 07:22:38 compute-2 sudo[134264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:38 compute-2 sudo[134267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:38 compute-2 sudo[134267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:38 compute-2 sudo[134267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:38 compute-2 python3.9[134266]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:22:38 compute-2 sudo[134292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:38 compute-2 sudo[134292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:38 compute-2 sudo[134292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:38 compute-2 systemd[1]: Reloading.
Nov 29 07:22:38 compute-2 ceph-mon[77138]: pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:38 compute-2 systemd-rc-local-generator[134347]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:38 compute-2 systemd-sysv-generator[134351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:38 compute-2 systemd[1]: Starting ovn_controller container...
Nov 29 07:22:39 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:22:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71034a2f9cdbdcded93e003c1c00b2cda1bced50654c6984b56dc5bde00954ea/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:39 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09.
Nov 29 07:22:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:41 compute-2 podman[134358]: 2025-11-29 07:22:41.215112114 +0000 UTC m=+2.437357743 container init d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:22:41 compute-2 ceph-mon[77138]: pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + sudo -E kolla_set_configs
Nov 29 07:22:41 compute-2 podman[134358]: 2025-11-29 07:22:41.248267615 +0000 UTC m=+2.470513214 container start d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 07:22:41 compute-2 systemd[1]: Created slice User Slice of UID 0.
Nov 29 07:22:41 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 07:22:41 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 07:22:41 compute-2 systemd[1]: Starting User Manager for UID 0...
Nov 29 07:22:41 compute-2 systemd[134404]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 29 07:22:41 compute-2 systemd[134404]: Queued start job for default target Main User Target.
Nov 29 07:22:41 compute-2 systemd[134404]: Created slice User Application Slice.
Nov 29 07:22:41 compute-2 systemd[134404]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 07:22:41 compute-2 systemd[134404]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:22:41 compute-2 systemd[134404]: Reached target Paths.
Nov 29 07:22:41 compute-2 systemd[134404]: Reached target Timers.
Nov 29 07:22:41 compute-2 systemd[134404]: Starting D-Bus User Message Bus Socket...
Nov 29 07:22:41 compute-2 systemd[134404]: Starting Create User's Volatile Files and Directories...
Nov 29 07:22:41 compute-2 systemd[134404]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:22:41 compute-2 systemd[134404]: Reached target Sockets.
Nov 29 07:22:41 compute-2 systemd[134404]: Finished Create User's Volatile Files and Directories.
Nov 29 07:22:41 compute-2 systemd[134404]: Reached target Basic System.
Nov 29 07:22:41 compute-2 systemd[134404]: Reached target Main User Target.
Nov 29 07:22:41 compute-2 systemd[134404]: Startup finished in 163ms.
Nov 29 07:22:41 compute-2 systemd[1]: Started User Manager for UID 0.
Nov 29 07:22:41 compute-2 systemd[1]: Started Session c1 of User root.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:22:41 compute-2 ovn_controller[134375]: INFO:__main__:Validating config file
Nov 29 07:22:41 compute-2 ovn_controller[134375]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:22:41 compute-2 ovn_controller[134375]: INFO:__main__:Writing out command to execute
Nov 29 07:22:41 compute-2 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: ++ cat /run_command
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + ARGS=
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + sudo kolla_copy_cacerts
Nov 29 07:22:41 compute-2 systemd[1]: Started Session c2 of User root.
Nov 29 07:22:41 compute-2 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + [[ ! -n '' ]]
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + . kolla_extend_start
Nov 29 07:22:41 compute-2 ovn_controller[134375]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + umask 0022
Nov 29 07:22:41 compute-2 ovn_controller[134375]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7150] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7166] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7192] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7204] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7211] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:22:41 compute-2 kernel: br-int: entered promiscuous mode
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00014|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00015|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00016|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00018|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00019|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 07:22:41 compute-2 systemd-udevd[134430]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:22:41 compute-2 ovn_controller[134375]: 2025-11-29T07:22:41Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7841] manager: (ovn-011fdd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7847] manager: (ovn-45d4c7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.7852] manager: (ovn-cb98fb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 07:22:41 compute-2 edpm-start-podman-container[134358]: ovn_controller
Nov 29 07:22:41 compute-2 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 07:22:41 compute-2 systemd-udevd[134434]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.8078] device (genev_sys_6081): carrier: link connected
Nov 29 07:22:41 compute-2 NetworkManager[48993]: <info>  [1764400961.8080] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 07:22:41 compute-2 ceph-mon[77138]: pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:41 compute-2 edpm-start-podman-container[134357]: Creating additional drop-in dependency for "ovn_controller" (d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09)
Nov 29 07:22:41 compute-2 systemd[1]: Reloading.
Nov 29 07:22:41 compute-2 podman[134391]: 2025-11-29 07:22:41.911409647 +0000 UTC m=+0.638189158 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 07:22:41 compute-2 systemd-sysv-generator[134499]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:41 compute-2 systemd-rc-local-generator[134492]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:42 compute-2 systemd[1]: Started ovn_controller container.
Nov 29 07:22:42 compute-2 sudo[134264]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:42 compute-2 sudo[134655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujkupgckcxxishggqzlqcitwfqrxhrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400962.4668105-1795-30741794547141/AnsiballZ_command.py'
Nov 29 07:22:42 compute-2 sudo[134655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:43 compute-2 python3.9[134657]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:43 compute-2 ovs-vsctl[134659]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 07:22:43 compute-2 sudo[134655]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:22:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:43.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:22:43 compute-2 sudo[134809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojzfzpllxsdowdpfngnebbtpnprynmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400963.3023496-1819-171948704368567/AnsiballZ_command.py'
Nov 29 07:22:43 compute-2 sudo[134809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:43 compute-2 python3.9[134811]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:43 compute-2 ovs-vsctl[134813]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 07:22:43 compute-2 ceph-mon[77138]: pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:43 compute-2 sudo[134809]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:44 compute-2 sudo[134964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfmjcetoltauiasblawgndbsrydjakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400964.5598805-1861-103626187201555/AnsiballZ_command.py'
Nov 29 07:22:44 compute-2 sudo[134964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:45 compute-2 python3.9[134966]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:45 compute-2 ovs-vsctl[134968]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 07:22:45 compute-2 sudo[134964]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:45.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:45 compute-2 sshd-session[123160]: Connection closed by 192.168.122.30 port 45764
Nov 29 07:22:45 compute-2 sshd-session[123157]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:45 compute-2 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 07:22:45 compute-2 systemd[1]: session-45.scope: Consumed 1min 3.477s CPU time.
Nov 29 07:22:45 compute-2 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Nov 29 07:22:45 compute-2 systemd-logind[787]: Removed session 45.
Nov 29 07:22:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:22:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:22:46 compute-2 ceph-mon[77138]: pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:48 compute-2 ceph-mon[77138]: pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 ceph-mon[77138]: pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 07:22:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.005000157s ======
Nov 29 07:22:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:49.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000157s
Nov 29 07:22:51 compute-2 sshd-session[134996]: Accepted publickey for zuul from 192.168.122.30 port 41392 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:22:51 compute-2 systemd-logind[787]: New session 47 of user zuul.
Nov 29 07:22:51 compute-2 systemd[1]: Started Session 47 of User zuul.
Nov 29 07:22:51 compute-2 sshd-session[134996]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:22:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:51.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:51 compute-2 ceph-mon[77138]: pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 07:22:51 compute-2 systemd[1]: Stopping User Manager for UID 0...
Nov 29 07:22:51 compute-2 systemd[134404]: Activating special unit Exit the Session...
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped target Main User Target.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped target Basic System.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped target Paths.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped target Sockets.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped target Timers.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:22:51 compute-2 systemd[134404]: Closed D-Bus User Message Bus Socket.
Nov 29 07:22:51 compute-2 systemd[134404]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:22:51 compute-2 systemd[134404]: Removed slice User Application Slice.
Nov 29 07:22:51 compute-2 systemd[134404]: Reached target Shutdown.
Nov 29 07:22:51 compute-2 systemd[134404]: Finished Exit the Session.
Nov 29 07:22:51 compute-2 systemd[134404]: Reached target Exit the Session.
Nov 29 07:22:51 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 07:22:51 compute-2 systemd[1]: Stopped User Manager for UID 0.
Nov 29 07:22:51 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 07:22:51 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 07:22:51 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 07:22:51 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 07:22:51 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 07:22:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:52 compute-2 python3.9[135152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:22:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:53.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:22:53 compute-2 sudo[135307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsenycorxvpbamutrxsaqjiibeoolytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400972.7951434-69-261695497304963/AnsiballZ_file.py'
Nov 29 07:22:53 compute-2 sudo[135307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:53 compute-2 python3.9[135309]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:53 compute-2 sudo[135307]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:53 compute-2 ceph-mon[77138]: pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Nov 29 07:22:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:54 compute-2 sudo[135459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilnpdiotzvldqnlliwickhqidncyaecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400973.6809492-69-198290563711397/AnsiballZ_file.py'
Nov 29 07:22:54 compute-2 sudo[135459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:54 compute-2 python3.9[135461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:54 compute-2 sudo[135459]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:54 compute-2 sudo[135611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htvjffeoakvtxlzvtnogaknephvhjxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400974.4418232-69-252524343552984/AnsiballZ_file.py'
Nov 29 07:22:54 compute-2 sudo[135611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:54 compute-2 python3.9[135613]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:54 compute-2 sudo[135611]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:55.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:55 compute-2 sudo[135764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfezjxdbxxgkvvegabfyivnxtfplflv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400975.176797-69-136505641195136/AnsiballZ_file.py'
Nov 29 07:22:55 compute-2 sudo[135764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:55 compute-2 python3.9[135766]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:55 compute-2 sudo[135764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:56 compute-2 ceph-mon[77138]: pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Nov 29 07:22:56 compute-2 sudo[135916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hegjyiygivnzzownlwzmmzpkkqkcmcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400975.9100506-69-162247257841340/AnsiballZ_file.py'
Nov 29 07:22:56 compute-2 sudo[135916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:56 compute-2 python3.9[135918]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:56 compute-2 sudo[135916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:22:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:22:57 compute-2 python3.9[136069]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:57 compute-2 ceph-mon[77138]: pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 07:22:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:58 compute-2 sudo[136219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyenhpyfebdzlnphhmasfzfvzloyqdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400977.666633-201-227703961771646/AnsiballZ_seboolean.py'
Nov 29 07:22:58 compute-2 sudo[136219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:58 compute-2 python3.9[136221]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 07:22:58 compute-2 sudo[136222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:58 compute-2 sudo[136222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:58 compute-2 sudo[136222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:58 compute-2 sudo[136247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:58 compute-2 sudo[136247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:58 compute-2 sudo[136247]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:59 compute-2 sudo[136219]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:22:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:22:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:22:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:22:59 compute-2 ceph-mon[77138]: pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 07:23:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:00 compute-2 python3.9[136423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:00 compute-2 python3.9[136544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400979.3786037-225-132180879003160/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:01.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:01 compute-2 python3.9[136695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:02 compute-2 ceph-mon[77138]: pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 07:23:02 compute-2 python3.9[136816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400981.1061163-271-89928474522028/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:03 compute-2 sudo[136967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqzelgojncpcainonqkfhqivrkqqpwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400982.8666744-321-138818580976757/AnsiballZ_setup.py'
Nov 29 07:23:03 compute-2 sudo[136967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:03 compute-2 python3.9[136969]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:23:03 compute-2 ceph-mon[77138]: pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Nov 29 07:23:03 compute-2 sudo[136967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:04 compute-2 sudo[137051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybkahqtpukfwsnjltqfikdcbawhrqgdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400982.8666744-321-138818580976757/AnsiballZ_dnf.py'
Nov 29 07:23:04 compute-2 sudo[137051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:04 compute-2 python3.9[137053]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:23:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:05.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:05 compute-2 ceph-mon[77138]: pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Nov 29 07:23:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:06 compute-2 sudo[137051]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:08 compute-2 sudo[137206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckyhqrholealojsjustyszdibszafuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400987.6502573-357-175725774950593/AnsiballZ_systemd.py'
Nov 29 07:23:08 compute-2 sudo[137206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:08 compute-2 python3.9[137208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:23:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:09.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:09 compute-2 sudo[137206]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:09 compute-2 ceph-mon[77138]: pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 07:23:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:10 compute-2 python3.9[137362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:11 compute-2 ceph-mon[77138]: pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:11 compute-2 python3.9[137484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400990.0322497-381-90906025282427/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:11.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:11 compute-2 python3.9[137634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:12 compute-2 ceph-mon[77138]: pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:12.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:12 compute-2 ovn_controller[134375]: 2025-11-29T07:23:12Z|00025|memory|INFO|16256 kB peak resident set size after 30.8 seconds
Nov 29 07:23:12 compute-2 ovn_controller[134375]: 2025-11-29T07:23:12Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 29 07:23:12 compute-2 podman[137729]: 2025-11-29 07:23:12.59014696 +0000 UTC m=+0.180052524 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:23:12 compute-2 python3.9[137765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400991.4592013-381-6537099712131/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:14 compute-2 python3.9[137929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:14.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:14 compute-2 ceph-mon[77138]: pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:14 compute-2 python3.9[138050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400993.5518813-513-13453934444072/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:15 compute-2 python3.9[138201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:15 compute-2 ceph-mon[77138]: pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:15 compute-2 python3.9[138322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400994.750808-513-81306406784955/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:15 compute-2 sudo[138329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:15 compute-2 sudo[138329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:16 compute-2 sudo[138329]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:16 compute-2 sudo[138372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:16 compute-2 sudo[138372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:16 compute-2 sudo[138372]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:16 compute-2 sudo[138397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:16 compute-2 sudo[138397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:16 compute-2 sudo[138397]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:16 compute-2 sudo[138445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:23:16 compute-2 sudo[138445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:16 compute-2 python3.9[138586]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:16 compute-2 sudo[138445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:17 compute-2 sudo[138756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykytkilpzzleryjvbplzfwoshbvjvjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400996.9768553-627-64736923658356/AnsiballZ_file.py'
Nov 29 07:23:17 compute-2 sudo[138756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:17 compute-2 python3.9[138758]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:17 compute-2 sudo[138756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-2 ceph-mon[77138]: pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:23:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:18 compute-2 sudo[138908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjhefedkptfcwoaloqwburgzfegjiout ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400997.723082-651-14822175138332/AnsiballZ_stat.py'
Nov 29 07:23:18 compute-2 sudo[138908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:18 compute-2 python3.9[138910]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:18 compute-2 sudo[138908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:18 compute-2 sudo[138986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbctgilcmydbstiwdywcyrnmdmqjcnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400997.723082-651-14822175138332/AnsiballZ_file.py'
Nov 29 07:23:18 compute-2 sudo[138986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:18 compute-2 sudo[138989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:18 compute-2 sudo[138989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:18 compute-2 sudo[138989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:18 compute-2 python3.9[138988]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:18 compute-2 sudo[139014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:18 compute-2 sudo[139014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:18 compute-2 sudo[139014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:18 compute-2 sudo[138986]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-2 sudo[139189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxrqdlqekfnfetquakcxwijefxmgtmrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400998.878243-651-10528792593596/AnsiballZ_stat.py'
Nov 29 07:23:19 compute-2 sudo[139189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:19 compute-2 python3.9[139191]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:19 compute-2 sudo[139189]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-2 sudo[139267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrsqlzawkmxfrwyzqnxgumtudgqivnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400998.878243-651-10528792593596/AnsiballZ_file.py'
Nov 29 07:23:19 compute-2 sudo[139267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:19 compute-2 ceph-mon[77138]: pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:19 compute-2 python3.9[139269]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:20 compute-2 sudo[139267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:20 compute-2 sudo[139419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspajkiqoodvzqvgqaauljycmifkslmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401000.2408903-721-258498247253184/AnsiballZ_file.py'
Nov 29 07:23:20 compute-2 sudo[139419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:20 compute-2 python3.9[139421]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:20 compute-2 sudo[139419]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:21 compute-2 sudo[139572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldyolkuorddcstfgohncobzmahtngxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401001.0154166-744-156257438038448/AnsiballZ_stat.py'
Nov 29 07:23:21 compute-2 sudo[139572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:21 compute-2 python3.9[139574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:21 compute-2 sudo[139572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-2 ceph-mon[77138]: pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:21 compute-2 sudo[139650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbdtjdtmieuzgtmpkzrvgewpmlarsvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401001.0154166-744-156257438038448/AnsiballZ_file.py'
Nov 29 07:23:21 compute-2 sudo[139650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:22.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:22 compute-2 python3.9[139652]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:22 compute-2 sudo[139650]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:22 compute-2 sudo[139802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdpfbtffwualzyqzhjtbytumoouwsvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401002.3204875-781-176827147624164/AnsiballZ_stat.py'
Nov 29 07:23:22 compute-2 sudo[139802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:22 compute-2 python3.9[139804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:22 compute-2 sudo[139802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-2 sudo[139815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:23 compute-2 sudo[139815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:23 compute-2 sudo[139815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-2 sudo[139870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:23:23 compute-2 sudo[139870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:23 compute-2 sudo[139870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-2 sudo[139931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwedbxbhuzypmnlmtskezuesissevtwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401002.3204875-781-176827147624164/AnsiballZ_file.py'
Nov 29 07:23:23 compute-2 sudo[139931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:23.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:23 compute-2 python3.9[139933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:23 compute-2 sudo[139931]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-2 ceph-mon[77138]: pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:23:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:23:23 compute-2 sudo[140083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuxopbbwsvlzbsldizjvujhzuzplactp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401003.6772487-817-17606273038651/AnsiballZ_systemd.py'
Nov 29 07:23:23 compute-2 sudo[140083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:24.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:24 compute-2 python3.9[140085]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:24 compute-2 systemd[1]: Reloading.
Nov 29 07:23:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:24 compute-2 systemd-sysv-generator[140114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:24 compute-2 systemd-rc-local-generator[140109]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:24 compute-2 sudo[140083]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:25.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:25 compute-2 sudo[140272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikfmuggcjkcekynkqhfibxelfwjryeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401004.992286-841-190620868431549/AnsiballZ_stat.py'
Nov 29 07:23:25 compute-2 sudo[140272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:25 compute-2 python3.9[140274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:25 compute-2 sudo[140272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:25 compute-2 ceph-mon[77138]: pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:25 compute-2 sudo[140350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpfmymksrkgphrdlzfyohivirextwznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401004.992286-841-190620868431549/AnsiballZ_file.py'
Nov 29 07:23:25 compute-2 sudo[140350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:26.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:26 compute-2 python3.9[140352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:26 compute-2 sudo[140350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:26 compute-2 sudo[140502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtuxlafmeytaznfbjishjqwaadlhawmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.3408978-876-276790801640865/AnsiballZ_stat.py'
Nov 29 07:23:26 compute-2 sudo[140502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:26 compute-2 python3.9[140504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:27 compute-2 sudo[140502]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:27 compute-2 sudo[140581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihgzsxctvvxwulqppsyzqbfvuwgwzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.3408978-876-276790801640865/AnsiballZ_file.py'
Nov 29 07:23:27 compute-2 sudo[140581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:27 compute-2 python3.9[140583]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:27 compute-2 sudo[140581]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:27 compute-2 ceph-mon[77138]: pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:28.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:28 compute-2 sudo[140733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlizqeiedjrihsnhvbsidmwcgkotmoqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401007.7946055-912-27427798651277/AnsiballZ_systemd.py'
Nov 29 07:23:28 compute-2 sudo[140733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:28 compute-2 python3.9[140735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:28 compute-2 systemd[1]: Reloading.
Nov 29 07:23:28 compute-2 systemd-rc-local-generator[140760]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:28 compute-2 systemd-sysv-generator[140764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:28 compute-2 systemd[1]: Starting Create netns directory...
Nov 29 07:23:28 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:23:28 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:23:28 compute-2 systemd[1]: Finished Create netns directory.
Nov 29 07:23:29 compute-2 sudo[140733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:29 compute-2 sudo[140929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvshrwuvretwkkerbpzvrasceczzmlmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401009.4318848-942-107992694204235/AnsiballZ_file.py'
Nov 29 07:23:29 compute-2 sudo[140929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:29 compute-2 python3.9[140931]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:29 compute-2 ceph-mon[77138]: pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:30 compute-2 sudo[140929]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:30 compute-2 sudo[141081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwuvxctimazrqguqyotgkmpofluzvst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401010.2176378-967-21218602659705/AnsiballZ_stat.py'
Nov 29 07:23:30 compute-2 sudo[141081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:30 compute-2 python3.9[141083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:30 compute-2 sudo[141081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:31.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:31 compute-2 sudo[141205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztaspdpebylwbyagufhjdavyqcvbzeuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401010.2176378-967-21218602659705/AnsiballZ_copy.py'
Nov 29 07:23:31 compute-2 sudo[141205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:31 compute-2 python3.9[141207]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401010.2176378-967-21218602659705/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:31 compute-2 sudo[141205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:32 compute-2 ceph-mon[77138]: pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:32 compute-2 sudo[141357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgghivjdwrqgqxlxddjwnzrvjknuxejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401011.9407854-1017-145909534320608/AnsiballZ_file.py'
Nov 29 07:23:32 compute-2 sudo[141357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:32 compute-2 python3.9[141359]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:32 compute-2 sudo[141357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:33 compute-2 sudo[141510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokxfbegameyboirhxctycwzkonnzymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401012.7216682-1041-262297242054807/AnsiballZ_stat.py'
Nov 29 07:23:33 compute-2 sudo[141510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:33 compute-2 python3.9[141512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:33 compute-2 sudo[141510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:33 compute-2 sudo[141633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqggpfrhsexotonvyfaqtefahmrillcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401012.7216682-1041-262297242054807/AnsiballZ_copy.py'
Nov 29 07:23:33 compute-2 sudo[141633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:33 compute-2 python3.9[141635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401012.7216682-1041-262297242054807/.source.json _original_basename=.cnc9dqua follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:33 compute-2 sudo[141633]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:34 compute-2 ceph-mon[77138]: pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:34 compute-2 sudo[141785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akehazuxrntkqxoohmzasqkaofqomdrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.2040427-1087-204255477220726/AnsiballZ_file.py'
Nov 29 07:23:34 compute-2 sudo[141785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:34 compute-2 python3.9[141787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:34 compute-2 sudo[141785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:35 compute-2 ceph-mon[77138]: pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:35 compute-2 sudo[141938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmanflukbbnmijagybrvavabvvokmwgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.9646432-1110-190902302390457/AnsiballZ_stat.py'
Nov 29 07:23:35 compute-2 sudo[141938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:35 compute-2 sudo[141938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:35 compute-2 sudo[142061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjyfeigtqxpdqeadcoylivxmezghbqtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.9646432-1110-190902302390457/AnsiballZ_copy.py'
Nov 29 07:23:35 compute-2 sudo[142061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:36.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:36 compute-2 sudo[142061]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:37 compute-2 sudo[142214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pftutwqtmsdchjbtuhbgzzuvywvpzioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401016.682545-1162-269258639756340/AnsiballZ_container_config_data.py'
Nov 29 07:23:37 compute-2 sudo[142214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:37 compute-2 python3.9[142216]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 07:23:37 compute-2 sudo[142214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:37 compute-2 ceph-mon[77138]: pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:38 compute-2 sudo[142366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvlqegjwsividkmftifivvfmjdarhwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401017.6475692-1188-85479446461439/AnsiballZ_container_config_hash.py'
Nov 29 07:23:38 compute-2 sudo[142366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 07:23:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 07:23:38 compute-2 python3.9[142368]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:23:38 compute-2 sudo[142366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:38 compute-2 sudo[142445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:38 compute-2 sudo[142445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:38 compute-2 sudo[142445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:38 compute-2 sudo[142477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:38 compute-2 sudo[142477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:38 compute-2 sudo[142477]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:39 compute-2 sudo[142569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwrpeiunuvgtqzwylsjbpdenmnjkcxst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401018.6177988-1215-255607670988941/AnsiballZ_podman_container_info.py'
Nov 29 07:23:39 compute-2 sudo[142569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:39 compute-2 python3.9[142571]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:23:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:39.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:39 compute-2 sudo[142569]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:40 compute-2 ceph-mon[77138]: pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:40 compute-2 sudo[142746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llallplpugzttnqkepngtptailmufatb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401020.3081744-1254-264306017095579/AnsiballZ_edpm_container_manage.py'
Nov 29 07:23:40 compute-2 sudo[142746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:41 compute-2 python3[142748]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:23:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:41.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:42 compute-2 ceph-mon[77138]: pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:45 compute-2 podman[142809]: 2025-11-29 07:23:45.02878905 +0000 UTC m=+2.090431890 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:23:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:45.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:48.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:49 compute-2 ceph-mon[77138]: pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:50 compute-2 podman[142762]: 2025-11-29 07:23:50.457081542 +0000 UTC m=+9.167050632 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:23:50 compute-2 podman[142926]: 2025-11-29 07:23:50.631734211 +0000 UTC m=+0.061529701 container create d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 07:23:50 compute-2 podman[142926]: 2025-11-29 07:23:50.596075593 +0000 UTC m=+0.025871063 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:23:50 compute-2 python3[142748]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:23:50 compute-2 sudo[142746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 29 07:23:52 compute-2 ceph-mon[77138]: pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:52 compute-2 ceph-mon[77138]: pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:52 compute-2 ceph-mon[77138]: pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.505147) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032505284, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2344, "num_deletes": 252, "total_data_size": 6092042, "memory_usage": 6178208, "flush_reason": "Manual Compaction"}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032537196, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3966002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10150, "largest_seqno": 12489, "table_properties": {"data_size": 3956367, "index_size": 6129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19066, "raw_average_key_size": 20, "raw_value_size": 3937177, "raw_average_value_size": 4153, "num_data_blocks": 274, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400757, "oldest_key_time": 1764400757, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 32161 microseconds, and 10899 cpu microseconds.
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.537279) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3966002 bytes OK
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.537349) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.539293) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.539335) EVENT_LOG_v1 {"time_micros": 1764401032539304, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.539360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6081922, prev total WAL file size 6097364, number of live WAL files 2.
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.542164) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3873KB)], [21(7347KB)]
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032542385, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 11489848, "oldest_snapshot_seqno": -1}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4107 keys, 9260413 bytes, temperature: kUnknown
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032672190, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 9260413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9229088, "index_size": 19951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100039, "raw_average_key_size": 24, "raw_value_size": 9150986, "raw_average_value_size": 2228, "num_data_blocks": 863, "num_entries": 4107, "num_filter_entries": 4107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.672873) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9260413 bytes
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.676522) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.2 rd, 71.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4632, records dropped: 525 output_compression: NoCompression
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.676559) EVENT_LOG_v1 {"time_micros": 1764401032676544, "job": 10, "event": "compaction_finished", "compaction_time_micros": 130202, "compaction_time_cpu_micros": 27883, "output_level": 6, "num_output_files": 1, "total_output_size": 9260413, "num_input_records": 4632, "num_output_records": 4107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032677418, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032678697, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.541924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.678733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.678741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.678742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.678744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:23:52.678746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:23:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:23:55 compute-2 ceph-mon[77138]: pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:55 compute-2 ceph-mon[77138]: pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:56 compute-2 sudo[143114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axpewixdcdzvtioznnxlajndzatulmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401035.8238711-1278-230605490061827/AnsiballZ_stat.py'
Nov 29 07:23:56 compute-2 sudo[143114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:56 compute-2 ceph-mon[77138]: pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:56 compute-2 python3.9[143116]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:56 compute-2 sudo[143114]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:57 compute-2 sudo[143269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usbsaqknnfvijjlugemmovnqlktmrbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401036.7951627-1305-96320405564198/AnsiballZ_file.py'
Nov 29 07:23:57 compute-2 sudo[143269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:57 compute-2 python3.9[143271]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:57 compute-2 sudo[143269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:23:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:23:57 compute-2 ceph-mon[77138]: pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:23:57 compute-2 sudo[143345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzqgimwgvpibxjsouwdrshlxumxknor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401036.7951627-1305-96320405564198/AnsiballZ_stat.py'
Nov 29 07:23:57 compute-2 sudo[143345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:57 compute-2 python3.9[143347]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:57 compute-2 sudo[143345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:57 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:23:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:23:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:23:58 compute-2 sudo[143497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjdtjiwvlwcvngkwxfjgvkvdnmznyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.8206794-1305-260763429290864/AnsiballZ_copy.py'
Nov 29 07:23:58 compute-2 sudo[143497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:58 compute-2 python3.9[143499]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401037.8206794-1305-260763429290864/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:58 compute-2 sudo[143497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:58 compute-2 sudo[143573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egzgbpeaylnqrltfuttcazjzaxclatsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.8206794-1305-260763429290864/AnsiballZ_systemd.py'
Nov 29 07:23:58 compute-2 sudo[143573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:59 compute-2 sudo[143576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:59 compute-2 sudo[143576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:59 compute-2 sudo[143576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-2 sudo[143602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:59 compute-2 sudo[143602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:59 compute-2 sudo[143602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-2 python3.9[143575]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:23:59 compute-2 systemd[1]: Reloading.
Nov 29 07:23:59 compute-2 systemd-rc-local-generator[143649]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:59 compute-2 systemd-sysv-generator[143657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:23:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:23:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:23:59 compute-2 sudo[143573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-2 sudo[143735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buihelufbpktofagzrdqsznntrlrzoba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.8206794-1305-260763429290864/AnsiballZ_systemd.py'
Nov 29 07:23:59 compute-2 sudo[143735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:00 compute-2 python3.9[143737]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:00 compute-2 systemd[1]: Reloading.
Nov 29 07:24:00 compute-2 systemd-sysv-generator[143771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:00 compute-2 systemd-rc-local-generator[143767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:00 compute-2 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 07:24:00 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:24:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8b070ed9393e2b82e602af6ac14da78f37907059ea99f33c69398d904d07ac/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8b070ed9393e2b82e602af6ac14da78f37907059ea99f33c69398d904d07ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:00 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12.
Nov 29 07:24:00 compute-2 podman[143778]: 2025-11-29 07:24:00.79164011 +0000 UTC m=+0.301666214 container init d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + sudo -E kolla_set_configs
Nov 29 07:24:00 compute-2 podman[143778]: 2025-11-29 07:24:00.819215185 +0000 UTC m=+0.329241279 container start d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:24:00 compute-2 edpm-start-podman-container[143778]: ovn_metadata_agent
Nov 29 07:24:00 compute-2 edpm-start-podman-container[143777]: Creating additional drop-in dependency for "ovn_metadata_agent" (d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12)
Nov 29 07:24:00 compute-2 systemd[1]: Reloading.
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Validating config file
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Copying service configuration files
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Writing out command to execute
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: ++ cat /run_command
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + CMD=neutron-ovn-metadata-agent
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + ARGS=
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + sudo kolla_copy_cacerts
Nov 29 07:24:00 compute-2 podman[143802]: 2025-11-29 07:24:00.949542204 +0000 UTC m=+0.106969327 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + [[ ! -n '' ]]
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + . kolla_extend_start
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + umask 0022
Nov 29 07:24:00 compute-2 ovn_metadata_agent[143796]: + exec neutron-ovn-metadata-agent
Nov 29 07:24:00 compute-2 systemd-rc-local-generator[143875]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:00 compute-2 systemd-sysv-generator[143878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:01 compute-2 ceph-mon[77138]: pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:01 compute-2 systemd[1]: Started ovn_metadata_agent container.
Nov 29 07:24:01 compute-2 sshd-session[143791]: Invalid user ubuntu from 45.148.10.240 port 52162
Nov 29 07:24:01 compute-2 sudo[143735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:01 compute-2 sshd-session[143791]: Connection closed by invalid user ubuntu 45.148.10.240 port 52162 [preauth]
Nov 29 07:24:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:01.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:02.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:02 compute-2 ceph-mon[77138]: pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.189 143801 INFO neutron.common.config [-] Logging enabled!
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.189 143801 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.189 143801 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.190 143801 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.191 143801 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.192 143801 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.193 143801 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.194 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.195 143801 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.196 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.197 143801 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.198 143801 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.199 143801 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.200 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.201 143801 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.202 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.203 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.204 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.205 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.206 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.207 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.208 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.209 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.210 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.211 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.212 143801 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.213 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.214 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.215 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.216 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.217 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.218 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.219 143801 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.220 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.221 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.222 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.223 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.224 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.224 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.224 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.224 143801 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.224 143801 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.272 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.272 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.272 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.273 143801 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.273 143801 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.287 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c8abfd39-a629-4854-b6ed-e2d68f35f5fb (UUID: c8abfd39-a629-4854-b6ed-e2d68f35f5fb) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.311 143801 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.311 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.311 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.311 143801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.314 143801 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.320 143801 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.325 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], external_ids={}, name=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, nb_cfg_timestamp=1764400969739, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.326 143801 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f0430a7af70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.327 143801 INFO oslo_service.service [-] Starting 1 workers
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.334 143801 DEBUG oslo_service.service [-] Started child 143912 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.338 143912 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-430559'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.338 143801 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp6_svnl76/privsep.sock']
Nov 29 07:24:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:03.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.365 143912 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.366 143912 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.366 143912 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.370 143912 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.376 143912 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.383 143912 INFO eventlet.wsgi.server [-] (143912) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 07:24:03 compute-2 ceph-mon[77138]: pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:03 compute-2 sshd-session[134999]: Connection closed by 192.168.122.30 port 41392
Nov 29 07:24:03 compute-2 sshd-session[134996]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:24:03 compute-2 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 07:24:03 compute-2 systemd[1]: session-47.scope: Consumed 1min 2.463s CPU time.
Nov 29 07:24:03 compute-2 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Nov 29 07:24:03 compute-2 systemd-logind[787]: Removed session 47.
Nov 29 07:24:03 compute-2 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.071 143801 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.072 143801 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6_svnl76/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.914 143917 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.920 143917 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.922 143917 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:03.922 143917 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143917
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.076 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a11295-97ad-4a53-8a9f-379cf1db3124]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:24:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:04.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.679 143917 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.679 143917 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:24:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:04.679 143917 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:24:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.266 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b15dcb4a-0db5-4c9f-ac9e-abba1f4f9ec3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.270 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, column=external_ids, values=({'neutron:ovn-metadata-id': 'f6ff0ea2-6515-5fe1-a2da-3e6bdb605b56'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.282 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.304 143801 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.304 143801 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.304 143801 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.305 143801 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.305 143801 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.306 143801 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.306 143801 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.306 143801 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.306 143801 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.307 143801 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.307 143801 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.307 143801 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.307 143801 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.307 143801 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.308 143801 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.308 143801 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.308 143801 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.308 143801 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.309 143801 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.310 143801 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.311 143801 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.311 143801 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.311 143801 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.311 143801 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.311 143801 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.312 143801 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.313 143801 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.314 143801 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.315 143801 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.316 143801 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.317 143801 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.318 143801 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.319 143801 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.320 143801 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.321 143801 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.321 143801 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.321 143801 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.321 143801 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.321 143801 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.322 143801 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.322 143801 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.322 143801 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.322 143801 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.322 143801 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.323 143801 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.323 143801 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.323 143801 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.323 143801 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.324 143801 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.324 143801 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.324 143801 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.325 143801 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.325 143801 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.325 143801 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.326 143801 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.326 143801 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.326 143801 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.326 143801 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.327 143801 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.327 143801 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.327 143801 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.327 143801 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.328 143801 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.328 143801 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.328 143801 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.329 143801 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.329 143801 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.329 143801 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.329 143801 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.330 143801 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.330 143801 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.330 143801 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.330 143801 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.331 143801 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.331 143801 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.331 143801 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.331 143801 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.332 143801 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.332 143801 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.332 143801 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.332 143801 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.333 143801 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.333 143801 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.333 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.333 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.333 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.334 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.334 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.334 143801 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.335 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.335 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.335 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.336 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.336 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.336 143801 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.336 143801 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.336 143801 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.337 143801 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.337 143801 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.337 143801 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.337 143801 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.337 143801 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.338 143801 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.339 143801 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.339 143801 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.339 143801 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.339 143801 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.339 143801 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.340 143801 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.340 143801 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.340 143801 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.340 143801 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.341 143801 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.341 143801 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.341 143801 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.341 143801 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.341 143801 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.342 143801 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.342 143801 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.342 143801 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.342 143801 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.342 143801 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.343 143801 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.343 143801 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.343 143801 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.343 143801 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.344 143801 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.344 143801 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.344 143801 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.344 143801 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.344 143801 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.345 143801 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.345 143801 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.345 143801 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.345 143801 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.345 143801 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.346 143801 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.346 143801 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.346 143801 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.346 143801 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.346 143801 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.347 143801 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.347 143801 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.347 143801 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.347 143801 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:05.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.348 143801 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.348 143801 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.348 143801 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.348 143801 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.348 143801 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.349 143801 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.349 143801 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.349 143801 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.349 143801 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.349 143801 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.350 143801 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.351 143801 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.352 143801 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.353 143801 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.354 143801 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.355 143801 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.356 143801 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.357 143801 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.357 143801 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.357 143801 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.357 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.357 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.358 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.359 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.360 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.361 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.362 143801 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.363 143801 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:24:05.363 143801 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:24:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:06 compute-2 ceph-mon[77138]: pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:07.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:08 compute-2 ceph-mon[77138]: pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:09 compute-2 ceph-mon[77138]: pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:09.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:09 compute-2 sshd-session[143925]: Accepted publickey for zuul from 192.168.122.30 port 37982 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:24:09 compute-2 systemd-logind[787]: New session 48 of user zuul.
Nov 29 07:24:09 compute-2 systemd[1]: Started Session 48 of User zuul.
Nov 29 07:24:09 compute-2 sshd-session[143925]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:24:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:10.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:10 compute-2 python3.9[144078]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:24:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:11 compute-2 ceph-mon[77138]: pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:11 compute-2 sudo[144233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bipgmvdsxixvczimlzgvpdfnyjuezyke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401051.488427-69-262742287124378/AnsiballZ_command.py'
Nov 29 07:24:11 compute-2 sudo[144233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:12.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:12 compute-2 python3.9[144235]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:12 compute-2 sudo[144233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:13 compute-2 sudo[144399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmavnfgamelzlnypzmpjelwzkkhulty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401052.7151716-102-63214715906057/AnsiballZ_systemd_service.py'
Nov 29 07:24:13 compute-2 sudo[144399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:13 compute-2 python3.9[144401]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:24:13 compute-2 systemd[1]: Reloading.
Nov 29 07:24:13 compute-2 ceph-mon[77138]: pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:13 compute-2 systemd-rc-local-generator[144428]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:13 compute-2 systemd-sysv-generator[144432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:14 compute-2 sudo[144399]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:14 compute-2 python3.9[144586]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:24:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:15 compute-2 network[144604]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:24:15 compute-2 network[144605]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:24:15 compute-2 network[144606]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:24:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:15.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:16 compute-2 ceph-mon[77138]: pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:18 compute-2 ceph-mon[77138]: pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:18 compute-2 podman[144686]: 2025-11-29 07:24:18.750980226 +0000 UTC m=+0.137487974 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 07:24:19 compute-2 sudo[144738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:19 compute-2 sudo[144738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:19 compute-2 sudo[144738]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:19 compute-2 sudo[144766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:19 compute-2 sudo[144766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:19 compute-2 sudo[144766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:20 compute-2 ceph-mon[77138]: pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:20 compute-2 sudo[144944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyeakyclirbpbdvrtjkwjbqacjjllily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401059.863845-160-62137819373854/AnsiballZ_systemd_service.py'
Nov 29 07:24:20 compute-2 sudo[144944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:20 compute-2 python3.9[144946]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:20 compute-2 sudo[144944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:21 compute-2 sudo[145098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmnqdadmiwzpcfsxrcrmlfiiwqukemxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401060.8440132-160-224260443181890/AnsiballZ_systemd_service.py'
Nov 29 07:24:21 compute-2 sudo[145098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:21.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:21 compute-2 python3.9[145100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:21 compute-2 sudo[145098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-2 sudo[145251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssojbkruiexiwwbmmiobirhihzjjpwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401061.7583697-160-139200191132609/AnsiballZ_systemd_service.py'
Nov 29 07:24:22 compute-2 sudo[145251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:22 compute-2 ceph-mon[77138]: pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:23 compute-2 sudo[145263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:23 compute-2 sudo[145263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-2 sudo[145263]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-2 sudo[145288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:24:23 compute-2 sudo[145288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-2 sudo[145288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:23 compute-2 sudo[145313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:23 compute-2 sudo[145313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-2 sudo[145313]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-2 sudo[145338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:24:23 compute-2 sudo[145338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-2 ceph-mon[77138]: pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:23 compute-2 python3.9[145253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:23 compute-2 sudo[145251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:24 compute-2 sudo[145338]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:24 compute-2 sudo[145543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckegixhncqttzhocinwndpezbmcuctyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401064.119196-160-86056159976944/AnsiballZ_systemd_service.py'
Nov 29 07:24:24 compute-2 sudo[145543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:24 compute-2 python3.9[145545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:24:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:24 compute-2 sudo[145543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:25 compute-2 sudo[145697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biwebazimsdefyjlxzetaxmqtyzmspku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401064.9766648-160-149675459809178/AnsiballZ_systemd_service.py'
Nov 29 07:24:25 compute-2 sudo[145697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:25.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:25 compute-2 python3.9[145699]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:25 compute-2 sudo[145697]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:26.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:26 compute-2 sudo[145850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwtuxtfcmhqgrpkuwlsjukdxijqqyfeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401065.8045156-160-183261229469464/AnsiballZ_systemd_service.py'
Nov 29 07:24:26 compute-2 sudo[145850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:26 compute-2 python3.9[145852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:26 compute-2 sudo[145850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:26 compute-2 sudo[146003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zusjtxlziurphxgbwsmyvtzhoiuewbdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401066.6401732-160-104600171755310/AnsiballZ_systemd_service.py'
Nov 29 07:24:26 compute-2 sudo[146003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:27 compute-2 python3.9[146005]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:27 compute-2 sudo[146003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:27.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:29.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:30 compute-2 ceph-mon[77138]: pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:31 compute-2 sudo[146159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydusjlwjangeiqvakbrfawdayaycuxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401070.643331-316-234517367135855/AnsiballZ_file.py'
Nov 29 07:24:31 compute-2 sudo[146159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:31 compute-2 ceph-mon[77138]: pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:31 compute-2 ceph-mon[77138]: pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:31 compute-2 python3.9[146161]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:31 compute-2 sudo[146159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:31.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:31 compute-2 podman[146234]: 2025-11-29 07:24:31.680693066 +0000 UTC m=+0.075766137 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 07:24:31 compute-2 sudo[146330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjnykufkhoxljbdiavgmfztibldrcznu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401071.5128276-316-47084787376956/AnsiballZ_file.py'
Nov 29 07:24:31 compute-2 sudo[146330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:31 compute-2 python3.9[146332]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:32 compute-2 sudo[146330]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:32 compute-2 ceph-mon[77138]: pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:32 compute-2 sudo[146482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvueenitleywevcsnrcqcklcpxvwvkad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401072.1591613-316-247335641933853/AnsiballZ_file.py'
Nov 29 07:24:32 compute-2 sudo[146482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:32 compute-2 python3.9[146484]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:32 compute-2 sudo[146482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-2 sudo[146635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nboabpwkwrliggmjdzkvuntrivwdwcwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401072.865871-316-232837396050/AnsiballZ_file.py'
Nov 29 07:24:33 compute-2 sudo[146635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:33 compute-2 ceph-mon[77138]: pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:33 compute-2 python3.9[146637]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:33.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:33 compute-2 sudo[146635]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-2 sudo[146787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxfvuwcngjuzhobabmygyipgcsgtkjov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401073.590762-316-131027896745902/AnsiballZ_file.py'
Nov 29 07:24:33 compute-2 sudo[146787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:34 compute-2 python3.9[146789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:34 compute-2 sudo[146787]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:34 compute-2 sudo[146940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhqizbieawjewhdetctvcuerkxlccdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401074.4539802-316-186699924363233/AnsiballZ_file.py'
Nov 29 07:24:34 compute-2 sudo[146940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:34 compute-2 sudo[146939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:34 compute-2 sudo[146939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:34 compute-2 sudo[146939]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:34 compute-2 sudo[146967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:24:34 compute-2 sudo[146967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:34 compute-2 sudo[146967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:35 compute-2 python3.9[146958]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:35 compute-2 sudo[146940]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:24:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:24:35 compute-2 ceph-mon[77138]: pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:35 compute-2 sudo[147142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwatzjmampioftkzmmplnuabqubsokbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401075.2105746-316-186619705907977/AnsiballZ_file.py'
Nov 29 07:24:35 compute-2 sudo[147142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:35 compute-2 python3.9[147144]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:35 compute-2 sudo[147142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:36 compute-2 sudo[147294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhopsnwmmhrlkbgdlrlchaidwfkghxib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401075.9555452-466-158048364217303/AnsiballZ_file.py'
Nov 29 07:24:36 compute-2 sudo[147294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:36 compute-2 python3.9[147296]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:36 compute-2 sudo[147294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:36 compute-2 sudo[147446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mavraeenacwvltjcfhxvrtstfcqszeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401076.6356258-466-99461829890835/AnsiballZ_file.py'
Nov 29 07:24:36 compute-2 sudo[147446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:37 compute-2 python3.9[147449]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:37 compute-2 sudo[147446]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:37.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:37 compute-2 sudo[147599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxitajbsdibwahxxlpvdfcdprnhfiirr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401077.3774996-466-232348531589842/AnsiballZ_file.py'
Nov 29 07:24:37 compute-2 sudo[147599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:37 compute-2 python3.9[147601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:37 compute-2 sudo[147599]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:38.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:38 compute-2 ceph-mon[77138]: pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:38 compute-2 sudo[147751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfbdvcqowxhrsrqpglnycvjhdqfgyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401078.079273-466-132804106661664/AnsiballZ_file.py'
Nov 29 07:24:38 compute-2 sudo[147751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:38 compute-2 python3.9[147753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:38 compute-2 sudo[147751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:39 compute-2 sudo[147904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasfvnbfzamhclzeidatgumhvsiufdeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401078.8993132-466-278453932082559/AnsiballZ_file.py'
Nov 29 07:24:39 compute-2 sudo[147904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:39 compute-2 sudo[147907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:39 compute-2 sudo[147907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:39 compute-2 sudo[147907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:39 compute-2 python3.9[147906]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:39.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:39 compute-2 sudo[147904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:39 compute-2 ceph-mon[77138]: pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:39 compute-2 sudo[147932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:39 compute-2 sudo[147932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:39 compute-2 sudo[147932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:39 compute-2 sudo[148106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfrwkuxhclpyydvjzqdkuoaukpvoonae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401079.5709462-466-90401456342453/AnsiballZ_file.py'
Nov 29 07:24:39 compute-2 sudo[148106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:40 compute-2 python3.9[148108]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:40 compute-2 sudo[148106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:40 compute-2 sudo[148258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrklpdxyxcrdnulutejtovsvimrquhmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401080.2502725-466-132906693711474/AnsiballZ_file.py'
Nov 29 07:24:40 compute-2 sudo[148258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:40 compute-2 python3.9[148260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:40 compute-2 sudo[148258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:41 compute-2 sudo[148411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpliizjrrhryaqpokhedkhkwsnhgnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401081.2038372-619-78288124979729/AnsiballZ_command.py'
Nov 29 07:24:41 compute-2 sudo[148411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:41 compute-2 python3.9[148413]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:41 compute-2 sudo[148411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:43.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:44.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:24:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:48.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:48 compute-2 ceph-mon[77138]: pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:48 compute-2 python3.9[148568]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:24:49 compute-2 sudo[148730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnjsjgjczoeyqjutrntehvyqibueghef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401088.8306153-673-187564317499982/AnsiballZ_systemd_service.py'
Nov 29 07:24:49 compute-2 sudo[148730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:49 compute-2 podman[148693]: 2025-11-29 07:24:49.276973528 +0000 UTC m=+0.128969883 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 07:24:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:49 compute-2 python3.9[148734]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:24:49 compute-2 systemd[1]: Reloading.
Nov 29 07:24:49 compute-2 systemd-rc-local-generator[148772]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:49 compute-2 systemd-sysv-generator[148776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:49 compute-2 sudo[148730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:49 compute-2 ceph-mon[77138]: pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:49 compute-2 ceph-mon[77138]: pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:49 compute-2 ceph-mon[77138]: pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:50 compute-2 sudo[148929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgseqwggfyuprtwlingufdxkxickrdzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401090.2152874-697-46999377332378/AnsiballZ_command.py'
Nov 29 07:24:50 compute-2 sudo[148929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:50 compute-2 python3.9[148931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:50 compute-2 sudo[148929]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:50 compute-2 ceph-mon[77138]: pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:51 compute-2 sudo[149083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmymdjcohkkrynshkqdzrpbgmtxcumya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401090.9309685-697-10579229961793/AnsiballZ_command.py'
Nov 29 07:24:51 compute-2 sudo[149083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:51 compute-2 python3.9[149085]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:51 compute-2 sudo[149083]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:52 compute-2 sudo[149236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngjlpgwkseyhjyfduodmshzxiniumhbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401091.7179148-697-252792319100066/AnsiballZ_command.py'
Nov 29 07:24:52 compute-2 sudo[149236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:24:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:24:52 compute-2 python3.9[149238]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:52 compute-2 sudo[149236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:52 compute-2 sudo[149389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lewbnookmxmlwhghcsiupjyypbgizqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401092.435948-697-35603118434732/AnsiballZ_command.py'
Nov 29 07:24:52 compute-2 sudo[149389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:52 compute-2 ceph-mon[77138]: pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:52 compute-2 python3.9[149391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:52 compute-2 sudo[149389]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:53.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:53 compute-2 sudo[149543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwvnqmzkktmusygilyhvodloqejkiwsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401093.096902-697-60116442725613/AnsiballZ_command.py'
Nov 29 07:24:53 compute-2 sudo[149543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:53 compute-2 python3.9[149545]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:53 compute-2 sudo[149543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:53 compute-2 ceph-mon[77138]: pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:54 compute-2 sudo[149696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ralbvkshhveszmltwvjwtkocuhrpvycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401093.9887593-697-136060336900800/AnsiballZ_command.py'
Nov 29 07:24:54 compute-2 sudo[149696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:54 compute-2 python3.9[149698]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:54 compute-2 sudo[149696]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:54 compute-2 sudo[149849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkajqsteztkhpaihouqhtjydwaxboxmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401094.6973708-697-264609560200148/AnsiballZ_command.py'
Nov 29 07:24:54 compute-2 sudo[149849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:24:55 compute-2 python3.9[149852]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:55 compute-2 sudo[149849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:55 compute-2 ceph-mon[77138]: pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:56 compute-2 sudo[150003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqsmbfejguqrulppcjouavhgqxltnnts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401095.938141-859-227699405441989/AnsiballZ_getent.py'
Nov 29 07:24:56 compute-2 sudo[150003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:56 compute-2 python3.9[150005]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 07:24:56 compute-2 sudo[150003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:57 compute-2 sudo[150157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pipjzbgicyvieboicemiarseqhkqhlyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401096.884111-882-139578281299768/AnsiballZ_group.py'
Nov 29 07:24:57 compute-2 sudo[150157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:57 compute-2 python3.9[150159]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:24:57 compute-2 groupadd[150160]: group added to /etc/group: name=libvirt, GID=42473
Nov 29 07:24:57 compute-2 groupadd[150160]: group added to /etc/gshadow: name=libvirt
Nov 29 07:24:57 compute-2 groupadd[150160]: new group: name=libvirt, GID=42473
Nov 29 07:24:57 compute-2 ceph-mon[77138]: pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:24:57 compute-2 sudo[150157]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:24:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:24:58 compute-2 sudo[150315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhqmkegimdydbqjwqxpjrbqwafsvstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401098.1479077-906-231896986269324/AnsiballZ_user.py'
Nov 29 07:24:58 compute-2 sudo[150315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:58 compute-2 python3.9[150317]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:24:58 compute-2 useradd[150319]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:24:58 compute-2 sudo[150315]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:24:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:24:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:59.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:24:59 compute-2 sudo[150374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:59 compute-2 sudo[150374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:59 compute-2 sudo[150374]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:59 compute-2 sudo[150428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:59 compute-2 sudo[150428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:59 compute-2 sudo[150428]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:59 compute-2 sudo[150526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txslelfvxxxmzhupggbkxidrvhgeaqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401099.5017183-939-227084093346252/AnsiballZ_setup.py'
Nov 29 07:24:59 compute-2 sudo[150526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:00 compute-2 python3.9[150528]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:25:00 compute-2 ceph-mon[77138]: pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:00 compute-2 sudo[150526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:00 compute-2 sudo[150610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhuzeglwqatonbqgcsbxtdkjovxyrxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401099.5017183-939-227084093346252/AnsiballZ_dnf.py'
Nov 29 07:25:00 compute-2 sudo[150610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:01 compute-2 python3.9[150612]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:25:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:01.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:02 compute-2 ceph-mon[77138]: pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:02 compute-2 podman[150617]: 2025-11-29 07:25:02.711168961 +0000 UTC m=+0.090135392 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:25:03.265 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:25:03.266 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:25:03.267 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:25:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:03.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:03 compute-2 ceph-mon[77138]: pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:04.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:06 compute-2 ceph-mon[77138]: pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:07 compute-2 ceph-mon[77138]: pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:07.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:09.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:09 compute-2 ceph-mon[77138]: pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:10.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:11.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:13 compute-2 ceph-mon[77138]: pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:14 compute-2 ceph-mon[77138]: pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:15.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:15 compute-2 ceph-mon[77138]: pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:17.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:18 compute-2 ceph-mon[77138]: pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:19.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:19 compute-2 ceph-mon[77138]: pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:19 compute-2 sudo[150840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:19 compute-2 sudo[150840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:19 compute-2 sudo[150840]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:19 compute-2 podman[150829]: 2025-11-29 07:25:19.727747533 +0000 UTC m=+0.116195351 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:25:19 compute-2 sudo[150882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:19 compute-2 sudo[150882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:19 compute-2 sudo[150882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:22.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:22 compute-2 ceph-mon[77138]: pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:23 compute-2 ceph-mon[77138]: pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:25.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:26 compute-2 ceph-mon[77138]: pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:29.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:29 compute-2 ceph-mon[77138]: pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:29 compute-2 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:25:29 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:25:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:30 compute-2 ceph-mon[77138]: pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:30.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:31.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:32.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:33 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 07:25:33 compute-2 podman[150925]: 2025-11-29 07:25:33.714403474 +0000 UTC m=+0.076651699 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:25:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:35 compute-2 sudo[150943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:35 compute-2 sudo[150943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:35 compute-2 sudo[150943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:35 compute-2 sudo[150969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:25:35 compute-2 sudo[150969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:35 compute-2 sudo[150969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:35 compute-2 sudo[150994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:35 compute-2 sudo[150994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:35 compute-2 sudo[150994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:35 compute-2 sudo[151019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:25:35 compute-2 sudo[151019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:35 compute-2 sudo[151019]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:37.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:25:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:39.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:39 compute-2 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:25:39 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:25:39 compute-2 sudo[151081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:39 compute-2 sudo[151081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:39 compute-2 sudo[151081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:40 compute-2 sudo[151106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:40 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 07:25:40 compute-2 sudo[151106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:40 compute-2 sudo[151106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:41 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:25:41 compute-2 ceph-mon[77138]: paxos.1).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Nov 29 07:25:41 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:25:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:41.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:42 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:25:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:43.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:25:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:25:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:45.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:25:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:47 compute-2 ceph-mon[77138]: pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:47 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:25:47 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:25:47 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:25:47 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:25:47 compute-2 ceph-mon[77138]: pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:47 compute-2 ceph-mon[77138]: pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:47 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:25:47 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:25:47 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:25:47 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 16m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:25:47 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:25:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:47.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:25:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:49.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:25:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:50.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:50 compute-2 podman[151137]: 2025-11-29 07:25:50.729838899 +0000 UTC m=+0.113134576 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 07:25:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:51.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:52.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:25:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:25:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:54.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:54 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:25:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:25:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:25:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:25:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:56.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:25:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:57.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:25:57 compute-2 ceph-mon[77138]: pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:25:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:25:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:25:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:25:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:58.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:58 compute-2 ceph-mon[77138]: pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:58 compute-2 ceph-mon[77138]: pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:58 compute-2 ceph-mon[77138]: pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:58 compute-2 ceph-mon[77138]: pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:58 compute-2 ceph-mon[77138]: pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:25:59 compute-2 sshd-session[154750]: Invalid user sol from 45.148.10.240 port 33682
Nov 29 07:25:59 compute-2 sshd-session[154750]: Connection closed by invalid user sol 45.148.10.240 port 33682 [preauth]
Nov 29 07:25:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:25:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:25:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:25:59 compute-2 ceph-mon[77138]: pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:00 compute-2 sudo[155608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:00 compute-2 sudo[155608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:00 compute-2 sudo[155608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:00 compute-2 sudo[155678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:00 compute-2 sudo[155678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:00 compute-2 sudo[155678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:01 compute-2 ceph-mon[77138]: pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:26:03.267 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:26:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:26:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:26:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:03.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:04 compute-2 ceph-mon[77138]: pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:04 compute-2 podman[158401]: 2025-11-29 07:26:04.681730097 +0000 UTC m=+0.069005481 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:26:04 compute-2 sudo[158605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:04 compute-2 sudo[158605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:04 compute-2 sudo[158605]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:05 compute-2 sudo[158678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:26:05 compute-2 sudo[158678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:05 compute-2 sudo[158678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:26:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:26:05 compute-2 ceph-mon[77138]: pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:07 compute-2 ceph-mon[77138]: pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:26:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:26:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:10 compute-2 ceph-mon[77138]: pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:11 compute-2 ceph-mon[77138]: pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:12.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:13.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000065s ======
Nov 29 07:26:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:14.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Nov 29 07:26:14 compute-2 ceph-mon[77138]: pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:15.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:15 compute-2 ceph-mon[77138]: pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:26:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:16.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:26:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:17.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:17 compute-2 ceph-mon[77138]: pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:19 compute-2 ceph-mon[77138]: pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:19.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:20 compute-2 sudo[167019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:20 compute-2 sudo[167019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:20 compute-2 sudo[167019]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:20 compute-2 sudo[167089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:20 compute-2 sudo[167089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:20 compute-2 sudo[167089]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:26:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:21.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:26:21 compute-2 podman[167900]: 2025-11-29 07:26:21.6958971 +0000 UTC m=+0.095139743 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 07:26:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:22.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:22 compute-2 ceph-mon[77138]: pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:23.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:24.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:24 compute-2 ceph-mon[77138]: pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:25.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:26 compute-2 ceph-mon[77138]: pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:26.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:27.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:27 compute-2 ceph-mon[77138]: pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:28.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:30 compute-2 ceph-mon[77138]: pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:30.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:26:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:26:35 compute-2 ceph-mon[77138]: pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:35 compute-2 ceph-mon[77138]: pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:35 compute-2 podman[168183]: 2025-11-29 07:26:35.68925836 +0000 UTC m=+0.092669445 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:26:36 compute-2 ceph-mon[77138]: pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:36.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:38.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:38 compute-2 ceph-mon[77138]: pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:39.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:40 compute-2 sudo[168209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:40 compute-2 sudo[168209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:40 compute-2 sudo[168209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:40 compute-2 sudo[168234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:40 compute-2 sudo[168234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:40 compute-2 sudo[168234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:41.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:42.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:42 compute-2 ceph-mon[77138]: pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:44.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:44 compute-2 ceph-mon[77138]: pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:45 compute-2 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:26:45 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:26:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:46.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:47.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:47 compute-2 ceph-mon[77138]: pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:47 compute-2 ceph-mon[77138]: pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:48.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:48 compute-2 groupadd[168271]: group added to /etc/group: name=dnsmasq, GID=992
Nov 29 07:26:48 compute-2 groupadd[168271]: group added to /etc/gshadow: name=dnsmasq
Nov 29 07:26:48 compute-2 groupadd[168271]: new group: name=dnsmasq, GID=992
Nov 29 07:26:49 compute-2 useradd[168278]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 29 07:26:49 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 07:26:49 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 07:26:49 compute-2 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Nov 29 07:26:49 compute-2 ceph-mon[77138]: pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:50 compute-2 ceph-mon[77138]: pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:52.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:52 compute-2 podman[168293]: 2025-11-29 07:26:52.442212497 +0000 UTC m=+0.215415936 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 07:26:52 compute-2 groupadd[168294]: group added to /etc/group: name=clevis, GID=991
Nov 29 07:26:52 compute-2 groupadd[168294]: group added to /etc/gshadow: name=clevis
Nov 29 07:26:52 compute-2 groupadd[168294]: new group: name=clevis, GID=991
Nov 29 07:26:52 compute-2 useradd[168326]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 29 07:26:53 compute-2 usermod[168337]: add 'clevis' to group 'tss'
Nov 29 07:26:53 compute-2 usermod[168337]: add 'clevis' to shadow group 'tss'
Nov 29 07:26:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:53.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:54.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:56 compute-2 ceph-mon[77138]: pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:57 compute-2 ceph-mon[77138]: pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:57 compute-2 ceph-mon[77138]: pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:26:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:26:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:26:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:58.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:26:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:26:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:26:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:26:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:00 compute-2 polkitd[43483]: Reloading rules
Nov 29 07:27:00 compute-2 polkitd[43483]: Collecting garbage unconditionally...
Nov 29 07:27:00 compute-2 polkitd[43483]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:27:00 compute-2 polkitd[43483]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:27:00 compute-2 polkitd[43483]: Finished loading, compiling and executing 3 rules
Nov 29 07:27:00 compute-2 polkitd[43483]: Reloading rules
Nov 29 07:27:00 compute-2 polkitd[43483]: Collecting garbage unconditionally...
Nov 29 07:27:00 compute-2 polkitd[43483]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:27:00 compute-2 polkitd[43483]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:27:00 compute-2 polkitd[43483]: Finished loading, compiling and executing 3 rules
Nov 29 07:27:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:27:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:27:00 compute-2 sudo[168387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:00 compute-2 sudo[168387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:00 compute-2 sudo[168387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:00 compute-2 sudo[168428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:00 compute-2 sudo[168428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:00 compute-2 sudo[168428]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:01 compute-2 ceph-mon[77138]: pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:01.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:02 compute-2 groupadd[168578]: group added to /etc/group: name=ceph, GID=167
Nov 29 07:27:02 compute-2 groupadd[168578]: group added to /etc/gshadow: name=ceph
Nov 29 07:27:02 compute-2 groupadd[168578]: new group: name=ceph, GID=167
Nov 29 07:27:02 compute-2 useradd[168584]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 29 07:27:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:02.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:27:03.268 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:27:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:27:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:27:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:03.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:03 compute-2 ceph-mon[77138]: pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:03 compute-2 ceph-mon[77138]: pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:05 compute-2 sudo[168740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:05 compute-2 sudo[168740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:05 compute-2 sudo[168740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:05 compute-2 sudo[168796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:05 compute-2 sudo[168796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:05 compute-2 sudo[168796]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:05 compute-2 sudo[168860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:05 compute-2 sudo[168860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:05 compute-2 sudo[168860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:05 compute-2 sudo[168916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:27:05 compute-2 sudo[168916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:05.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:05 compute-2 sudo[168916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:06 compute-2 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 07:27:06 compute-2 sshd[1009]: Received signal 15; terminating.
Nov 29 07:27:06 compute-2 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 07:27:06 compute-2 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 07:27:06 compute-2 systemd[1]: sshd.service: Consumed 2.642s CPU time, read 564.0K from disk, written 4.0K to disk.
Nov 29 07:27:06 compute-2 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 07:27:06 compute-2 systemd[1]: Stopping sshd-keygen.target...
Nov 29 07:27:06 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:27:06 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:27:06 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:27:06 compute-2 systemd[1]: Reached target sshd-keygen.target.
Nov 29 07:27:06 compute-2 systemd[1]: Starting OpenSSH server daemon...
Nov 29 07:27:06 compute-2 sshd[169349]: Server listening on 0.0.0.0 port 22.
Nov 29 07:27:06 compute-2 sshd[169349]: Server listening on :: port 22.
Nov 29 07:27:06 compute-2 systemd[1]: Started OpenSSH server daemon.
Nov 29 07:27:06 compute-2 podman[169341]: 2025-11-29 07:27:06.308242168 +0000 UTC m=+0.074324800 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:27:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:07 compute-2 ceph-mon[77138]: pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:07.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:27:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:08.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:27:08 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 07:27:09 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:27:09 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:27:09 compute-2 ceph-mon[77138]: pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:09 compute-2 ceph-mon[77138]: pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:09 compute-2 systemd[1]: Reloading.
Nov 29 07:27:09 compute-2 systemd-rc-local-generator[169614]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:09 compute-2 systemd-sysv-generator[169618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:09 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:27:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:11.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:12 compute-2 ceph-mon[77138]: pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:13 compute-2 sudo[150610]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:13 compute-2 ceph-mon[77138]: pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:13 compute-2 ceph-mon[77138]: pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:13.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:14.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:27:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.521342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235521436, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1754, "num_deletes": 252, "total_data_size": 4260379, "memory_usage": 4323984, "flush_reason": "Manual Compaction"}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235542499, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2788969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12494, "largest_seqno": 14243, "table_properties": {"data_size": 2781720, "index_size": 4256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13972, "raw_average_key_size": 18, "raw_value_size": 2767247, "raw_average_value_size": 3679, "num_data_blocks": 192, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401032, "oldest_key_time": 1764401032, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 21342 microseconds, and 8476 cpu microseconds.
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.542652) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2788969 bytes OK
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.542726) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.545698) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.545764) EVENT_LOG_v1 {"time_micros": 1764401235545742, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.545805) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4252565, prev total WAL file size 4252565, number of live WAL files 2.
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.548371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2723KB)], [24(9043KB)]
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235548474, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12049382, "oldest_snapshot_seqno": -1}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4337 keys, 11506658 bytes, temperature: kUnknown
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235684428, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11506658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11471704, "index_size": 23004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 106550, "raw_average_key_size": 24, "raw_value_size": 11387397, "raw_average_value_size": 2625, "num_data_blocks": 979, "num_entries": 4337, "num_filter_entries": 4337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.684733) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11506658 bytes
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.685929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 84.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.4) write-amplify(4.1) OK, records in: 4859, records dropped: 522 output_compression: NoCompression
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.685951) EVENT_LOG_v1 {"time_micros": 1764401235685940, "job": 12, "event": "compaction_finished", "compaction_time_micros": 136081, "compaction_time_cpu_micros": 56277, "output_level": 6, "num_output_files": 1, "total_output_size": 11506658, "num_input_records": 4859, "num_output_records": 4337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235686695, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235688227, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.548184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.688291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.688297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.688299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.688301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:27:15.688302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:27:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:16 compute-2 ceph-mon[77138]: pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:16.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:17.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:17 compute-2 ceph-mon[77138]: pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:19.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:19 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:27:19 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:27:19 compute-2 systemd[1]: man-db-cache-update.service: Consumed 13.012s CPU time.
Nov 29 07:27:19 compute-2 systemd[1]: run-r0855d407161b44df82818ae13e08798e.service: Deactivated successfully.
Nov 29 07:27:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:20 compute-2 sudo[178050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:20 compute-2 sudo[178050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:20 compute-2 sudo[178050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:20 compute-2 sudo[178075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:20 compute-2 sudo[178075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:20 compute-2 sudo[178075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:21 compute-2 ceph-mon[77138]: pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:22 compute-2 podman[178101]: 2025-11-29 07:27:22.749114701 +0000 UTC m=+0.137966417 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:27:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:24 compute-2 ceph-mon[77138]: pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:24.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:25 compute-2 ceph-mon[77138]: pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:26 compute-2 ceph-mon[77138]: pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:26.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:26 compute-2 sudo[178204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:26 compute-2 sudo[178204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:26 compute-2 sudo[178204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:26 compute-2 sudo[178235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:27:26 compute-2 sudo[178235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:26 compute-2 sudo[178235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:26 compute-2 sudo[178304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiowizxzmtauzchzzwrqcvixnjjlrmbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401246.153767-975-273684716672096/AnsiballZ_systemd.py'
Nov 29 07:27:26 compute-2 sudo[178304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:27 compute-2 python3.9[178306]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:27:27 compute-2 systemd[1]: Reloading.
Nov 29 07:27:27 compute-2 systemd-rc-local-generator[178337]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:27 compute-2 systemd-sysv-generator[178341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:27:27 compute-2 ceph-mon[77138]: pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:27 compute-2 sudo[178304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:28 compute-2 sudo[178495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphryblcvefcdzuopxlyshtqzyblfdvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401247.7130916-975-260912667280895/AnsiballZ_systemd.py'
Nov 29 07:27:28 compute-2 sudo[178495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:28 compute-2 python3.9[178497]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:27:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:28.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:28 compute-2 systemd[1]: Reloading.
Nov 29 07:27:28 compute-2 systemd-rc-local-generator[178524]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:28 compute-2 systemd-sysv-generator[178529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:28 compute-2 sudo[178495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:29 compute-2 sudo[178686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmvdudlkgjsbzpadjlnxqzcmpqykcyqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401248.984834-975-170673213543114/AnsiballZ_systemd.py'
Nov 29 07:27:29 compute-2 sudo[178686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:29 compute-2 python3.9[178688]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:27:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:29 compute-2 systemd[1]: Reloading.
Nov 29 07:27:29 compute-2 systemd-sysv-generator[178717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:29 compute-2 ceph-mon[77138]: pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:29 compute-2 systemd-rc-local-generator[178713]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:30 compute-2 sudo[178686]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:30.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:30 compute-2 sudo[178876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npenpmbqdquvsrwmhwytxtjzziudmbbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401250.2873428-975-63403402905505/AnsiballZ_systemd.py'
Nov 29 07:27:30 compute-2 sudo[178876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:30 compute-2 python3.9[178878]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:27:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:31.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:32 compute-2 systemd[1]: Reloading.
Nov 29 07:27:32 compute-2 systemd-sysv-generator[178909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:32 compute-2 systemd-rc-local-generator[178905]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:32.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:32 compute-2 sudo[178876]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:32 compute-2 ceph-mon[77138]: pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:33 compute-2 sudo[179068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzrcnvjetxxvjwbywibesbqaupdidbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401252.7621822-1063-256349065855150/AnsiballZ_systemd.py'
Nov 29 07:27:33 compute-2 sudo[179068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:33 compute-2 python3.9[179070]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:33 compute-2 systemd[1]: Reloading.
Nov 29 07:27:33 compute-2 systemd-rc-local-generator[179100]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:33 compute-2 systemd-sysv-generator[179105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:33 compute-2 ceph-mon[77138]: pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:33.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:33 compute-2 sudo[179068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:34.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:34 compute-2 sudo[179258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fubgvxpkzlhpznijjjjroqttzxisudpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401254.1544456-1063-14577910480783/AnsiballZ_systemd.py'
Nov 29 07:27:34 compute-2 sudo[179258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:34 compute-2 python3.9[179260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:34 compute-2 systemd[1]: Reloading.
Nov 29 07:27:35 compute-2 systemd-rc-local-generator[179290]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:35 compute-2 systemd-sysv-generator[179294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:35 compute-2 sudo[179258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:35 compute-2 sudo[179449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqqlxhacgfidjkzhbodpzbsvlxxsauk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401255.5023077-1063-111451159457282/AnsiballZ_systemd.py'
Nov 29 07:27:35 compute-2 sudo[179449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:36 compute-2 python3.9[179451]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:36 compute-2 systemd[1]: Reloading.
Nov 29 07:27:36 compute-2 systemd-rc-local-generator[179481]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:36 compute-2 systemd-sysv-generator[179486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:36.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:36 compute-2 sudo[179449]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:36 compute-2 podman[179491]: 2025-11-29 07:27:36.66588455 +0000 UTC m=+0.084325819 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:27:36 compute-2 ceph-mon[77138]: pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:37 compute-2 sudo[179661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppfmgrgkwmkdduheebvxrkqfuyxktfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401256.780668-1063-69047952994296/AnsiballZ_systemd.py'
Nov 29 07:27:37 compute-2 sudo[179661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:37 compute-2 python3.9[179663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:38.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:38 compute-2 sudo[179661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:38 compute-2 ceph-mon[77138]: pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:39 compute-2 sudo[179817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmzexccdgvjmgtmphfijnzykhdrsfnqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401258.7065396-1063-264134921484121/AnsiballZ_systemd.py'
Nov 29 07:27:39 compute-2 sudo[179817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:39 compute-2 python3.9[179819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:39 compute-2 systemd[1]: Reloading.
Nov 29 07:27:39 compute-2 systemd-sysv-generator[179851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:39 compute-2 systemd-rc-local-generator[179848]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:39.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:39 compute-2 sudo[179817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:40 compute-2 ceph-mon[77138]: pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:41 compute-2 sudo[179958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:41 compute-2 sudo[179958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:41 compute-2 sudo[179958]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:41 compute-2 sudo[180008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:41 compute-2 sudo[180008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:41 compute-2 sudo[180008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:41 compute-2 sudo[180058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwjwarqibhiygcfneykslpnxuidfejik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401260.772038-1171-93169019451705/AnsiballZ_systemd.py'
Nov 29 07:27:41 compute-2 sudo[180058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:41 compute-2 python3.9[180061]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:27:41 compute-2 ceph-mon[77138]: pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:41 compute-2 systemd[1]: Reloading.
Nov 29 07:27:41 compute-2 systemd-rc-local-generator[180092]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:27:41 compute-2 systemd-sysv-generator[180095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:27:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:41 compute-2 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 07:27:41 compute-2 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 07:27:41 compute-2 sudo[180058]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:42 compute-2 sudo[180252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmfxbmizqklnludqejghdfxilzsctvnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401262.1769657-1195-71950346247027/AnsiballZ_systemd.py'
Nov 29 07:27:42 compute-2 sudo[180252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:42 compute-2 python3.9[180254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:42 compute-2 sudo[180252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:43 compute-2 sudo[180408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvseftzefmqoizxgbazublpswwbdttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401263.2517924-1195-212723653731308/AnsiballZ_systemd.py'
Nov 29 07:27:43 compute-2 sudo[180408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:43 compute-2 python3.9[180410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:43 compute-2 sudo[180408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:44 compute-2 ceph-mon[77138]: pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:44 compute-2 sudo[180563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzuldopxjnxockubwiggqpsefbumkacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401264.1060205-1195-13075422292302/AnsiballZ_systemd.py'
Nov 29 07:27:44 compute-2 sudo[180563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:44 compute-2 python3.9[180565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:44 compute-2 sudo[180563]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:45 compute-2 sudo[180719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abvkfceaiaqncimfpcvbeofybdophnmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401264.9859798-1195-46167771947014/AnsiballZ_systemd.py'
Nov 29 07:27:45 compute-2 sudo[180719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:45 compute-2 python3.9[180721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:45 compute-2 sudo[180719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:46 compute-2 sudo[180874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlnrjesglpdtycndrjnszbkntmebhsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401265.866665-1195-278446517477493/AnsiballZ_systemd.py'
Nov 29 07:27:46 compute-2 sudo[180874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:46 compute-2 python3.9[180876]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:46 compute-2 sudo[180874]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:47 compute-2 sudo[181030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnbmvpzdhyedclckykbjihfzztpkzrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401266.7423046-1195-94914179465309/AnsiballZ_systemd.py'
Nov 29 07:27:47 compute-2 sudo[181030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:47 compute-2 python3.9[181032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:47 compute-2 sudo[181030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:47.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:47 compute-2 ceph-mon[77138]: pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:48 compute-2 sudo[181185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbsifvkodeekqmnidjwpvtqbctfikcqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401267.6960728-1195-280392197962749/AnsiballZ_systemd.py'
Nov 29 07:27:48 compute-2 sudo[181185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:48 compute-2 python3.9[181187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:48 compute-2 sudo[181185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:49 compute-2 sudo[181341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dduvzhssxwaryyyrndoglxdpsjjqpkou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401268.7927363-1195-38256764212164/AnsiballZ_systemd.py'
Nov 29 07:27:49 compute-2 sudo[181341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:49 compute-2 python3.9[181343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:49 compute-2 sudo[181341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:50 compute-2 ceph-mon[77138]: pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:50 compute-2 sudo[181496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwnzmxeliuwvjiyregwwsrkvthjpedl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401269.826877-1195-141435894370477/AnsiballZ_systemd.py'
Nov 29 07:27:50 compute-2 sudo[181496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:50.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:50 compute-2 python3.9[181498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:50 compute-2 sudo[181496]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:51 compute-2 ceph-mon[77138]: pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:51 compute-2 sudo[181652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvgzafbhmybksvoozxnciddyunroeyir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401270.7957087-1195-7299864888906/AnsiballZ_systemd.py'
Nov 29 07:27:51 compute-2 sudo[181652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:51 compute-2 python3.9[181654]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:51 compute-2 sudo[181652]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:52 compute-2 ceph-mon[77138]: pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:52 compute-2 sudo[181807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lctwjbwlohjpcpwrawbxnrolljldlvmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401271.7467265-1195-72701004552553/AnsiballZ_systemd.py'
Nov 29 07:27:52 compute-2 sudo[181807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:52 compute-2 python3.9[181809]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:52.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:52 compute-2 sudo[181807]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-2 sudo[181972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urfszjrmvarjviagbdnugutwmvcwapzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401272.6872928-1195-173586839446448/AnsiballZ_systemd.py'
Nov 29 07:27:53 compute-2 sudo[181972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:53 compute-2 podman[181936]: 2025-11-29 07:27:53.111831217 +0000 UTC m=+0.099303116 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:27:53 compute-2 python3.9[181982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:53 compute-2 sudo[181972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:27:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:53.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:27:54 compute-2 sudo[182144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmlforjfvwizkkzblwillrkxgovldbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401273.7074668-1195-6930387520930/AnsiballZ_systemd.py'
Nov 29 07:27:54 compute-2 sudo[182144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:54 compute-2 python3.9[182146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:54 compute-2 sudo[182144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:27:54 compute-2 ceph-mon[77138]: pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:55 compute-2 sudo[182300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jihjrffrnbbcuimwxjrfdfrgwvkggknc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401274.6750193-1195-96798791824941/AnsiballZ_systemd.py'
Nov 29 07:27:55 compute-2 sudo[182300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:27:55 compute-2 python3.9[182302]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:27:55 compute-2 sudo[182300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:55.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:56 compute-2 ceph-mon[77138]: pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:27:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:27:57 compute-2 sshd-session[182330]: Invalid user solana from 45.148.10.240 port 38690
Nov 29 07:27:57 compute-2 sshd-session[182330]: Connection closed by invalid user solana 45.148.10.240 port 38690 [preauth]
Nov 29 07:27:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:57.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:58 compute-2 ceph-mon[77138]: pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:27:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:27:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:27:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:27:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:27:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:59.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:27:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:00 compute-2 ceph-mon[77138]: pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:00.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:01 compute-2 sudo[182335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:01 compute-2 sudo[182335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:01 compute-2 sudo[182335]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:01 compute-2 sudo[182360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:01 compute-2 sudo[182360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:01 compute-2 sudo[182360]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:01.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:02 compute-2 ceph-mon[77138]: pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:03 compute-2 sudo[182511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xptkclqrbhbxgifthzzdlydfmesmaypb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401282.884596-1501-25861168472469/AnsiballZ_file.py'
Nov 29 07:28:03 compute-2 sudo[182511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:28:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:28:03.269 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:28:03.270 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:28:03 compute-2 python3.9[182513]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:03 compute-2 sudo[182511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-2 ceph-mon[77138]: pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:03 compute-2 sudo[182663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmijvlqbcerunzcyckffcpxrtzjhlon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401283.6221433-1501-208295173603362/AnsiballZ_file.py'
Nov 29 07:28:03 compute-2 sudo[182663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:04 compute-2 python3.9[182665]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:04 compute-2 sudo[182663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:04 compute-2 sudo[182815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xokctueccakypfnyyahwasnivjktpaqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401284.3922884-1501-266082010689393/AnsiballZ_file.py'
Nov 29 07:28:04 compute-2 sudo[182815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:04 compute-2 python3.9[182817]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:04 compute-2 sudo[182815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-2 sudo[182968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whhuldekrjtkpoxvgcrteoukwhyqqvsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401285.1157253-1501-94721781325172/AnsiballZ_file.py'
Nov 29 07:28:05 compute-2 sudo[182968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:05 compute-2 python3.9[182970]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:05 compute-2 sudo[182968]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:05.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:05 compute-2 ceph-mon[77138]: pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:06 compute-2 sudo[183120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfiynefbckwkakuwnqznxcnjwguskgml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401285.8014863-1501-45168807046067/AnsiballZ_file.py'
Nov 29 07:28:06 compute-2 sudo[183120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:06 compute-2 python3.9[183122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:06 compute-2 sudo[183120]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:06.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:06 compute-2 sudo[183272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtkcsfhermwhguhkxggkkaacwahmsib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401286.4383297-1501-12494714565294/AnsiballZ_file.py'
Nov 29 07:28:06 compute-2 sudo[183272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:06 compute-2 podman[183274]: 2025-11-29 07:28:06.817069283 +0000 UTC m=+0.051595167 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:28:06 compute-2 python3.9[183275]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:28:06 compute-2 sudo[183272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:07.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:07 compute-2 ceph-mon[77138]: pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:08 compute-2 sudo[183444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfbuulwqiiatviqkavccyvaqgdsgkvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401287.616191-1630-112155678999386/AnsiballZ_stat.py'
Nov 29 07:28:08 compute-2 sudo[183444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:08 compute-2 python3.9[183446]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:08 compute-2 sudo[183444]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:08 compute-2 sudo[183569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikmnuwiszmohuyopajfpkgpgjrhfqqjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401287.616191-1630-112155678999386/AnsiballZ_copy.py'
Nov 29 07:28:08 compute-2 sudo[183569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:08 compute-2 python3.9[183571]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401287.616191-1630-112155678999386/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:09 compute-2 sudo[183569]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-2 sudo[183722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alhdeomnreherhpasamnoaodfpdwjthd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401289.1977658-1630-194589344126035/AnsiballZ_stat.py'
Nov 29 07:28:09 compute-2 sudo[183722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:09 compute-2 python3.9[183724]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:09 compute-2 sudo[183722]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:10 compute-2 sudo[183847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadavzhthqzoywbijhyotyzbkxlxthdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401289.1977658-1630-194589344126035/AnsiballZ_copy.py'
Nov 29 07:28:10 compute-2 sudo[183847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:10 compute-2 ceph-mon[77138]: pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:10 compute-2 python3.9[183849]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401289.1977658-1630-194589344126035/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:10 compute-2 sudo[183847]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:11 compute-2 sudo[183999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqitlkplkjpudzqnuxrioyvvurjtwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401290.5447066-1630-74511419467099/AnsiballZ_stat.py'
Nov 29 07:28:11 compute-2 sudo[183999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:11 compute-2 python3.9[184002]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:11 compute-2 sudo[183999]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:11 compute-2 ceph-mon[77138]: pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:11 compute-2 sudo[184125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eainhiavgrjloaogbaskgtfljwtzguln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401290.5447066-1630-74511419467099/AnsiballZ_copy.py'
Nov 29 07:28:11 compute-2 sudo[184125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:11.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:11 compute-2 python3.9[184127]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401290.5447066-1630-74511419467099/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:12 compute-2 sudo[184125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:28:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:28:12 compute-2 sudo[184277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvlocbwcujapmjnuecppqnvfnoqulrvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401292.2028444-1630-113451828642172/AnsiballZ_stat.py'
Nov 29 07:28:12 compute-2 sudo[184277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:12 compute-2 python3.9[184279]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:12 compute-2 sudo[184277]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:13 compute-2 sudo[184403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtlqyfzqgtixocbfkofwqsrlcaxlwxoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401292.2028444-1630-113451828642172/AnsiballZ_copy.py'
Nov 29 07:28:13 compute-2 sudo[184403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:13 compute-2 python3.9[184405]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401292.2028444-1630-113451828642172/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:13 compute-2 sudo[184403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:13.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:13 compute-2 sudo[184555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqmhbhvrkaulbmifbfkcncnojbfgjmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401293.646538-1630-116063872309673/AnsiballZ_stat.py'
Nov 29 07:28:13 compute-2 sudo[184555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:14 compute-2 python3.9[184557]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:14 compute-2 sudo[184555]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:14.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:14 compute-2 sudo[184680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqbdwhugcgcnyfctnozozqvwdjpiizmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401293.646538-1630-116063872309673/AnsiballZ_copy.py'
Nov 29 07:28:14 compute-2 sudo[184680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:14 compute-2 python3.9[184682]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401293.646538-1630-116063872309673/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:14 compute-2 sudo[184680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-2 ceph-mon[77138]: pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:15 compute-2 sudo[184833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwjgvctgkqrqirebetwqtkjoxasuasj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401295.0179021-1630-15088503867307/AnsiballZ_stat.py'
Nov 29 07:28:15 compute-2 sudo[184833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:15 compute-2 python3.9[184835]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:15 compute-2 sudo[184833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:15.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:15 compute-2 ceph-mon[77138]: pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:16 compute-2 sudo[184958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxtayvyqrsbpqxiftffqumdolhoyflh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401295.0179021-1630-15088503867307/AnsiballZ_copy.py'
Nov 29 07:28:16 compute-2 sudo[184958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:16 compute-2 python3.9[184960]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401295.0179021-1630-15088503867307/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:16 compute-2 sudo[184958]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:16.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:16 compute-2 sudo[185110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjbjlnpirokgvcfkgaakoiilcjdxxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401296.5415332-1630-43508372627122/AnsiballZ_stat.py'
Nov 29 07:28:16 compute-2 sudo[185110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:17 compute-2 python3.9[185112]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:17 compute-2 sudo[185110]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-2 sudo[185234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunaiqwsbnsqhtoltcsjutbuefcpkbgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401296.5415332-1630-43508372627122/AnsiballZ_copy.py'
Nov 29 07:28:17 compute-2 sudo[185234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:17.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:17 compute-2 ceph-mon[77138]: pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:17 compute-2 python3.9[185236]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.5415332-1630-43508372627122/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:17 compute-2 sudo[185234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:18 compute-2 sudo[185386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twtgclvrydtfikaznzsqyalfvtaoroyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401298.1111603-1630-206143125851674/AnsiballZ_stat.py'
Nov 29 07:28:18 compute-2 sudo[185386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:18 compute-2 python3.9[185388]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:19 compute-2 sudo[185386]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:19 compute-2 sudo[185512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njtlqbticewodpvjbvhgzzoxjwznowpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401298.1111603-1630-206143125851674/AnsiballZ_copy.py'
Nov 29 07:28:19 compute-2 sudo[185512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:19 compute-2 python3.9[185514]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401298.1111603-1630-206143125851674/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:19 compute-2 sudo[185512]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:19.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:19 compute-2 ceph-mon[77138]: pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:20 compute-2 sudo[185664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqzaiebqkdmjjdwhwnrqhbeeqcbcxhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401300.0044255-1969-244106656856280/AnsiballZ_command.py'
Nov 29 07:28:20 compute-2 sudo[185664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:20 compute-2 python3.9[185666]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 07:28:21 compute-2 sudo[185664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:21 compute-2 sudo[185693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:21 compute-2 sudo[185693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:21 compute-2 sudo[185693]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:21 compute-2 sudo[185722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:21 compute-2 sudo[185722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:21 compute-2 sudo[185722]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:21 compute-2 ceph-mon[77138]: pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:21 compute-2 sudo[185868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtmgryjqpphukebellytifwncqelfdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401301.5067787-1995-107517336323793/AnsiballZ_file.py'
Nov 29 07:28:21 compute-2 sudo[185868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:22 compute-2 python3.9[185870]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:22 compute-2 sudo[185868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:22 compute-2 sudo[186020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupufiirdrsgamwwarltljfhstdogjua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401302.4129612-1995-34601854018342/AnsiballZ_file.py'
Nov 29 07:28:22 compute-2 sudo[186020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:23 compute-2 python3.9[186022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:23 compute-2 sudo[186020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:23 compute-2 podman[186129]: 2025-11-29 07:28:23.723507993 +0000 UTC m=+0.123032782 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:28:23 compute-2 sudo[186193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cikjtslydpryysdmauxkhazihoqnckjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401303.2692149-1995-181471349828902/AnsiballZ_file.py'
Nov 29 07:28:23 compute-2 sudo[186193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:23 compute-2 python3.9[186201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:23 compute-2 sudo[186193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:23 compute-2 ceph-mon[77138]: pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:24 compute-2 sudo[186352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyadusncebwrcfjyrpzgfxpfpcdgjtlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401304.1386132-1995-194890200205779/AnsiballZ_file.py'
Nov 29 07:28:24 compute-2 sudo[186352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:24 compute-2 python3.9[186354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:24 compute-2 sudo[186352]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:25 compute-2 sudo[186505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhyosztkfmkrjzwzxcoelwrphyftaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401304.9924262-1995-225464099822331/AnsiballZ_file.py'
Nov 29 07:28:25 compute-2 sudo[186505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:25 compute-2 python3.9[186507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:25 compute-2 sudo[186505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:25.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:26 compute-2 sudo[186657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhzzwqzigecpdfycrxnogpazmnqvmug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401305.746876-1995-78460891537195/AnsiballZ_file.py'
Nov 29 07:28:26 compute-2 sudo[186657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:26 compute-2 python3.9[186659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:26 compute-2 sudo[186657]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:28:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:28:26 compute-2 sudo[186759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:26 compute-2 sudo[186759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:26 compute-2 sudo[186759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:26 compute-2 sudo[186811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:26 compute-2 sudo[186811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:26 compute-2 sudo[186857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkalzvcbcloxhffggjcdrhqtshgwlorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401306.5811062-1995-228466796376184/AnsiballZ_file.py'
Nov 29 07:28:26 compute-2 sudo[186811]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:26 compute-2 sudo[186857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:26 compute-2 ceph-mon[77138]: pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:27 compute-2 sudo[186862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:27 compute-2 sudo[186862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-2 sudo[186862]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-2 sudo[186888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:28:27 compute-2 sudo[186888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-2 python3.9[186861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:27 compute-2 sudo[186857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-2 sudo[186888]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-2 sudo[187093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qidppvszdwxuowkltbakkrcvxmqysmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401307.3613598-1995-74281769046296/AnsiballZ_file.py'
Nov 29 07:28:27 compute-2 sudo[187093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:27 compute-2 python3.9[187095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:28 compute-2 sudo[187093]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:28 compute-2 ceph-mon[77138]: pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:28.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:28 compute-2 sudo[187245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqbvdozlfsugfynjptapjoatcmrvpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401308.190844-1995-202981302265334/AnsiballZ_file.py'
Nov 29 07:28:28 compute-2 sudo[187245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:28 compute-2 python3.9[187247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:28 compute-2 sudo[187245]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:28:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:29 compute-2 sudo[187398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwqjpnjqxemnarumubyjviudwdxnbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401309.041904-1995-160872923176214/AnsiballZ_file.py'
Nov 29 07:28:29 compute-2 sudo[187398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:29 compute-2 python3.9[187400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:29 compute-2 sudo[187398]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:30 compute-2 sudo[187550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sghladqxxebhkhnwirminaytdrwcgbxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401309.8873067-1995-127722079517241/AnsiballZ_file.py'
Nov 29 07:28:30 compute-2 sudo[187550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:30 compute-2 python3.9[187552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:30 compute-2 sudo[187550]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:30 compute-2 sudo[187702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkgwfjrelxjttrhzrqiinjpwbmjjnqwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401310.627373-1995-139481307600419/AnsiballZ_file.py'
Nov 29 07:28:30 compute-2 sudo[187702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:31 compute-2 python3.9[187704]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:31 compute-2 sudo[187702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:31 compute-2 sudo[187855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucchpfaxkbppzpswoegeeveoidqdrqhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401311.3434966-1995-217386057537155/AnsiballZ_file.py'
Nov 29 07:28:31 compute-2 sudo[187855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:31.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:31 compute-2 python3.9[187857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:31 compute-2 sudo[187855]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:32 compute-2 sudo[188007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yytwnpjwdbfcoswujhzvsfbwqdluxday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401312.0847366-1995-269374169037895/AnsiballZ_file.py'
Nov 29 07:28:32 compute-2 sudo[188007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:32 compute-2 python3.9[188009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:32 compute-2 sudo[188007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-2 sudo[188160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltxwpqswjednrxecvtstqktldhsohko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401312.9760246-2292-153312584757254/AnsiballZ_stat.py'
Nov 29 07:28:33 compute-2 sudo[188160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:33 compute-2 python3.9[188162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:33 compute-2 sudo[188160]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:33 compute-2 sudo[188283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfanntqljolvvyxnjhtkrthbxepryfin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401312.9760246-2292-153312584757254/AnsiballZ_copy.py'
Nov 29 07:28:33 compute-2 sudo[188283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:34 compute-2 python3.9[188285]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401312.9760246-2292-153312584757254/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:34 compute-2 sudo[188283]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:28:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:35 compute-2 sudo[188436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfbqzbemvaqdupmznlwgmxztzvlygrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401314.733728-2292-156141590766893/AnsiballZ_stat.py'
Nov 29 07:28:35 compute-2 sudo[188436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:35 compute-2 python3.9[188438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:35 compute-2 sudo[188436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:35 compute-2 sudo[188559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlwsoltliulfpumzkaksdniiavuwkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401314.733728-2292-156141590766893/AnsiballZ_copy.py'
Nov 29 07:28:35 compute-2 sudo[188559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:36 compute-2 python3.9[188561]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401314.733728-2292-156141590766893/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:36 compute-2 sudo[188559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:36 compute-2 sudo[188711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galtyhlekrykkfcckuszitpuqohlthxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401316.4080591-2292-149029108322642/AnsiballZ_stat.py'
Nov 29 07:28:36 compute-2 sudo[188711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:37 compute-2 python3.9[188713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:37 compute-2 sudo[188711]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:37 compute-2 ceph-mon[77138]: pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:37 compute-2 sudo[188841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oofrdgyntoflhagwzuzravxsqjmsdmpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401316.4080591-2292-149029108322642/AnsiballZ_copy.py'
Nov 29 07:28:37 compute-2 sudo[188841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:37 compute-2 podman[188809]: 2025-11-29 07:28:37.518248075 +0000 UTC m=+0.095181739 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:28:37 compute-2 python3.9[188848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401316.4080591-2292-149029108322642/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:37 compute-2 sudo[188841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:38 compute-2 sudo[189005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwhsclomzrbyktdjupdcbhwxwmuepdpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401317.8657508-2292-136776067003068/AnsiballZ_stat.py'
Nov 29 07:28:38 compute-2 sudo[189005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:38 compute-2 python3.9[189007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:38 compute-2 sudo[189005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:38 compute-2 sudo[189128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjosoxmownvarnlqaitckzyywmareoei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401317.8657508-2292-136776067003068/AnsiballZ_copy.py'
Nov 29 07:28:38 compute-2 sudo[189128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:39 compute-2 python3.9[189130]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401317.8657508-2292-136776067003068/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:39 compute-2 sudo[189128]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:39 compute-2 sudo[189281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djrxgcbhpamflexdtmzttxawpqebdnww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401319.2845411-2292-6499158098963/AnsiballZ_stat.py'
Nov 29 07:28:39 compute-2 sudo[189281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:39 compute-2 python3.9[189283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:39 compute-2 sudo[189281]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:40 compute-2 sudo[189404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knolnyekzkldhhbnzxhrmttzrxnyyupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401319.2845411-2292-6499158098963/AnsiballZ_copy.py'
Nov 29 07:28:40 compute-2 sudo[189404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:40 compute-2 python3.9[189406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401319.2845411-2292-6499158098963/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:40 compute-2 sudo[189404]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:41 compute-2 sudo[189557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsayhosksqpegyqrcedwawpcjwawqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401320.6945794-2292-116792960821529/AnsiballZ_stat.py'
Nov 29 07:28:41 compute-2 sudo[189557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:41 compute-2 python3.9[189559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:41 compute-2 sudo[189557]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:41 compute-2 sudo[189607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:41 compute-2 sudo[189607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:41 compute-2 sudo[189607]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:41 compute-2 sudo[189639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:41 compute-2 sudo[189639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:41 compute-2 sudo[189639]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:41 compute-2 sudo[189730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobhvzobspibeqeiqkpqlbywnicefnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401320.6945794-2292-116792960821529/AnsiballZ_copy.py'
Nov 29 07:28:41 compute-2 sudo[189730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:42 compute-2 python3.9[189732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401320.6945794-2292-116792960821529/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:42 compute-2 sudo[189730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:42.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:42 compute-2 sudo[189882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbohudtuyytiqulkpaeqixgxqcstjkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401322.2357345-2292-134736792347292/AnsiballZ_stat.py'
Nov 29 07:28:42 compute-2 sudo[189882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:42 compute-2 python3.9[189884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:42 compute-2 sudo[189882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:43 compute-2 sudo[190006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djcimmquqiuyttffdmcjmesuhkdfgtwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401322.2357345-2292-134736792347292/AnsiballZ_copy.py'
Nov 29 07:28:43 compute-2 sudo[190006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:43 compute-2 python3.9[190008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401322.2357345-2292-134736792347292/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:43 compute-2 sudo[190006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:43 compute-2 sudo[190158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjnvgmlnbmxjqpsjriiwzmsavuorsaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401323.5507479-2292-198449770590429/AnsiballZ_stat.py'
Nov 29 07:28:43 compute-2 sudo[190158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:43.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:44 compute-2 python3.9[190160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:44 compute-2 sudo[190158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:44.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:44 compute-2 sudo[190281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhengrglfktpotiwlnttridcvkidsbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401323.5507479-2292-198449770590429/AnsiballZ_copy.py'
Nov 29 07:28:44 compute-2 sudo[190281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:44 compute-2 python3.9[190283]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401323.5507479-2292-198449770590429/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:44 compute-2 sudo[190281]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:45 compute-2 sudo[190434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmbdhehcfuxmrtzgtadzeifejigjhtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401325.0425973-2292-184841630307677/AnsiballZ_stat.py'
Nov 29 07:28:45 compute-2 sudo[190434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:45 compute-2 python3.9[190436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:45 compute-2 sudo[190434]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:45 compute-2 ceph-mon[77138]: pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:45 compute-2 ceph-mon[77138]: pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:45 compute-2 ceph-mon[77138]: pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:45 compute-2 ceph-mon[77138]: pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:45 compute-2 ceph-mon[77138]: pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:46 compute-2 sudo[190557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubwkzolbwzqewzerigufaaxlmqojjype ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401325.0425973-2292-184841630307677/AnsiballZ_copy.py'
Nov 29 07:28:46 compute-2 sudo[190557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:46 compute-2 python3.9[190559]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401325.0425973-2292-184841630307677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:46 compute-2 sudo[190557]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.012000385s ======
Nov 29 07:28:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.012000385s
Nov 29 07:28:46 compute-2 sudo[190709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzuwmbsaionaqjfmyzyfepaajnrfdzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401326.4063668-2292-242810675139386/AnsiballZ_stat.py'
Nov 29 07:28:46 compute-2 sudo[190709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:47 compute-2 python3.9[190711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:47 compute-2 sudo[190709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:47 compute-2 ceph-mon[77138]: pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:47 compute-2 ceph-mon[77138]: pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:47 compute-2 ceph-mon[77138]: pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:47 compute-2 sudo[190833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzaymetzxrmarqnuwgnkxvondkfkudb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401326.4063668-2292-242810675139386/AnsiballZ_copy.py'
Nov 29 07:28:47 compute-2 sudo[190833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:47 compute-2 python3.9[190835]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401326.4063668-2292-242810675139386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:47 compute-2 sudo[190833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:48 compute-2 sudo[190985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqebedzgrgjdzwbsnrafxcscamndlrwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401328.0371473-2292-1556310737486/AnsiballZ_stat.py'
Nov 29 07:28:48 compute-2 sudo[190985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:48 compute-2 ceph-mon[77138]: pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:48.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:48 compute-2 python3.9[190987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:48 compute-2 sudo[190985]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:49 compute-2 sudo[191109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijsnfrbwpizpsddtbcffrhrdditfvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401328.0371473-2292-1556310737486/AnsiballZ_copy.py'
Nov 29 07:28:49 compute-2 sudo[191109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:49 compute-2 python3.9[191111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401328.0371473-2292-1556310737486/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:49 compute-2 sudo[191109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:49 compute-2 sudo[191261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hanctgeuarmwmweiuwbucnxnpoizyjhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401329.5063925-2292-280205323852813/AnsiballZ_stat.py'
Nov 29 07:28:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:49 compute-2 sudo[191261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:49.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:50 compute-2 python3.9[191263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:50 compute-2 sudo[191261]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:50 compute-2 sudo[191384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mscblhfrguhrdevrrmwlyzcglnqveqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401329.5063925-2292-280205323852813/AnsiballZ_copy.py'
Nov 29 07:28:50 compute-2 sudo[191384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:50.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:50 compute-2 python3.9[191386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401329.5063925-2292-280205323852813/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:50 compute-2 sudo[191384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:51 compute-2 sudo[191538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvclkeonnmxvmefceavktlphfysqajnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401330.9076486-2292-168677374905475/AnsiballZ_stat.py'
Nov 29 07:28:51 compute-2 sudo[191538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:51 compute-2 python3.9[191540]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:51 compute-2 sudo[191538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:51 compute-2 sudo[191661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfnehylhmbyivgfmhxevnlfhudwywpoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401330.9076486-2292-168677374905475/AnsiballZ_copy.py'
Nov 29 07:28:51 compute-2 sudo[191661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:52 compute-2 python3.9[191663]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401330.9076486-2292-168677374905475/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:52 compute-2 sudo[191661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:52 compute-2 ceph-mon[77138]: pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:52.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:52 compute-2 sudo[191813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eznyjlzzqdiixrqiftimyhesizakkmby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401332.2514813-2292-9637865751241/AnsiballZ_stat.py'
Nov 29 07:28:52 compute-2 sudo[191813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:52 compute-2 python3.9[191815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:28:52 compute-2 sudo[191813]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-2 sudo[191937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awiebwwtuzbsrbgfiwmhrczmhexhuyfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401332.2514813-2292-9637865751241/AnsiballZ_copy.py'
Nov 29 07:28:53 compute-2 sudo[191937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:53 compute-2 python3.9[191939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401332.2514813-2292-9637865751241/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:28:53 compute-2 sudo[191937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:54.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:54 compute-2 podman[191964]: 2025-11-29 07:28:54.765850125 +0000 UTC m=+0.155400268 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:28:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:28:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:28:56 compute-2 python3.9[192116]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:28:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:56.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:28:57 compute-2 ceph-mon[77138]: pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:28:57 compute-2 sudo[192270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyjjbmaypsjawfohsocyzxogizxldjjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401337.1334279-2911-175895041127836/AnsiballZ_seboolean.py'
Nov 29 07:28:57 compute-2 sudo[192270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:57 compute-2 python3.9[192272]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 07:28:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:28:59 compute-2 sudo[192270]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:28:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:28:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:28:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:00 compute-2 ceph-mon[77138]: pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:00 compute-2 ceph-mon[77138]: pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:00 compute-2 ceph-mon[77138]: pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:01 compute-2 sudo[192321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:01 compute-2 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 07:29:01 compute-2 sudo[192321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:01 compute-2 sudo[192321]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:01 compute-2 sudo[192380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:01 compute-2 sudo[192380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:01 compute-2 sudo[192380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:01.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:02 compute-2 sudo[192478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhmgfmahbyfbbplrmjpgudzhmcukgaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401341.7806478-2934-181377389020069/AnsiballZ_copy.py'
Nov 29 07:29:02 compute-2 sudo[192478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:02 compute-2 ceph-mon[77138]: pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:02 compute-2 ceph-mon[77138]: pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:02 compute-2 python3.9[192480]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:02 compute-2 sudo[192478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:02 compute-2 sudo[192630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaxsupjxjapiciayhsnevqncygjkmiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401342.4545941-2934-172197695392186/AnsiballZ_copy.py'
Nov 29 07:29:02 compute-2 sudo[192630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:02 compute-2 python3.9[192632]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:03 compute-2 sudo[192630]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:03 compute-2 sudo[192683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:03 compute-2 sudo[192683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:03 compute-2 sudo[192683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:29:03.271 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:29:03.273 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:29:03.273 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:29:03 compute-2 sudo[192739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:29:03 compute-2 sudo[192739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:03 compute-2 sudo[192739]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:03 compute-2 sudo[192833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlfjonirmrgxuaoztjneiqwbivbaqvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401343.1567938-2934-268807687393723/AnsiballZ_copy.py'
Nov 29 07:29:03 compute-2 sudo[192833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:03 compute-2 python3.9[192835]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:03 compute-2 sudo[192833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:03 compute-2 ceph-mon[77138]: pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:29:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:29:04 compute-2 sudo[192985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhneopbglnpuusricykhvsmwahjiysva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401343.8579705-2934-108725799143269/AnsiballZ_copy.py'
Nov 29 07:29:04 compute-2 sudo[192985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:04 compute-2 python3.9[192987]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:04 compute-2 sudo[192985]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:04 compute-2 sudo[193137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqphvaddywqbffqlgzxplawlgexjdsdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401344.5194507-2934-262714840972111/AnsiballZ_copy.py'
Nov 29 07:29:04 compute-2 sudo[193137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:05 compute-2 python3.9[193139]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:05 compute-2 sudo[193137]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-2 sudo[193290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aipzsjcxujvtmrpkonjwqlgafmdmoxha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401345.400967-3042-252930002259181/AnsiballZ_copy.py'
Nov 29 07:29:05 compute-2 sudo[193290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:05 compute-2 python3.9[193292]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:05 compute-2 sudo[193290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:05 compute-2 auditd[705]: Audit daemon rotating log files
Nov 29 07:29:06 compute-2 ceph-mon[77138]: pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:06 compute-2 sudo[193442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkzireampdioqftjmgrhqrapkcweuln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401346.0566263-3042-123147603217865/AnsiballZ_copy.py'
Nov 29 07:29:06 compute-2 sudo[193442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:06 compute-2 python3.9[193444]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:06 compute-2 sudo[193442]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:07 compute-2 sudo[193595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcafffjemulvizrmjkgrtlrduhiyamhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401346.8064187-3042-66735553373328/AnsiballZ_copy.py'
Nov 29 07:29:07 compute-2 sudo[193595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:07 compute-2 python3.9[193597]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:07 compute-2 sudo[193595]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:07 compute-2 ceph-mon[77138]: pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:07 compute-2 podman[193651]: 2025-11-29 07:29:07.733161499 +0000 UTC m=+0.103297347 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:07 compute-2 sudo[193766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnnbvaaruipkoaxonyjwhwunqgsuwinv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401347.555117-3042-244371822637024/AnsiballZ_copy.py'
Nov 29 07:29:07 compute-2 sudo[193766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:08 compute-2 python3.9[193768]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:08 compute-2 sudo[193766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:08 compute-2 sudo[193918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywrlturgkpgezfkckfnlpdwkeziuczx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401348.2964866-3042-275629352265355/AnsiballZ_copy.py'
Nov 29 07:29:08 compute-2 sudo[193918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:08 compute-2 python3.9[193920]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:08 compute-2 sudo[193918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:09 compute-2 sudo[194071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxyirhuysdxpjxqivdfwtyeiknfymwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401349.1248899-3150-174808242078533/AnsiballZ_systemd.py'
Nov 29 07:29:09 compute-2 sudo[194071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:09 compute-2 python3.9[194073]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:29:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:09 compute-2 systemd[1]: Reloading.
Nov 29 07:29:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:09.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:09 compute-2 systemd-rc-local-generator[194096]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:09 compute-2 systemd-sysv-generator[194104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:10 compute-2 ceph-mon[77138]: pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:10 compute-2 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 07:29:10 compute-2 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 07:29:10 compute-2 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 07:29:10 compute-2 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 07:29:10 compute-2 systemd[1]: Starting libvirt logging daemon...
Nov 29 07:29:10 compute-2 systemd[1]: Started libvirt logging daemon.
Nov 29 07:29:10 compute-2 sudo[194071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:10 compute-2 sudo[194264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohpwkloegyfigyfinhcwxeagvxcdzrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401350.5827236-3150-238721623549099/AnsiballZ_systemd.py'
Nov 29 07:29:10 compute-2 sudo[194264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:11 compute-2 python3.9[194266]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:29:11 compute-2 systemd[1]: Reloading.
Nov 29 07:29:11 compute-2 systemd-rc-local-generator[194298]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:11 compute-2 systemd-sysv-generator[194303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:11 compute-2 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 07:29:11 compute-2 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 07:29:11 compute-2 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 07:29:11 compute-2 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 07:29:11 compute-2 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 07:29:11 compute-2 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 07:29:11 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:29:11 compute-2 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:29:11 compute-2 sudo[194264]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:11.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:12 compute-2 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 07:29:12 compute-2 sudo[194484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjctgvkjrflvzbndmihdbbxorrokoigc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401351.9896054-3150-106179440852277/AnsiballZ_systemd.py'
Nov 29 07:29:12 compute-2 sudo[194484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:12 compute-2 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 07:29:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:12 compute-2 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 07:29:12 compute-2 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 07:29:12 compute-2 python3.9[194486]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:29:12 compute-2 systemd[1]: Reloading.
Nov 29 07:29:12 compute-2 systemd-sysv-generator[194525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:12 compute-2 systemd-rc-local-generator[194521]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:13 compute-2 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 07:29:13 compute-2 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 07:29:13 compute-2 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 07:29:13 compute-2 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 07:29:13 compute-2 systemd[1]: Starting libvirt proxy daemon...
Nov 29 07:29:13 compute-2 systemd[1]: Started libvirt proxy daemon.
Nov 29 07:29:13 compute-2 sudo[194484]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:13 compute-2 sudo[194706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozhrgdevnotfqaksmalhlkqlrfdbvcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401353.3753588-3150-182944732277962/AnsiballZ_systemd.py'
Nov 29 07:29:13 compute-2 sudo[194706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:13 compute-2 setroubleshoot[194410]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1cac00a4-6b15-40b6-a245-9f4e7db07316
Nov 29 07:29:13 compute-2 setroubleshoot[194410]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 29 07:29:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:13.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:14 compute-2 python3.9[194708]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:29:14 compute-2 systemd[1]: Reloading.
Nov 29 07:29:14 compute-2 systemd-rc-local-generator[194728]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:14 compute-2 systemd-sysv-generator[194731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:14 compute-2 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 07:29:14 compute-2 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 07:29:14 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 07:29:14 compute-2 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 07:29:14 compute-2 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 07:29:14 compute-2 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 07:29:14 compute-2 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 07:29:14 compute-2 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 07:29:14 compute-2 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 07:29:14 compute-2 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 07:29:14 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:29:14 compute-2 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:29:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:14.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:14 compute-2 sudo[194706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-2 ceph-mon[77138]: pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:15 compute-2 sudo[194921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cedolmxnzxmeeekgmuhkypqxvuifavef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401354.7402854-3150-67378242929251/AnsiballZ_systemd.py'
Nov 29 07:29:15 compute-2 sudo[194921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:15 compute-2 python3.9[194923]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:29:15 compute-2 systemd[1]: Reloading.
Nov 29 07:29:15 compute-2 systemd-sysv-generator[194952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:15 compute-2 systemd-rc-local-generator[194949]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:15 compute-2 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 07:29:15 compute-2 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 07:29:15 compute-2 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 07:29:15 compute-2 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 07:29:15 compute-2 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 07:29:15 compute-2 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 07:29:15 compute-2 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:29:16 compute-2 systemd[1]: Started libvirt secret daemon.
Nov 29 07:29:16 compute-2 sudo[194921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:16.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:17 compute-2 sudo[195133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjuklnhtzgkwbzltmnvhihdumlcrbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401357.3356237-3261-85122422162552/AnsiballZ_file.py'
Nov 29 07:29:17 compute-2 sudo[195133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:17 compute-2 ceph-mon[77138]: pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:17 compute-2 ceph-mon[77138]: pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:17 compute-2 python3.9[195135]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:17 compute-2 sudo[195133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:18.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:18 compute-2 sudo[195285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrwrlrewrsbgjkanrlonywtlmrkmilsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401358.1878161-3286-7765172356759/AnsiballZ_find.py'
Nov 29 07:29:18 compute-2 sudo[195285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:18 compute-2 python3.9[195287]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:29:18 compute-2 sudo[195285]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:19 compute-2 sudo[195438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqhevrtnqcqmpcswewrqxxvbjwjswyyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401359.087465-3309-16056055377732/AnsiballZ_command.py'
Nov 29 07:29:19 compute-2 sudo[195438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:19 compute-2 python3.9[195440]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:19 compute-2 sudo[195438]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:19.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:20 compute-2 ceph-mon[77138]: pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:20 compute-2 python3.9[195594]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:29:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:21.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:22 compute-2 sudo[195746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:22 compute-2 sudo[195746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:22 compute-2 sudo[195746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:22 compute-2 python3.9[195745]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:22 compute-2 sudo[195771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:22 compute-2 sudo[195771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:22 compute-2 sudo[195771]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:22 compute-2 python3.9[195916]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401361.4210808-3367-255024369240926/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3de32f8e861874afb18756e58a543ac33a4e4294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:23 compute-2 ceph-mon[77138]: pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:23 compute-2 sudo[196067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yykqgybmquckjfkzgwhrmssqccfikgjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401363.012959-3411-30824677975699/AnsiballZ_command.py'
Nov 29 07:29:23 compute-2 sudo[196067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:23 compute-2 python3.9[196069]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 38a37ed2-442a-5e0d-a69a-881fdd186450
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:23 compute-2 polkitd[43483]: Registered Authentication Agent for unix-process:196071:433029 (system bus name :1.1940 [pkttyagent --process 196071 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:29:23 compute-2 polkitd[43483]: Unregistered Authentication Agent for unix-process:196071:433029 (system bus name :1.1940, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:29:23 compute-2 polkitd[43483]: Registered Authentication Agent for unix-process:196070:433028 (system bus name :1.1941 [pkttyagent --process 196070 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:29:23 compute-2 polkitd[43483]: Unregistered Authentication Agent for unix-process:196070:433028 (system bus name :1.1941, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:29:23 compute-2 sudo[196067]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:23 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 07:29:23 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.120s CPU time.
Nov 29 07:29:23 compute-2 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 07:29:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:24 compute-2 ceph-mon[77138]: pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:24 compute-2 ceph-mon[77138]: pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:25 compute-2 podman[196205]: 2025-11-29 07:29:25.147226097 +0000 UTC m=+0.214853114 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:29:25 compute-2 python3.9[196242]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:25 compute-2 sudo[196411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eavhhwxvqeehsmhxftarigwioyornukl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401365.5413675-3459-171261930181464/AnsiballZ_command.py'
Nov 29 07:29:25 compute-2 sudo[196411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:26 compute-2 sudo[196411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:26 compute-2 sudo[196564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-figbxsiecmaetndtcjitvmzahokuwofl ; FSID=38a37ed2-442a-5e0d-a69a-881fdd186450 KEY=AQC1myppAAAAABAAOniUNx/sXGx0vIXKyfUNbA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401366.4640114-3483-160675526369045/AnsiballZ_command.py'
Nov 29 07:29:26 compute-2 sudo[196564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:27 compute-2 polkitd[43483]: Registered Authentication Agent for unix-process:196568:433375 (system bus name :1.1944 [pkttyagent --process 196568 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:29:27 compute-2 polkitd[43483]: Unregistered Authentication Agent for unix-process:196568:433375 (system bus name :1.1944, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:29:27 compute-2 sudo[196564]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:27.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:29 compute-2 sudo[196724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nznblxufsdpvhznroibrhbrycnwajoyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401368.8008244-3507-146059247791478/AnsiballZ_copy.py'
Nov 29 07:29:29 compute-2 sudo[196724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:29 compute-2 python3.9[196726]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:29 compute-2 sudo[196724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:29 compute-2 ceph-mon[77138]: pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:29.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:30 compute-2 sudo[196876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqkuygqliarhouxyruuexvvfduksmxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401369.760731-3532-125516514652968/AnsiballZ_stat.py'
Nov 29 07:29:30 compute-2 sudo[196876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:30 compute-2 python3.9[196878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:30 compute-2 sudo[196876]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:30 compute-2 sudo[196999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtlkqhzwifiglxemhtcimegezxhrkrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401369.760731-3532-125516514652968/AnsiballZ_copy.py'
Nov 29 07:29:30 compute-2 sudo[196999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:31 compute-2 python3.9[197001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401369.760731-3532-125516514652968/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:31 compute-2 sudo[196999]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:31 compute-2 ceph-mon[77138]: pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:31 compute-2 ceph-mon[77138]: pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:32 compute-2 sudo[197152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdfmgshlhkgqmfvskujfofnlqjwsipru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401371.672179-3580-81072130919417/AnsiballZ_file.py'
Nov 29 07:29:32 compute-2 sudo[197152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:32 compute-2 ceph-mon[77138]: pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:32 compute-2 python3.9[197154]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:32 compute-2 sudo[197152]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:33 compute-2 sudo[197305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvbrbmjptwgbdxqeosiuoqrlfkuwcne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401372.997448-3603-205533888916323/AnsiballZ_stat.py'
Nov 29 07:29:33 compute-2 sudo[197305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:33 compute-2 python3.9[197307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:33 compute-2 sudo[197305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:33 compute-2 sudo[197383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlgvqowlaoyczxsanztpxwbqajdupuwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401372.997448-3603-205533888916323/AnsiballZ_file.py'
Nov 29 07:29:33 compute-2 sudo[197383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:34 compute-2 python3.9[197385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:34 compute-2 sudo[197383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:34 compute-2 sudo[197535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwtzoleprmcsgilsrkkcwbxydvhedaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401374.45607-3639-264293186256680/AnsiballZ_stat.py'
Nov 29 07:29:34 compute-2 sudo[197535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:35 compute-2 python3.9[197537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:35 compute-2 sudo[197535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.208721) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375208975, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1530, "num_deletes": 503, "total_data_size": 3035607, "memory_usage": 3072472, "flush_reason": "Manual Compaction"}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375227701, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1183243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14248, "largest_seqno": 15773, "table_properties": {"data_size": 1178410, "index_size": 1781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15117, "raw_average_key_size": 19, "raw_value_size": 1166021, "raw_average_value_size": 1479, "num_data_blocks": 81, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401235, "oldest_key_time": 1764401235, "file_creation_time": 1764401375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 19245 microseconds, and 11397 cpu microseconds.
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.227928) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1183243 bytes OK
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.228073) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.230352) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.230408) EVENT_LOG_v1 {"time_micros": 1764401375230368, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.230432) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3027670, prev total WAL file size 3027670, number of live WAL files 2.
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.232778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1155KB)], [27(10MB)]
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375233007, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12689901, "oldest_snapshot_seqno": -1}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4157 keys, 7563562 bytes, temperature: kUnknown
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375326171, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7563562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7534730, "index_size": 17330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 104078, "raw_average_key_size": 25, "raw_value_size": 7458324, "raw_average_value_size": 1794, "num_data_blocks": 724, "num_entries": 4157, "num_filter_entries": 4157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764401375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.326759) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7563562 bytes
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.328375) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.8 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(17.1) write-amplify(6.4) OK, records in: 5125, records dropped: 968 output_compression: NoCompression
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.328398) EVENT_LOG_v1 {"time_micros": 1764401375328385, "job": 14, "event": "compaction_finished", "compaction_time_micros": 93478, "compaction_time_cpu_micros": 47903, "output_level": 6, "num_output_files": 1, "total_output_size": 7563562, "num_input_records": 5125, "num_output_records": 4157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375328784, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375331458, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.232606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.331493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.331497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.331499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.331501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:29:35.331503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:35 compute-2 sudo[197614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wepbptvywqoauwworfplvbddollahdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401374.45607-3639-264293186256680/AnsiballZ_file.py'
Nov 29 07:29:35 compute-2 sudo[197614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:35 compute-2 python3.9[197616]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.37174450 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:35 compute-2 sudo[197614]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:36 compute-2 sudo[197766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkeiffgpiodqecefxtolkitdxnuqpxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401376.0189533-3676-174087757385561/AnsiballZ_stat.py'
Nov 29 07:29:36 compute-2 sudo[197766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:29:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:29:36 compute-2 python3.9[197768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:36 compute-2 sudo[197766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:36 compute-2 sudo[197844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofatgkuhjhaqoscuaxqliwldatayqcss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401376.0189533-3676-174087757385561/AnsiballZ_file.py'
Nov 29 07:29:36 compute-2 sudo[197844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:37 compute-2 python3.9[197846]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:37 compute-2 sudo[197844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:38 compute-2 podman[197872]: 2025-11-29 07:29:38.686511469 +0000 UTC m=+0.078067086 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 07:29:39 compute-2 sudo[198018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsnmsfxfzovjuppnqhdreysxclrerqvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401379.370601-3715-9317275081482/AnsiballZ_command.py'
Nov 29 07:29:39 compute-2 sudo[198018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:40 compute-2 python3.9[198020]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:40 compute-2 sudo[198018]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:40 compute-2 ceph-mon[77138]: pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:40 compute-2 sudo[198171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmizdamoktlacsdbmxhnqfcdtcilkyuj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401380.392797-3739-88224820908559/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:29:40 compute-2 sudo[198171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:41 compute-2 python3[198173]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:29:41 compute-2 sudo[198171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:41 compute-2 sudo[198324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-resfqwxqjulmdmqekqjvxcagseofguio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401381.5357723-3763-87306881692670/AnsiballZ_stat.py'
Nov 29 07:29:41 compute-2 sudo[198324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:42 compute-2 python3.9[198326]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:42 compute-2 sudo[198324]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:42 compute-2 sudo[198328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:42 compute-2 sudo[198328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:42 compute-2 sudo[198328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:42 compute-2 sudo[198355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:42 compute-2 sudo[198355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:42 compute-2 sudo[198355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:42 compute-2 sudo[198452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vclecgrrfvudesskwrouihkroxuaqito ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401381.5357723-3763-87306881692670/AnsiballZ_file.py'
Nov 29 07:29:42 compute-2 sudo[198452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:42 compute-2 python3.9[198454]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:42 compute-2 sudo[198452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:43 compute-2 ceph-mon[77138]: pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:43 compute-2 ceph-mon[77138]: pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:43 compute-2 ceph-mon[77138]: pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:43 compute-2 ceph-mon[77138]: pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:45 compute-2 ceph-mon[77138]: pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:46 compute-2 sudo[198606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhshlatgamtdgopsraqfbbmjcdywjlxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401383.2452989-3800-46890197872825/AnsiballZ_stat.py'
Nov 29 07:29:46 compute-2 sudo[198606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:46 compute-2 python3.9[198608]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:47 compute-2 sudo[198606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:47 compute-2 ceph-mon[77138]: pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:47 compute-2 sudo[198685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfirnyivmykvgrurwakwnrroiqztoakz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401383.2452989-3800-46890197872825/AnsiballZ_file.py'
Nov 29 07:29:47 compute-2 sudo[198685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:47 compute-2 python3.9[198687]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:47 compute-2 sudo[198685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:48.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:48 compute-2 ceph-mon[77138]: pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:48.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:48 compute-2 sudo[198837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdsaiywnqveauxxxwdxevcephwajzqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401387.9353523-3834-41768120421398/AnsiballZ_stat.py'
Nov 29 07:29:48 compute-2 sudo[198837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:48 compute-2 python3.9[198839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:48 compute-2 sudo[198837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-2 sudo[198916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lajwhibjfttlftkmrmtangbsyxjrvusr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401387.9353523-3834-41768120421398/AnsiballZ_file.py'
Nov 29 07:29:49 compute-2 sudo[198916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:49 compute-2 python3.9[198918]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:49 compute-2 sudo[198916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-2 ceph-mon[77138]: pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:49 compute-2 sshd-session[198943]: Invalid user solana from 45.148.10.240 port 59622
Nov 29 07:29:50 compute-2 sshd-session[198943]: Connection closed by invalid user solana 45.148.10.240 port 59622 [preauth]
Nov 29 07:29:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:50 compute-2 sudo[199070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojiuejavojkjigkzptxjltczsnaixyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401389.65072-3871-190523749200315/AnsiballZ_stat.py'
Nov 29 07:29:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:50 compute-2 sudo[199070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:50 compute-2 python3.9[199072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:50 compute-2 sudo[199070]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:50 compute-2 sudo[199148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrbqvoxcpfwwifkebtmsnfabggiaekue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401389.65072-3871-190523749200315/AnsiballZ_file.py'
Nov 29 07:29:50 compute-2 sudo[199148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:51 compute-2 python3.9[199150]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:51 compute-2 sudo[199148]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:51 compute-2 sudo[199301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nabmuqrqxzsyilwnrdqbftiawmcunwdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401391.3221862-3907-261010900667769/AnsiballZ_stat.py'
Nov 29 07:29:51 compute-2 sudo[199301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:52.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:52 compute-2 python3.9[199303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:29:52 compute-2 ceph-mon[77138]: pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:52 compute-2 sudo[199301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:52 compute-2 sudo[199426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufqaobsqtouvcmeocmnvvgjnbcrzkoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401391.3221862-3907-261010900667769/AnsiballZ_copy.py'
Nov 29 07:29:52 compute-2 sudo[199426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:52 compute-2 python3.9[199428]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401391.3221862-3907-261010900667769/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:52 compute-2 sudo[199426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:53 compute-2 sudo[199579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarqueustjalxcwzecgnrswrtfemwfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401393.0543292-3952-83752721347065/AnsiballZ_file.py'
Nov 29 07:29:53 compute-2 sudo[199579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:53 compute-2 python3.9[199581]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:53 compute-2 sudo[199579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:54 compute-2 sudo[199732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-downzlzxqcgsnbshrqguliyybcewxzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401394.2148683-3976-180686201685559/AnsiballZ_command.py'
Nov 29 07:29:54 compute-2 sudo[199732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:54 compute-2 python3.9[199734]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:54 compute-2 sudo[199732]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:29:55 compute-2 ceph-mon[77138]: pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:55 compute-2 podman[199763]: 2025-11-29 07:29:55.782742018 +0000 UTC m=+0.184938296 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:29:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:56 compute-2 sudo[199915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqoigmlkpwqjultdxhtyyfgakomxyrhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401395.7858064-4000-48274687356680/AnsiballZ_blockinfile.py'
Nov 29 07:29:56 compute-2 sudo[199915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:56 compute-2 python3.9[199917]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:56 compute-2 sudo[199915]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:29:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:29:57 compute-2 ceph-mon[77138]: pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:57 compute-2 sudo[200068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teruyfwkxngbojuclphcukhdzlwdhita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401396.9879084-4027-15417455287047/AnsiballZ_command.py'
Nov 29 07:29:57 compute-2 sudo[200068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:57 compute-2 python3.9[200070]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:57 compute-2 sudo[200068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:58.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:58 compute-2 sudo[200221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhdunyqqqazlaqufvfksyvrhahucwbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401397.8580956-4051-75725779497988/AnsiballZ_stat.py'
Nov 29 07:29:58 compute-2 sudo[200221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:58 compute-2 ceph-mon[77138]: pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:29:58 compute-2 python3.9[200223]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:29:58 compute-2 sudo[200221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:29:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:29:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:29:59 compute-2 ceph-mon[77138]: pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:00.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:00 compute-2 sudo[200376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgaunwpxqohyckuxhyvyfycvulvtxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401399.67736-4074-82539877224503/AnsiballZ_command.py'
Nov 29 07:30:00 compute-2 sudo[200376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:00 compute-2 python3.9[200378]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:00 compute-2 sudo[200376]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:01 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:30:01 compute-2 sudo[200532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkarlgnhtjzmcxkscpqtqcqnsclxyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401401.0827434-4100-203190434737806/AnsiballZ_file.py'
Nov 29 07:30:01 compute-2 sudo[200532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:01 compute-2 python3.9[200534]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:01 compute-2 sudo[200532]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:02.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:02 compute-2 sudo[200685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlrsuccjnhqtgshtqlzmryylobdbfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401402.032587-4123-26260380265014/AnsiballZ_stat.py'
Nov 29 07:30:02 compute-2 sudo[200685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:02 compute-2 sudo[200684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:02 compute-2 sudo[200684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:02 compute-2 sudo[200684]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:02 compute-2 sudo[200712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:02 compute-2 sudo[200712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:02 compute-2 sudo[200712]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:02.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:02 compute-2 python3.9[200698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:02 compute-2 sudo[200685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:02 compute-2 ceph-mon[77138]: pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:03 compute-2 sudo[200858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kptmdwgugaohjcwiifhfxnqamtfxsngj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401402.032587-4123-26260380265014/AnsiballZ_copy.py'
Nov 29 07:30:03 compute-2 sudo[200858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:30:03.272 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:30:03.273 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:30:03.273 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:30:03 compute-2 python3.9[200860]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401402.032587-4123-26260380265014/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:03 compute-2 sudo[200858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-2 sudo[200861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:03 compute-2 sudo[200861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-2 sudo[200861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-2 sudo[200910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:03 compute-2 sudo[200910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-2 sudo[200910]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-2 sudo[200935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:03 compute-2 sudo[200935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-2 sudo[200935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-2 sudo[200967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:30:03 compute-2 sudo[200967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:04.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:04 compute-2 sudo[201125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmbevsnahrlcdfwyxysihpvryzualog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401403.641441-4169-133878891139098/AnsiballZ_stat.py'
Nov 29 07:30:04 compute-2 sudo[201125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:04 compute-2 ceph-mon[77138]: pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:04 compute-2 sudo[200967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-2 python3.9[201129]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:04 compute-2 sudo[201125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:04 compute-2 sudo[201264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufaawjsixtsliwwxncgvrwltyzzsvrtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401403.641441-4169-133878891139098/AnsiballZ_copy.py'
Nov 29 07:30:04 compute-2 sudo[201264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:04 compute-2 python3.9[201266]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401403.641441-4169-133878891139098/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:04 compute-2 sudo[201264]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:05 compute-2 sudo[201417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchnzgiuiwzlyyxgvtdlvdtvcmfpqajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401405.179505-4213-1847512814824/AnsiballZ_stat.py'
Nov 29 07:30:05 compute-2 sudo[201417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:05 compute-2 python3.9[201419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:05 compute-2 sudo[201417]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:06 compute-2 sudo[201540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppirldqynledrstlqzcpfrkdhgqjften ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401405.179505-4213-1847512814824/AnsiballZ_copy.py'
Nov 29 07:30:06 compute-2 sudo[201540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:06 compute-2 python3.9[201542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401405.179505-4213-1847512814824/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:06 compute-2 sudo[201540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:06 compute-2 ceph-mon[77138]: pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:06.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:07 compute-2 sudo[201693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyijbtbhxcqgmlqnptbtefrhjqozknv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401406.7310414-4258-118802686496586/AnsiballZ_systemd.py'
Nov 29 07:30:07 compute-2 sudo[201693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:07 compute-2 python3.9[201695]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:30:07 compute-2 systemd[1]: Reloading.
Nov 29 07:30:07 compute-2 systemd-rc-local-generator[201723]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:07 compute-2 systemd-sysv-generator[201726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:07 compute-2 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 07:30:07 compute-2 sudo[201693]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:08 compute-2 ceph-mon[77138]: pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:08 compute-2 sudo[201884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjdevafsllcqojcitmqgdqfqyjggpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401408.259649-4282-232822813485762/AnsiballZ_systemd.py'
Nov 29 07:30:08 compute-2 sudo[201884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:08 compute-2 podman[201886]: 2025-11-29 07:30:08.852968918 +0000 UTC m=+0.075860740 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:30:09 compute-2 python3.9[201887]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:30:09 compute-2 systemd[1]: Reloading.
Nov 29 07:30:09 compute-2 systemd-rc-local-generator[201928]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:09 compute-2 systemd-sysv-generator[201935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:09 compute-2 systemd[1]: Reloading.
Nov 29 07:30:09 compute-2 systemd-sysv-generator[201973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:09 compute-2 systemd-rc-local-generator[201967]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:09 compute-2 sudo[201884]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:10.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:10 compute-2 ceph-mon[77138]: pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:11 compute-2 sshd-session[143928]: Connection closed by 192.168.122.30 port 37982
Nov 29 07:30:11 compute-2 sshd-session[143925]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:30:11 compute-2 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 07:30:11 compute-2 systemd[1]: session-48.scope: Consumed 3min 58.470s CPU time.
Nov 29 07:30:11 compute-2 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Nov 29 07:30:11 compute-2 systemd-logind[787]: Removed session 48.
Nov 29 07:30:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:12 compute-2 ceph-mon[77138]: pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:13 compute-2 ceph-mon[77138]: pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:14.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:14 compute-2 sudo[202005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:14 compute-2 sudo[202005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:14 compute-2 sudo[202005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:15 compute-2 sudo[202030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:30:15 compute-2 sudo[202030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:15 compute-2 sudo[202030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:30:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:30:15 compute-2 ceph-mon[77138]: pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:16.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:16 compute-2 sshd-session[202056]: Accepted publickey for zuul from 192.168.122.30 port 37640 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:30:16 compute-2 systemd-logind[787]: New session 49 of user zuul.
Nov 29 07:30:16 compute-2 systemd[1]: Started Session 49 of User zuul.
Nov 29 07:30:16 compute-2 sshd-session[202056]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:30:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:17 compute-2 python3.9[202210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:30:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:18.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:18 compute-2 ceph-mon[77138]: pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:19 compute-2 python3.9[202365]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:30:19 compute-2 network[202382]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:30:19 compute-2 network[202383]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:30:19 compute-2 network[202384]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:30:19 compute-2 ceph-mon[77138]: pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:21 compute-2 ceph-mon[77138]: pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:22 compute-2 sudo[202479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:22 compute-2 sudo[202479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:22.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:22 compute-2 sudo[202479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:22 compute-2 sudo[202509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:22 compute-2 sudo[202509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:22 compute-2 sudo[202509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:24.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:24 compute-2 ceph-mon[77138]: pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:25 compute-2 ceph-mon[77138]: pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:26.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:26 compute-2 sudo[202718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmpjahbpchakflysnaxethexshhibvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401426.0899851-108-54639300058192/AnsiballZ_setup.py'
Nov 29 07:30:26 compute-2 sudo[202718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:26 compute-2 podman[202681]: 2025-11-29 07:30:26.528850612 +0000 UTC m=+0.133160065 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:30:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:26 compute-2 python3.9[202724]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:30:27 compute-2 sudo[202718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-2 sudo[202817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfoubtbcacfwnvqmvlkhemidcbkidcpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401426.0899851-108-54639300058192/AnsiballZ_dnf.py'
Nov 29 07:30:27 compute-2 sudo[202817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:27 compute-2 python3.9[202819]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:30:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:28.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:28 compute-2 ceph-mon[77138]: pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:29 compute-2 ceph-mon[77138]: pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:30.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:32.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:32 compute-2 ceph-mon[77138]: pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:30:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:32.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:30:33 compute-2 ceph-mon[77138]: pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:33 compute-2 sudo[202817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:34.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:34 compute-2 sudo[202973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzzinrdthyywggwpdwctcoxwfzavmsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401433.986642-145-142340469890375/AnsiballZ_stat.py'
Nov 29 07:30:34 compute-2 sudo[202973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:34 compute-2 python3.9[202975]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:30:34 compute-2 sudo[202973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:35 compute-2 sudo[203126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkgvstarkdbnvxhgxvrhyzmbdyilymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401435.2415116-176-245286734111782/AnsiballZ_command.py'
Nov 29 07:30:35 compute-2 sudo[203126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:36 compute-2 ceph-mon[77138]: pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:36.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:36 compute-2 python3.9[203128]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:36 compute-2 sudo[203126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:36.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:36 compute-2 sudo[203279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhxgqroqunngyfcrkwgqduckdhsiphl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401436.5382867-205-240264790981113/AnsiballZ_stat.py'
Nov 29 07:30:36 compute-2 sudo[203279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:37 compute-2 python3.9[203281]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:30:37 compute-2 sudo[203279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:37 compute-2 sudo[203432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbannohrbmbxbmxhimsiginafabtlmji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401437.4141698-229-658050288846/AnsiballZ_command.py'
Nov 29 07:30:37 compute-2 sudo[203432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:38 compute-2 python3.9[203434]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:38 compute-2 sudo[203432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:38.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:38 compute-2 sudo[203585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhjhforamfppxuvxpkghkbohkwentjmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401438.374681-253-132460314812172/AnsiballZ_stat.py'
Nov 29 07:30:38 compute-2 sudo[203585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:38 compute-2 python3.9[203587]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:38 compute-2 sudo[203585]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:39 compute-2 ceph-mon[77138]: pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:39 compute-2 podman[203665]: 2025-11-29 07:30:39.692043271 +0000 UTC m=+0.077824022 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:30:39 compute-2 sudo[203728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgyvwcyugzgonlzuioqxsibergntxubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401438.374681-253-132460314812172/AnsiballZ_copy.py'
Nov 29 07:30:39 compute-2 sudo[203728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:39 compute-2 python3.9[203730]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401438.374681-253-132460314812172/.source.iscsi _original_basename=.c8iznajd follow=False checksum=97bb9f42ab221174b873b941c56b2dec3237cd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:39 compute-2 sudo[203728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:40.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:40 compute-2 sudo[203880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whsflhfdgumghqkmkxodvwdmkdowgpoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401440.189503-297-172941399943422/AnsiballZ_file.py'
Nov 29 07:30:40 compute-2 sudo[203880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:40 compute-2 python3.9[203882]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:40 compute-2 sudo[203880]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:41 compute-2 sudo[204033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgdzchtxhrqmfbkuluawkciracugvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401441.2065334-321-154401620542605/AnsiballZ_lineinfile.py'
Nov 29 07:30:41 compute-2 sudo[204033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:41 compute-2 python3.9[204035]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:41 compute-2 sudo[204033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:42.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:30:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:30:42 compute-2 sudo[204112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:42 compute-2 sudo[204112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:42 compute-2 sudo[204112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:42 compute-2 sudo[204137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:42 compute-2 sudo[204137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:42 compute-2 sudo[204137]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:43 compute-2 sudo[204236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyjpllrashlbxbabzadfgquggqmpuxtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401442.4634764-348-71781890624840/AnsiballZ_systemd_service.py'
Nov 29 07:30:43 compute-2 sudo[204236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:43 compute-2 python3.9[204238]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:30:43 compute-2 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 07:30:43 compute-2 sudo[204236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:30:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:30:44 compute-2 ceph-mon[77138]: pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:30:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:30:44 compute-2 sudo[204392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refdacqaiqsvocfzoncyikwjbwecaeef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401444.2683344-373-190321948411499/AnsiballZ_systemd_service.py'
Nov 29 07:30:44 compute-2 sudo[204392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:45 compute-2 python3.9[204394]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:30:45 compute-2 systemd[1]: Reloading.
Nov 29 07:30:45 compute-2 systemd-rc-local-generator[204429]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:45 compute-2 systemd-sysv-generator[204432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:45 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:30:45 compute-2 systemd[1]: Starting Open-iSCSI...
Nov 29 07:30:45 compute-2 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 07:30:45 compute-2 systemd[1]: Started Open-iSCSI.
Nov 29 07:30:45 compute-2 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 07:30:45 compute-2 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 07:30:45 compute-2 sudo[204392]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:46.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:46 compute-2 ceph-mon[77138]: pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:46 compute-2 ceph-mon[77138]: pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:46.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:47 compute-2 sudo[204595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcjvcwypziclxytxevktcdiutmcmzhyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401447.2212162-406-22356322846830/AnsiballZ_service_facts.py'
Nov 29 07:30:47 compute-2 sudo[204595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:47 compute-2 ceph-mon[77138]: pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:47 compute-2 python3.9[204597]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:30:47 compute-2 network[204614]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:30:47 compute-2 network[204615]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:30:47 compute-2 network[204616]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:30:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:49 compute-2 ceph-mon[77138]: pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:51 compute-2 ceph-mon[77138]: pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:52 compute-2 ceph-mon[77138]: pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:52 compute-2 sudo[204595]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:53 compute-2 ceph-mon[77138]: pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:53 compute-2 sudo[204889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldnedqbepilwcrwjgjslrcvcngfejowr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401453.48314-436-182343978839670/AnsiballZ_file.py'
Nov 29 07:30:53 compute-2 sudo[204889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:54 compute-2 python3.9[204891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:30:54 compute-2 sudo[204889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:30:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:30:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:54.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:54 compute-2 sudo[205041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztuonuzzibpfavncuspibcqqswnepst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401454.3293967-459-36428479866706/AnsiballZ_modprobe.py'
Nov 29 07:30:54 compute-2 sudo[205041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:55 compute-2 python3.9[205043]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 07:30:55 compute-2 sudo[205041]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:30:55 compute-2 sudo[205198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibxasvcoobupnqgpfcrvkfcjmluvrgwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401455.3226225-483-9045586346903/AnsiballZ_stat.py'
Nov 29 07:30:55 compute-2 sudo[205198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:56.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:56 compute-2 python3.9[205200]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:56 compute-2 sudo[205198]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:30:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:30:56 compute-2 sudo[205332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqnwhvzeuvvuvtyzbmcqqbfjwbusqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401455.3226225-483-9045586346903/AnsiballZ_copy.py'
Nov 29 07:30:56 compute-2 sudo[205332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:56 compute-2 podman[205295]: 2025-11-29 07:30:56.73809553 +0000 UTC m=+0.102982304 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:30:56 compute-2 python3.9[205342]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401455.3226225-483-9045586346903/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:56 compute-2 sudo[205332]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:57 compute-2 ceph-mon[77138]: pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:30:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:58.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:30:58 compute-2 sudo[205503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sygsrlmdoincdgakjeuexmqfjroxtqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401457.817372-531-53252361673494/AnsiballZ_lineinfile.py'
Nov 29 07:30:58 compute-2 sudo[205503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:58 compute-2 python3.9[205505]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:58 compute-2 sudo[205503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:30:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:30:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:30:59 compute-2 sudo[205656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ychsjceethgnuddeghmkkhplcpzgdvht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401458.6436365-556-65828410115676/AnsiballZ_systemd.py'
Nov 29 07:30:59 compute-2 sudo[205656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:59 compute-2 ceph-mon[77138]: pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:30:59 compute-2 python3.9[205658]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:30:59 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:30:59 compute-2 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:30:59 compute-2 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:30:59 compute-2 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:30:59 compute-2 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:30:59 compute-2 sudo[205656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:00.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:00 compute-2 sudo[205812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuglfudxmzdkpohvulpnsfkuduqjyels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401460.0651612-580-66448175521959/AnsiballZ_file.py'
Nov 29 07:31:00 compute-2 sudo[205812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:00 compute-2 ceph-mon[77138]: pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:00 compute-2 python3.9[205814]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:00 compute-2 sudo[205812]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:01 compute-2 sudo[205965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxpbqawopvwexrcbzonldqkrzaesqolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401461.1113708-606-188665307306152/AnsiballZ_stat.py'
Nov 29 07:31:01 compute-2 sudo[205965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:01 compute-2 python3.9[205967]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:31:01 compute-2 sudo[205965]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:01 compute-2 ceph-mon[77138]: pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:02.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:02 compute-2 sudo[206117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrylddntzmswpsoqzoyzrupbfnycpqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401461.9852674-635-194853255741211/AnsiballZ_stat.py'
Nov 29 07:31:02 compute-2 sudo[206117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:02 compute-2 python3.9[206119]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:31:02 compute-2 sudo[206117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:02.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:03 compute-2 sudo[206167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:03 compute-2 sudo[206167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:03 compute-2 sudo[206167]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:03 compute-2 sudo[206222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:03 compute-2 sudo[206222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:03 compute-2 sudo[206222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:31:03.273 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:31:03.275 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:31:03.275 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:31:03 compute-2 sudo[206320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvvrfzirtnntxbvhvwyirxttudcuwki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401462.9074097-658-54992472142180/AnsiballZ_stat.py'
Nov 29 07:31:03 compute-2 sudo[206320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:03 compute-2 python3.9[206322]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:03 compute-2 sudo[206320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:04.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:04 compute-2 sudo[206443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfalphbmxnrlsyuzziagjuxszjarkip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401462.9074097-658-54992472142180/AnsiballZ_copy.py'
Nov 29 07:31:04 compute-2 sudo[206443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:04 compute-2 python3.9[206445]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401462.9074097-658-54992472142180/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:04 compute-2 sudo[206443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:05 compute-2 ceph-mon[77138]: pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:05 compute-2 sudo[206595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npauxlrfottsgndmmwycyjnjwkkcjofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401464.6480289-702-2202258106603/AnsiballZ_command.py'
Nov 29 07:31:05 compute-2 sudo[206595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:05 compute-2 python3.9[206598]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:31:05 compute-2 sudo[206595]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:06 compute-2 sudo[206749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkxwowfirbgwcculklsmvitnbziejbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401465.630116-727-23836948703749/AnsiballZ_lineinfile.py'
Nov 29 07:31:06 compute-2 sudo[206749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:06.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:06 compute-2 python3.9[206751]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:06 compute-2 sudo[206749]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:06.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:07 compute-2 sudo[206902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwukjffwpytjuafsqdrurqvsygmhxzdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401466.593175-750-161710615627857/AnsiballZ_replace.py'
Nov 29 07:31:07 compute-2 sudo[206902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:07 compute-2 ceph-mon[77138]: pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:07 compute-2 python3.9[206904]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:07 compute-2 sudo[206902]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:07 compute-2 sudo[207054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinweiwujglfvflfocczrcaiukosbghi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401467.5668046-775-16866425174206/AnsiballZ_replace.py'
Nov 29 07:31:07 compute-2 sudo[207054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:08 compute-2 python3.9[207056]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:08 compute-2 sudo[207054]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:08 compute-2 ceph-mon[77138]: pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:08 compute-2 sudo[207206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtraedbcvixtbpzzruzzaptavhppkowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401468.5167046-802-150332692163750/AnsiballZ_lineinfile.py'
Nov 29 07:31:08 compute-2 sudo[207206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:09 compute-2 python3.9[207208]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:09 compute-2 sudo[207206]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:09 compute-2 sudo[207359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvikowzeoslbgdcqfmutlkfzxosqotg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401469.3447094-802-17394899323253/AnsiballZ_lineinfile.py'
Nov 29 07:31:09 compute-2 sudo[207359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:09 compute-2 podman[207361]: 2025-11-29 07:31:09.925077339 +0000 UTC m=+0.109963454 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:31:10 compute-2 python3.9[207362]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:10 compute-2 sudo[207359]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:10 compute-2 ceph-mon[77138]: pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:10 compute-2 sudo[207530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkbzzdkmwyoddznzlqkwstegibyqzrpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401470.2289605-802-212238217815735/AnsiballZ_lineinfile.py'
Nov 29 07:31:10 compute-2 sudo[207530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:10.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:10 compute-2 python3.9[207532]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:10 compute-2 sudo[207530]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:11 compute-2 sudo[207683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfifigbfuafcepawrskzrjyzgzpmfwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401471.003686-802-264449545825578/AnsiballZ_lineinfile.py'
Nov 29 07:31:11 compute-2 sudo[207683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:11 compute-2 python3.9[207685]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:11 compute-2 sudo[207683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:11 compute-2 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 07:31:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:12.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:13 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 07:31:13 compute-2 sudo[207838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkfxmkgbflfyytvshtosxlwyukfrtmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401473.0990462-889-232257971493164/AnsiballZ_stat.py'
Nov 29 07:31:13 compute-2 sudo[207838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:13 compute-2 python3.9[207840]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:31:13 compute-2 sudo[207838]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:14 compute-2 ceph-mon[77138]: pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:14 compute-2 sudo[207992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otnpflrfxlufddbilevqrklngaeeaeyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401474.0508895-913-45435045705430/AnsiballZ_file.py'
Nov 29 07:31:14 compute-2 sudo[207992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:14 compute-2 python3.9[207994]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:14 compute-2 sudo[207992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:15 compute-2 sudo[208020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:15 compute-2 sudo[208020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:15 compute-2 sudo[208020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:15 compute-2 sudo[208046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:15 compute-2 sudo[208046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:15 compute-2 sudo[208046]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:15 compute-2 sudo[208103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:15 compute-2 sudo[208103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:15 compute-2 sudo[208103]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:15 compute-2 sudo[208147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:31:15 compute-2 sudo[208147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:15 compute-2 sudo[208269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgnkbrtosiujgvvptilturgkipecpqga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401475.2643657-940-82678139231066/AnsiballZ_file.py'
Nov 29 07:31:15 compute-2 sudo[208269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:15 compute-2 python3.9[208273]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:15 compute-2 sudo[208269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:16 compute-2 podman[208317]: 2025-11-29 07:31:16.088141861 +0000 UTC m=+0.081276710 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:31:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:16 compute-2 podman[208317]: 2025-11-29 07:31:16.233058865 +0000 UTC m=+0.226193724 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:31:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:17 compute-2 podman[208546]: 2025-11-29 07:31:17.491275512 +0000 UTC m=+0.097560473 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:31:17 compute-2 sudo[208651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqlgwpkcjrpkvlqrshjzavldbezbkvuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401477.29684-964-114132344325255/AnsiballZ_stat.py'
Nov 29 07:31:17 compute-2 sudo[208651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:17 compute-2 ceph-mon[77138]: pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:17 compute-2 podman[208546]: 2025-11-29 07:31:17.786724417 +0000 UTC m=+0.393009278 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:31:18 compute-2 python3.9[208653]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:18 compute-2 sudo[208651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:18 compute-2 podman[208686]: 2025-11-29 07:31:18.183401751 +0000 UTC m=+0.079365231 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4)
Nov 29 07:31:18 compute-2 podman[208686]: 2025-11-29 07:31:18.1967117 +0000 UTC m=+0.092675170 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 29 07:31:18 compute-2 sudo[208147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:18 compute-2 sudo[208790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielazuvoyfcgwysjlbesirvlhxqpspmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401477.29684-964-114132344325255/AnsiballZ_file.py'
Nov 29 07:31:18 compute-2 sudo[208790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:18 compute-2 python3.9[208792]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:18 compute-2 sudo[208790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:19 compute-2 sudo[208943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikuidniybihikqgiehghynnlenqfjtvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401478.8774953-964-132895695906245/AnsiballZ_stat.py'
Nov 29 07:31:19 compute-2 sudo[208943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:19 compute-2 python3.9[208945]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:19 compute-2 sudo[208943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:19 compute-2 sudo[209021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uslqemjeirussvfgapuueavofiidwohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401478.8774953-964-132895695906245/AnsiballZ_file.py'
Nov 29 07:31:19 compute-2 sudo[209021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:20 compute-2 python3.9[209023]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:20 compute-2 sudo[209021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:20.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:22 compute-2 sudo[209174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifdonxmxmjjkyawgwnbzosuwctkgtky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401481.6238046-1034-196985964400095/AnsiballZ_file.py'
Nov 29 07:31:22 compute-2 sudo[209174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:22.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:22 compute-2 python3.9[209176]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:22 compute-2 sudo[209174]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:23 compute-2 sudo[209326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndvhvevfgpmwzndmwbbpkbozjzpbrsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401482.59424-1057-127701884710529/AnsiballZ_stat.py'
Nov 29 07:31:23 compute-2 sudo[209326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:23 compute-2 ceph-mon[77138]: pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:23 compute-2 ceph-mon[77138]: pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:23 compute-2 sudo[209330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-2 sudo[209330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-2 sudo[209330]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-2 python3.9[209329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:23 compute-2 sudo[209326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-2 sudo[209355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-2 sudo[209355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-2 sudo[209355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-2 sudo[209455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cafajyjnatamicofoxirgmooujgtmfuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401482.59424-1057-127701884710529/AnsiballZ_file.py'
Nov 29 07:31:23 compute-2 sudo[209455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:23 compute-2 python3.9[209457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:23 compute-2 sudo[209455]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:24 compute-2 sudo[209607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmruwxddqfrellwgbqlaxnfhuytlkcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401484.2035985-1093-178731693647170/AnsiballZ_stat.py'
Nov 29 07:31:24 compute-2 sudo[209607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:24 compute-2 python3.9[209609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:24 compute-2 sudo[209607]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:25 compute-2 ceph-mon[77138]: pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:25 compute-2 ceph-mon[77138]: pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:25 compute-2 ceph-mon[77138]: pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:25 compute-2 sudo[209613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:25 compute-2 sudo[209613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:25 compute-2 sudo[209613]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:25 compute-2 sudo[209638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:25 compute-2 sudo[209638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:25 compute-2 sudo[209638]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:26 compute-2 sudo[209663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:26 compute-2 sudo[209663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:26 compute-2 sudo[209663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:26 compute-2 sudo[209688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:31:26 compute-2 sudo[209688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:26 compute-2 sudo[209688]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:26 compute-2 ceph-mon[77138]: pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:27 compute-2 sudo[209830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjmdtjongkawsdntrjwcadfnyszcugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401484.2035985-1093-178731693647170/AnsiballZ_file.py'
Nov 29 07:31:27 compute-2 sudo[209830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:27 compute-2 podman[209791]: 2025-11-29 07:31:27.086540552 +0000 UTC m=+0.119353851 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:31:27 compute-2 python3.9[209840]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:27 compute-2 sudo[209830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 07:31:27 compute-2 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 07:31:27 compute-2 sudo[209998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdrugpllfourbelzpcdwpawnkvqkwwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401487.527783-1129-116574385041648/AnsiballZ_systemd.py'
Nov 29 07:31:27 compute-2 sudo[209998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:28 compute-2 python3.9[210000]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:31:28 compute-2 systemd[1]: Reloading.
Nov 29 07:31:28 compute-2 systemd-rc-local-generator[210028]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:31:28 compute-2 systemd-sysv-generator[210031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:31:28 compute-2 ceph-mon[77138]: pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:31:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:28 compute-2 sudo[209998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:28.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:29 compute-2 sudo[210188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dliovnhpyqvqjsqijfmmjokmkqugiiep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401488.9526987-1153-249987920929330/AnsiballZ_stat.py'
Nov 29 07:31:29 compute-2 sudo[210188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:29 compute-2 python3.9[210190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:29 compute-2 sudo[210188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:30 compute-2 sudo[210266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspkiasgcnjhvecsehaxwpjojfwcdcnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401488.9526987-1153-249987920929330/AnsiballZ_file.py'
Nov 29 07:31:30 compute-2 sudo[210266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:30 compute-2 python3.9[210268]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:30 compute-2 sudo[210266]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:30 compute-2 ceph-mon[77138]: pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:30 compute-2 sudo[210418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfugfhroyvjhzgqxiosgeflosrbuztmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401490.5515583-1189-177177444431021/AnsiballZ_stat.py'
Nov 29 07:31:30 compute-2 sudo[210418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:31 compute-2 python3.9[210420]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:31 compute-2 sudo[210418]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:31 compute-2 sudo[210497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irntwofmgxxlmofzenpjbjndzttgyiau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401490.5515583-1189-177177444431021/AnsiballZ_file.py'
Nov 29 07:31:31 compute-2 sudo[210497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:31 compute-2 python3.9[210499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:31 compute-2 sudo[210497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:31 compute-2 ceph-mon[77138]: pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:31:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 3047 writes, 16K keys, 3047 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 3046 writes, 3046 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1172 writes, 5501 keys, 1172 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s
                                           Interval WAL: 1171 writes, 1171 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     65.5      0.29              0.07         7    0.041       0      0       0.0       0.0
                                             L6      1/0    7.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.7     98.4     80.1      0.64              0.20         6    0.107     26K   3307       0.0       0.0
                                            Sum      1/0    7.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     67.8     75.5      0.93              0.26        13    0.072     26K   3307       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     79.9     80.0      0.43              0.16         6    0.072     14K   2015       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     98.4     80.1      0.64              0.20         6    0.107     26K   3307       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.9      0.20              0.07         6    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.9 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 2.67 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(115,2.42 MB,0.795249%) FilterBlock(13,83.98 KB,0.0269789%) IndexBlock(13,169.55 KB,0.0544648%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:31:32 compute-2 sudo[210649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqyacjkxtfilmmhegssovwrnyotufjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401492.0708673-1225-122778131746923/AnsiballZ_systemd.py'
Nov 29 07:31:32 compute-2 sudo[210649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:32.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:32 compute-2 python3.9[210651]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:31:32 compute-2 systemd[1]: Reloading.
Nov 29 07:31:32 compute-2 systemd-rc-local-generator[210679]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:31:32 compute-2 systemd-sysv-generator[210683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:31:33 compute-2 systemd[1]: Starting Create netns directory...
Nov 29 07:31:33 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:31:33 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:31:33 compute-2 systemd[1]: Finished Create netns directory.
Nov 29 07:31:34 compute-2 sudo[210649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:34 compute-2 ceph-mon[77138]: pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:34.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:35 compute-2 sudo[210844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lofkemqpjzylvkndudjlgsxjqeyjhjpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401494.874536-1254-170989802557688/AnsiballZ_file.py'
Nov 29 07:31:35 compute-2 sudo[210844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:35 compute-2 python3.9[210846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:35 compute-2 sudo[210844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:36 compute-2 sudo[210996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqplohfzooreclheuapucmrqxtkcddye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401495.7837448-1279-70653468620442/AnsiballZ_stat.py'
Nov 29 07:31:36 compute-2 sudo[210996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:36 compute-2 python3.9[210998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:36 compute-2 sudo[210996]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:36 compute-2 ceph-mon[77138]: pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000033s ======
Nov 29 07:31:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:36.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Nov 29 07:31:36 compute-2 sudo[211119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqeowaumgibwwkhqahaeltrpncrnfbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401495.7837448-1279-70653468620442/AnsiballZ_copy.py'
Nov 29 07:31:36 compute-2 sudo[211119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:37 compute-2 python3.9[211121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401495.7837448-1279-70653468620442/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:37 compute-2 sudo[211119]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:38 compute-2 ceph-mon[77138]: pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:38 compute-2 sudo[211272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjtmeikxevnccozzdxehprhqwwbogko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401497.7276504-1331-208385660105/AnsiballZ_file.py'
Nov 29 07:31:38 compute-2 sudo[211272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:38 compute-2 python3.9[211274]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:31:38 compute-2 sudo[211272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:38.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:39 compute-2 sudo[211425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgznaszrdqkbzameolyipfixrdfdhttw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401498.7588978-1353-64953397556918/AnsiballZ_stat.py'
Nov 29 07:31:39 compute-2 sudo[211425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:39 compute-2 python3.9[211427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:39 compute-2 sudo[211425]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:39 compute-2 sudo[211548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idufwdwgeloqzfvaczcpazhxwadnpkim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401498.7588978-1353-64953397556918/AnsiballZ_copy.py'
Nov 29 07:31:39 compute-2 sudo[211548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:40 compute-2 podman[211550]: 2025-11-29 07:31:40.082628008 +0000 UTC m=+0.083748029 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:31:40 compute-2 python3.9[211551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401498.7588978-1353-64953397556918/.source.json _original_basename=.o6i10y14 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:40 compute-2 sudo[211548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:40 compute-2 ceph-mon[77138]: pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:40 compute-2 sudo[211719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymejvgiutapyhfaoucxtfwdzeczksrtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401500.5041623-1398-75550541065349/AnsiballZ_file.py'
Nov 29 07:31:40 compute-2 sudo[211719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:41 compute-2 python3.9[211721]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:41 compute-2 sudo[211719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:41 compute-2 sudo[211872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgvhdzvuplmwiziwokdpcxwfbmsrylk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401501.350392-1422-8952385209760/AnsiballZ_stat.py'
Nov 29 07:31:41 compute-2 sudo[211872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:41 compute-2 sudo[211872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:42 compute-2 ceph-mon[77138]: pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:42 compute-2 sudo[211995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdkgwhwyqruztkszbsvukafpmwyadsma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401501.350392-1422-8952385209760/AnsiballZ_copy.py'
Nov 29 07:31:42 compute-2 sudo[211995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:42 compute-2 sudo[211995]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:43 compute-2 sudo[212148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jifmlsyqlynglbjmqxltpwxopjulsbjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401503.220341-1474-56643936082354/AnsiballZ_container_config_data.py'
Nov 29 07:31:43 compute-2 sudo[212148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:43 compute-2 python3.9[212150]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 07:31:43 compute-2 sudo[212148]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:44.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:45 compute-2 sudo[212301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxobwhtvgjmnhdbrnotdyuhwjqpdnud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401505.2152467-1501-73338290683769/AnsiballZ_container_config_hash.py'
Nov 29 07:31:45 compute-2 sudo[212301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:45 compute-2 python3.9[212303]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:31:45 compute-2 sudo[212301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:31:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:46.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:31:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:48.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:50.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:31:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:50.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:31:51 compute-2 sshd-session[212331]: Invalid user sol from 45.148.10.240 port 41428
Nov 29 07:31:51 compute-2 sshd-session[212331]: Connection closed by invalid user sol 45.148.10.240 port 41428 [preauth]
Nov 29 07:31:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:52.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos active c 1005..1671) lease_timeout -- calling new election
Nov 29 07:31:52 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:31:52 compute-2 ceph-mon[77138]: paxos.1).electionLogic(28) init, last seen epoch 28
Nov 29 07:31:52 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:31:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:54.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:54 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:31:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:56.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:57 compute-2 podman[212340]: 2025-11-29 07:31:57.755594089 +0000 UTC m=+0.155739727 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 07:31:57 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:31:58 compute-2 sudo[212488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwithbbeunxtefsyjfvrteqnduywugqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401517.629399-1528-179782527432151/AnsiballZ_podman_container_info.py'
Nov 29 07:31:58 compute-2 sudo[212488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:58 compute-2 python3.9[212490]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:31:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:31:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:58.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:31:58 compute-2 sudo[212488]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:58 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:31:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:31:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:31:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:31:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:31:58 compute-2 sudo[212542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:58 compute-2 sudo[212542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:58 compute-2 sudo[212542]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:58 compute-2 sudo[212567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:58 compute-2 sudo[212567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:58 compute-2 sudo[212567]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:59 compute-2 sudo[212593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:59 compute-2 sudo[212593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:59 compute-2 sudo[212593]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:59 compute-2 sudo[212618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:31:59 compute-2 sudo[212618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:59 compute-2 sudo[212618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:31:59 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:31:59 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:31:59 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:31:59 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:31:59 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:31:59 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 22m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:31:59 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:31:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:31:59 compute-2 ceph-mon[77138]: pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:00 compute-2 sudo[212768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whiabkfdfpknedfxlewknvzkjxztkeam ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401519.662869-1566-85167656346833/AnsiballZ_edpm_container_manage.py'
Nov 29 07:32:00 compute-2 sudo[212768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:00 compute-2 python3[212770]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:32:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:00.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:02 compute-2 podman[212783]: 2025-11-29 07:32:02.054883903 +0000 UTC m=+1.480436557 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:32:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:02 compute-2 ceph-mon[77138]: pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:02 compute-2 podman[212841]: 2025-11-29 07:32:02.231044631 +0000 UTC m=+0.031863095 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:32:02 compute-2 podman[212841]: 2025-11-29 07:32:02.502028495 +0000 UTC m=+0.302846959 container create 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:32:02 compute-2 python3[212770]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:32:02 compute-2 sudo[212768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:02.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:32:03.274 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:32:03.275 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:32:03.275 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:32:03 compute-2 sudo[213030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdnzcgkymsdqrufhdeonzrxklxwotxtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401523.0628066-1591-85012152471814/AnsiballZ_stat.py'
Nov 29 07:32:03 compute-2 sudo[213030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:03 compute-2 python3.9[213032]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:32:03 compute-2 sudo[213030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:03 compute-2 ceph-mon[77138]: pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:04 compute-2 sudo[213184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvppoxtnbixgfrfwdslxcngkknlzdnwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401524.2089818-1617-75611266578907/AnsiballZ_file.py'
Nov 29 07:32:04 compute-2 sudo[213184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:04.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:04 compute-2 python3.9[213186]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:04 compute-2 sudo[213184]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:05 compute-2 sudo[213261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghakhwwszyncbgsfqjcavruwtkbgnpba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401524.2089818-1617-75611266578907/AnsiballZ_stat.py'
Nov 29 07:32:05 compute-2 sudo[213261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:05 compute-2 python3.9[213263]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:32:05 compute-2 sudo[213261]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:05 compute-2 ceph-mon[77138]: pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:06 compute-2 sudo[213412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkwmcnmbzprbaipuopcasotnfxmbnem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401525.5442467-1617-106170438343969/AnsiballZ_copy.py'
Nov 29 07:32:06 compute-2 sudo[213412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:06 compute-2 python3.9[213414]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401525.5442467-1617-106170438343969/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:06 compute-2 sudo[213412]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:06 compute-2 sudo[213488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjjifqqpnntxeejgkhwiuqzvufxjpwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401525.5442467-1617-106170438343969/AnsiballZ_systemd.py'
Nov 29 07:32:06 compute-2 sudo[213488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:06.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:06 compute-2 python3.9[213490]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:32:07 compute-2 systemd[1]: Reloading.
Nov 29 07:32:07 compute-2 systemd-rc-local-generator[213517]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:07 compute-2 systemd-sysv-generator[213522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:07 compute-2 sudo[213488]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:07 compute-2 sudo[213600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czebjitwyzkptrsbumqhocbtxxukwrpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401525.5442467-1617-106170438343969/AnsiballZ_systemd.py'
Nov 29 07:32:07 compute-2 sudo[213600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:08 compute-2 python3.9[213602]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:08 compute-2 systemd[1]: Reloading.
Nov 29 07:32:08 compute-2 systemd-rc-local-generator[213630]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:08 compute-2 systemd-sysv-generator[213635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:08 compute-2 ceph-mon[77138]: pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:08 compute-2 systemd[1]: Starting multipathd container...
Nov 29 07:32:08 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:32:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff139cd29075d2c46196ef85312ece696b123646cd0040760d8e4e186c3aaaf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff139cd29075d2c46196ef85312ece696b123646cd0040760d8e4e186c3aaaf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:08 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706.
Nov 29 07:32:08 compute-2 podman[213642]: 2025-11-29 07:32:08.627897667 +0000 UTC m=+0.167994562 container init 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 07:32:08 compute-2 multipathd[213657]: + sudo -E kolla_set_configs
Nov 29 07:32:08 compute-2 podman[213642]: 2025-11-29 07:32:08.660666159 +0000 UTC m=+0.200763014 container start 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:32:08 compute-2 podman[213642]: multipathd
Nov 29 07:32:08 compute-2 systemd[1]: Started multipathd container.
Nov 29 07:32:08 compute-2 sudo[213664]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:32:08 compute-2 sudo[213664]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:32:08 compute-2 sudo[213664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:32:08 compute-2 sudo[213600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-2 podman[213663]: 2025-11-29 07:32:08.755355672 +0000 UTC m=+0.074477218 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 07:32:08 compute-2 systemd[1]: 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-549c1531f7250f15.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:32:08 compute-2 systemd[1]: 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-549c1531f7250f15.service: Failed with result 'exit-code'.
Nov 29 07:32:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:08.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:08 compute-2 multipathd[213657]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:32:08 compute-2 multipathd[213657]: INFO:__main__:Validating config file
Nov 29 07:32:08 compute-2 multipathd[213657]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:32:08 compute-2 multipathd[213657]: INFO:__main__:Writing out command to execute
Nov 29 07:32:08 compute-2 sudo[213664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-2 multipathd[213657]: ++ cat /run_command
Nov 29 07:32:08 compute-2 multipathd[213657]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:32:08 compute-2 multipathd[213657]: + ARGS=
Nov 29 07:32:08 compute-2 multipathd[213657]: + sudo kolla_copy_cacerts
Nov 29 07:32:08 compute-2 sudo[213704]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:32:08 compute-2 sudo[213704]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:32:08 compute-2 sudo[213704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:32:08 compute-2 sudo[213704]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-2 multipathd[213657]: + [[ ! -n '' ]]
Nov 29 07:32:08 compute-2 multipathd[213657]: + . kolla_extend_start
Nov 29 07:32:08 compute-2 multipathd[213657]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:32:08 compute-2 multipathd[213657]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:32:08 compute-2 multipathd[213657]: + umask 0022
Nov 29 07:32:08 compute-2 multipathd[213657]: + exec /usr/sbin/multipathd -d
Nov 29 07:32:08 compute-2 multipathd[213657]: 4495.551164 | --------start up--------
Nov 29 07:32:08 compute-2 multipathd[213657]: 4495.551352 | read /etc/multipath.conf
Nov 29 07:32:08 compute-2 multipathd[213657]: 4495.560994 | path checkers start up
Nov 29 07:32:10 compute-2 python3.9[213847]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:32:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:10 compute-2 sudo[214009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voyxnimtvnzmyfdofetvgiwxpkxbrxzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401530.2739491-1725-100247173778400/AnsiballZ_command.py'
Nov 29 07:32:10 compute-2 sudo[214009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:10 compute-2 podman[213973]: 2025-11-29 07:32:10.690360923 +0000 UTC m=+0.078689859 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Nov 29 07:32:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:10 compute-2 python3.9[214019]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:32:11 compute-2 sudo[214009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:14 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:32:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:14.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:15 compute-2 sudo[214185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enmypgbytejjxbbuhrduqpomnjvizpjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401534.700152-1750-160562828003387/AnsiballZ_systemd.py'
Nov 29 07:32:15 compute-2 sudo[214185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:15 compute-2 python3.9[214187]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:15 compute-2 systemd[1]: Stopping multipathd container...
Nov 29 07:32:15 compute-2 multipathd[213657]: 4502.348094 | exit (signal)
Nov 29 07:32:15 compute-2 multipathd[213657]: 4502.349565 | --------shut down-------
Nov 29 07:32:15 compute-2 systemd[1]: libpod-7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706.scope: Deactivated successfully.
Nov 29 07:32:15 compute-2 podman[214191]: 2025-11-29 07:32:15.675896369 +0000 UTC m=+0.120096698 container stop 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:32:15 compute-2 podman[214191]: 2025-11-29 07:32:15.705586967 +0000 UTC m=+0.149787316 container died 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Nov 29 07:32:15 compute-2 systemd[1]: 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-549c1531f7250f15.timer: Deactivated successfully.
Nov 29 07:32:15 compute-2 systemd[1]: Stopped /usr/bin/podman healthcheck run 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706.
Nov 29 07:32:15 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-userdata-shm.mount: Deactivated successfully.
Nov 29 07:32:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-3ff139cd29075d2c46196ef85312ece696b123646cd0040760d8e4e186c3aaaf-merged.mount: Deactivated successfully.
Nov 29 07:32:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:15 compute-2 podman[214191]: 2025-11-29 07:32:15.834724665 +0000 UTC m=+0.278925014 container cleanup 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:32:15 compute-2 podman[214191]: multipathd
Nov 29 07:32:15 compute-2 ceph-mon[77138]: pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:15 compute-2 podman[214221]: multipathd
Nov 29 07:32:15 compute-2 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 07:32:15 compute-2 systemd[1]: Stopped multipathd container.
Nov 29 07:32:15 compute-2 systemd[1]: Starting multipathd container...
Nov 29 07:32:16 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:32:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff139cd29075d2c46196ef85312ece696b123646cd0040760d8e4e186c3aaaf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff139cd29075d2c46196ef85312ece696b123646cd0040760d8e4e186c3aaaf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:16 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706.
Nov 29 07:32:16 compute-2 podman[214234]: 2025-11-29 07:32:16.175984276 +0000 UTC m=+0.188952970 container init 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:32:16 compute-2 multipathd[214250]: + sudo -E kolla_set_configs
Nov 29 07:32:16 compute-2 sudo[214256]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:32:16 compute-2 sudo[214256]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:32:16 compute-2 sudo[214256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:32:16 compute-2 podman[214234]: 2025-11-29 07:32:16.253905543 +0000 UTC m=+0.266874207 container start 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 07:32:16 compute-2 podman[214234]: multipathd
Nov 29 07:32:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:16 compute-2 systemd[1]: Started multipathd container.
Nov 29 07:32:16 compute-2 sudo[214185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:16 compute-2 multipathd[214250]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:32:16 compute-2 multipathd[214250]: INFO:__main__:Validating config file
Nov 29 07:32:16 compute-2 multipathd[214250]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:32:16 compute-2 multipathd[214250]: INFO:__main__:Writing out command to execute
Nov 29 07:32:16 compute-2 sudo[214256]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:16 compute-2 multipathd[214250]: ++ cat /run_command
Nov 29 07:32:16 compute-2 multipathd[214250]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:32:16 compute-2 multipathd[214250]: + ARGS=
Nov 29 07:32:16 compute-2 multipathd[214250]: + sudo kolla_copy_cacerts
Nov 29 07:32:16 compute-2 podman[214257]: 2025-11-29 07:32:16.370044945 +0000 UTC m=+0.096760897 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd)
Nov 29 07:32:16 compute-2 sudo[214279]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:32:16 compute-2 sudo[214279]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:32:16 compute-2 sudo[214279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:32:16 compute-2 sudo[214279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:16 compute-2 systemd[1]: 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-3ceb6c2a774a4769.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:32:16 compute-2 multipathd[214250]: + [[ ! -n '' ]]
Nov 29 07:32:16 compute-2 multipathd[214250]: + . kolla_extend_start
Nov 29 07:32:16 compute-2 multipathd[214250]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:32:16 compute-2 multipathd[214250]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:32:16 compute-2 multipathd[214250]: + umask 0022
Nov 29 07:32:16 compute-2 multipathd[214250]: + exec /usr/sbin/multipathd -d
Nov 29 07:32:16 compute-2 systemd[1]: 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706-3ceb6c2a774a4769.service: Failed with result 'exit-code'.
Nov 29 07:32:16 compute-2 multipathd[214250]: 4503.105587 | --------start up--------
Nov 29 07:32:16 compute-2 multipathd[214250]: 4503.105612 | read /etc/multipath.conf
Nov 29 07:32:16 compute-2 multipathd[214250]: 4503.113472 | path checkers start up
Nov 29 07:32:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:17 compute-2 ceph-mon[77138]: pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:17 compute-2 ceph-mon[77138]: pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:17 compute-2 ceph-mon[77138]: pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:18.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:18 compute-2 sudo[214441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maayxfkmycilbuxcrjnxpihyfubicfck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401538.1116843-1774-235834902887293/AnsiballZ_file.py'
Nov 29 07:32:18 compute-2 sudo[214441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:18 compute-2 python3.9[214443]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:18 compute-2 sudo[214441]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:32:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5786 writes, 24K keys, 5786 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5786 writes, 918 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 435 writes, 678 keys, 435 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 435 writes, 207 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:32:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:18 compute-2 ceph-mon[77138]: pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:19 compute-2 sudo[214468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:19 compute-2 sudo[214468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:19 compute-2 sudo[214468]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:19 compute-2 sudo[214494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:19 compute-2 sudo[214494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:19 compute-2 sudo[214494]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:19 compute-2 sudo[214644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhrhfyblvjucyyutridwowciinqbijsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401539.4296167-1810-133783163476157/AnsiballZ_file.py'
Nov 29 07:32:19 compute-2 sudo[214644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:20 compute-2 ceph-mon[77138]: pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:20 compute-2 python3.9[214646]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:32:20 compute-2 sudo[214644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:20 compute-2 sudo[214796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmydergcvlvbuxkzirodwmvuwghrsyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401540.4062223-1833-218239005844516/AnsiballZ_modprobe.py'
Nov 29 07:32:20 compute-2 sudo[214796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:21 compute-2 python3.9[214798]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 07:32:21 compute-2 kernel: Key type psk registered
Nov 29 07:32:21 compute-2 sudo[214796]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:21 compute-2 sudo[214962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcgkquvkqwwlbqqdfgarausxvktvkqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401541.4105837-1858-228099182622234/AnsiballZ_stat.py'
Nov 29 07:32:21 compute-2 sudo[214962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:21 compute-2 python3.9[214964]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:21 compute-2 sudo[214962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:22 compute-2 ceph-mon[77138]: pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:22.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:22 compute-2 sudo[215085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sepnlffxernkavqnbuwvzwbhkmebmyne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401541.4105837-1858-228099182622234/AnsiballZ_copy.py'
Nov 29 07:32:22 compute-2 sudo[215085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:22 compute-2 python3.9[215087]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401541.4105837-1858-228099182622234/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:22 compute-2 sudo[215085]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:22.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:23 compute-2 sudo[215238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uylaupcumqhwfcujoarhwhvhkrsuthlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401543.0715146-1906-45693530977053/AnsiballZ_lineinfile.py'
Nov 29 07:32:23 compute-2 sudo[215238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:23 compute-2 python3.9[215240]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:23 compute-2 sudo[215238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:24 compute-2 ceph-mon[77138]: pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:24.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:24 compute-2 sudo[215390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaphyqdwjzoqaylcuqpfndswapjbaxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401544.0674767-1931-207524586362999/AnsiballZ_systemd.py'
Nov 29 07:32:24 compute-2 sudo[215390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:24.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:24 compute-2 python3.9[215392]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:24 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:32:24 compute-2 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:32:24 compute-2 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:32:24 compute-2 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:32:24 compute-2 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:32:24 compute-2 sudo[215390]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:25 compute-2 ceph-mon[77138]: pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:25 compute-2 sudo[215547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geffwlcrazaavuqbtmvfksyxrxflkghw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401545.3147879-1953-32530681702390/AnsiballZ_dnf.py'
Nov 29 07:32:25 compute-2 sudo[215547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:25 compute-2 python3.9[215549]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:32:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:26.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:28 compute-2 podman[215555]: 2025-11-29 07:32:28.738857913 +0000 UTC m=+0.132956809 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:32:28 compute-2 systemd[1]: Reloading.
Nov 29 07:32:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:28.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:28 compute-2 systemd-sysv-generator[215615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:28 compute-2 systemd-rc-local-generator[215611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:29 compute-2 systemd[1]: Reloading.
Nov 29 07:32:29 compute-2 systemd-sysv-generator[215650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:29 compute-2 systemd-rc-local-generator[215647]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:29 compute-2 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 07:32:29 compute-2 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 07:32:29 compute-2 lvm[215693]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:32:29 compute-2 lvm[215693]: VG ceph_vg0 finished
Nov 29 07:32:29 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:32:29 compute-2 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:32:29 compute-2 systemd[1]: Reloading.
Nov 29 07:32:29 compute-2 systemd-rc-local-generator[215744]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:30 compute-2 systemd-sysv-generator[215748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:30 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:32:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:30 compute-2 sudo[215547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:31 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:32:31 compute-2 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:32:31 compute-2 systemd[1]: man-db-cache-update.service: Consumed 2.168s CPU time.
Nov 29 07:32:31 compute-2 systemd[1]: run-r943e1a3b132a439dafef985604fc6967.service: Deactivated successfully.
Nov 29 07:32:31 compute-2 ceph-mon[77138]: pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:32.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:32.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:34.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:37 compute-2 ceph-mon[77138]: pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:37 compute-2 ceph-mon[77138]: pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:38.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:32:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:39 compute-2 sudo[216912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:39 compute-2 sudo[216912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:39 compute-2 sudo[216912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:39 compute-2 sudo[216937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:39 compute-2 sudo[216937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:39 compute-2 sudo[216937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:41 compute-2 ceph-mon[77138]: pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:41 compute-2 ceph-mon[77138]: pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:41 compute-2 ceph-mon[77138]: pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:41 compute-2 sudo[217103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswzjouisuaijekouqqssktzohifxqgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401561.242944-1978-238433221147229/AnsiballZ_systemd_service.py'
Nov 29 07:32:41 compute-2 sudo[217103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:41 compute-2 podman[217062]: 2025-11-29 07:32:41.726623958 +0000 UTC m=+0.120153468 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 07:32:41 compute-2 python3.9[217110]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:42 compute-2 systemd[1]: Stopping Open-iSCSI...
Nov 29 07:32:42 compute-2 iscsid[204436]: iscsid shutting down.
Nov 29 07:32:42 compute-2 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 07:32:42 compute-2 systemd[1]: Stopped Open-iSCSI.
Nov 29 07:32:42 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:32:42 compute-2 systemd[1]: Starting Open-iSCSI...
Nov 29 07:32:42 compute-2 systemd[1]: Started Open-iSCSI.
Nov 29 07:32:42 compute-2 sudo[217103]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:42.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:43 compute-2 python3.9[217264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:32:43 compute-2 ceph-mon[77138]: pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:43 compute-2 ceph-mon[77138]: pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:44 compute-2 sudo[217419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwtgtrmtpljdkberzazobnvpmzkgewbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401563.7609751-2029-137059932481182/AnsiballZ_file.py'
Nov 29 07:32:44 compute-2 sudo[217419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:44.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:44 compute-2 python3.9[217421]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:44 compute-2 sudo[217419]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:44 compute-2 ceph-mon[77138]: pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:45 compute-2 sudo[217572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udckzoyhqkfqmaqlsszpxpqzykxcemee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401565.0339575-2063-226454325155766/AnsiballZ_systemd_service.py'
Nov 29 07:32:45 compute-2 sudo[217572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:45 compute-2 python3.9[217574]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:32:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:45 compute-2 systemd[1]: Reloading.
Nov 29 07:32:45 compute-2 systemd-sysv-generator[217605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:45 compute-2 systemd-rc-local-generator[217601]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:46 compute-2 sudo[217572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:46 compute-2 ceph-mon[77138]: pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:46 compute-2 podman[217634]: 2025-11-29 07:32:46.708409371 +0000 UTC m=+0.101400342 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:32:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:47 compute-2 python3.9[217781]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:32:47 compute-2 network[217798]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:32:47 compute-2 network[217799]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:32:47 compute-2 network[217800]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:32:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:49 compute-2 ceph-mon[77138]: pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:50 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 07:32:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:50 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 07:32:50 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 07:32:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 07:32:52 compute-2 ceph-mon[77138]: pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:32:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 07:32:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 07:32:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:52.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:52 compute-2 sudo[218075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joqpnisavosrxlmuryrwmfubfbbcrfgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401572.2636874-2120-21113161762835/AnsiballZ_systemd_service.py'
Nov 29 07:32:52 compute-2 sudo[218075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:52 compute-2 python3.9[218077]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:53 compute-2 sudo[218075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:53 compute-2 sudo[218229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqdkdkwabndzpdyckaufnkfsemtvxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401573.2388165-2120-122315806455985/AnsiballZ_systemd_service.py'
Nov 29 07:32:53 compute-2 sudo[218229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:53 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 07:32:53 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 07:32:54 compute-2 ceph-mon[77138]: pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:54 compute-2 python3.9[218231]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:54 compute-2 sudo[218229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:54.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:54 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 07:32:54 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 07:32:54 compute-2 sudo[218382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gscrtjlcpwguhijpklhgkeoyrtqykbel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401574.402834-2120-91353836221155/AnsiballZ_systemd_service.py'
Nov 29 07:32:54 compute-2 sudo[218382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:55 compute-2 python3.9[218384]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:55 compute-2 sudo[218382]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:55 compute-2 sudo[218536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgskkpizbgbslhavkcmoxhmxvavaates ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401575.315239-2120-48438000889694/AnsiballZ_systemd_service.py'
Nov 29 07:32:55 compute-2 sudo[218536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:32:56 compute-2 python3.9[218538]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:56 compute-2 sudo[218536]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:32:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:56.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:32:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:58 compute-2 sudo[218690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuaaqnigzlaauxfhvrvscpkmxaqywrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401576.2639222-2120-152179791679221/AnsiballZ_systemd_service.py'
Nov 29 07:32:58 compute-2 sudo[218690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:58 compute-2 python3.9[218692]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:32:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:32:58 compute-2 ceph-mon[77138]: pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:32:58 compute-2 sudo[218690]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:32:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:32:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:32:59 compute-2 sudo[218844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ildbkxjfbvyqfmsyiczjaheewfjgzzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401578.5718591-2120-157330129046573/AnsiballZ_systemd_service.py'
Nov 29 07:32:59 compute-2 sudo[218844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:59 compute-2 podman[218846]: 2025-11-29 07:32:59.309039096 +0000 UTC m=+0.169255994 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:32:59 compute-2 sudo[218867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:59 compute-2 sudo[218867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:59 compute-2 sudo[218899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:59 compute-2 sudo[218922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:59 compute-2 python3.9[218847]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:32:59 compute-2 sudo[218947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218947]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:59 compute-2 sudo[218975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218975]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:59 compute-2 sudo[218998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:32:59 compute-2 sudo[218998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:59 compute-2 sudo[218998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:00 compute-2 sudo[219194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxkzsjudwsotjodccdmzwxsqfsxqcvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401579.6914394-2120-130474385049671/AnsiballZ_systemd_service.py'
Nov 29 07:33:00 compute-2 sudo[219194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:00 compute-2 ceph-mon[77138]: pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 07:33:00 compute-2 ceph-mon[77138]: pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 07:33:00 compute-2 ceph-mon[77138]: pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 07:33:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:00 compute-2 python3.9[219196]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:33:00 compute-2 sudo[219194]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:00.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:00 compute-2 sudo[219347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eundgnwrkhmmqpfztczypaltmhrjcwpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401580.5679224-2120-75742581317822/AnsiballZ_systemd_service.py'
Nov 29 07:33:00 compute-2 sudo[219347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:01 compute-2 python3.9[219349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:33:01 compute-2 sudo[219347]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:01 compute-2 sudo[219369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:01 compute-2 sudo[219369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:01 compute-2 sudo[219369]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:01 compute-2 sudo[219401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:33:01 compute-2 sudo[219401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:01 compute-2 sudo[219401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:01 compute-2 sudo[219426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:01 compute-2 sudo[219426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:01 compute-2 sudo[219426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:01 compute-2 sudo[219451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:33:01 compute-2 sudo[219451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:33:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:33:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:33:02 compute-2 sudo[219451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:02 compute-2 sudo[219632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncdflpbjitdrnswkkhnuxjxdyecbfyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401581.9148169-2296-238103827640881/AnsiballZ_file.py'
Nov 29 07:33:02 compute-2 sudo[219632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:33:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:02.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:33:02 compute-2 python3.9[219634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:02 compute-2 sudo[219632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:02.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:03 compute-2 sudo[219784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtlsbxyanosfxswsjiperuekipkdezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401582.706188-2296-131814089275049/AnsiballZ_file.py'
Nov 29 07:33:03 compute-2 sudo[219784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:03 compute-2 python3.9[219787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:33:03.276 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:33:03.278 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:33:03.279 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:33:03 compute-2 sudo[219784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:03 compute-2 sudo[219937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyhlawoudgpdvculgnvhuhzduxoznfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401583.4937756-2296-128738888504935/AnsiballZ_file.py'
Nov 29 07:33:03 compute-2 sudo[219937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:03 compute-2 ceph-mon[77138]: pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 07:33:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:33:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:33:04 compute-2 python3.9[219939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:04 compute-2 sudo[219937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:04.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:04 compute-2 sudo[220089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgxanlqbhrxceypzrpdqshmgnzorwaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401584.31463-2296-89437158556224/AnsiballZ_file.py'
Nov 29 07:33:04 compute-2 sudo[220089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:04 compute-2 python3.9[220091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:04 compute-2 sudo[220089]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:05 compute-2 sudo[220242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfnyohtzpopdelaitinjxzjbocnxecy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401585.1350858-2296-228128395675165/AnsiballZ_file.py'
Nov 29 07:33:05 compute-2 sudo[220242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:05 compute-2 python3.9[220244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:05 compute-2 sudo[220242]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:06 compute-2 sudo[220394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdtallnctygnvepscoxtddiaufigfbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401585.912633-2296-206164358511116/AnsiballZ_file.py'
Nov 29 07:33:06 compute-2 sudo[220394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:06.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:06 compute-2 python3.9[220396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:06 compute-2 sudo[220394]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:06.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:07 compute-2 sudo[220547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfymcviajkqkqguebgkdpqqmrgxvnwol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401586.7341962-2296-81883461384332/AnsiballZ_file.py'
Nov 29 07:33:07 compute-2 sudo[220547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:07 compute-2 python3.9[220549]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:07 compute-2 sudo[220547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:33:07 compute-2 ceph-mon[77138]: pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:33:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:33:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:33:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:07 compute-2 sudo[220699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccckucdhhmpihbkpuwgnwydepgqmorq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401587.4659333-2296-16343149299695/AnsiballZ_file.py'
Nov 29 07:33:07 compute-2 sudo[220699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:08 compute-2 python3.9[220701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:08 compute-2 sudo[220699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:08.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:08 compute-2 sudo[220851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lctrgizavorivkmcjuzwkgykhaelsgmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401588.4230816-2468-83946914983908/AnsiballZ_file.py'
Nov 29 07:33:08 compute-2 sudo[220851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:08.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:08 compute-2 ceph-mon[77138]: pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 07:33:08 compute-2 ceph-mon[77138]: pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 07:33:09 compute-2 python3.9[220853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:09 compute-2 sudo[220851]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:09 compute-2 sudo[221004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkrsbljrbfuxknngwhyswuorpbfjnlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401589.197742-2468-135523281637475/AnsiballZ_file.py'
Nov 29 07:33:09 compute-2 sudo[221004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:09 compute-2 python3.9[221006]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:09 compute-2 sudo[221004]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:10.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:10 compute-2 sudo[221156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbupzirnsuftyjlgmaswdlipreaomusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401590.0635514-2468-238487176994989/AnsiballZ_file.py'
Nov 29 07:33:10 compute-2 sudo[221156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:10 compute-2 python3.9[221158]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:10 compute-2 sudo[221156]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:10.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:11 compute-2 sudo[221309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqbasmxvjgolqblnoercencypzpinwnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401590.7965264-2468-159629659612457/AnsiballZ_file.py'
Nov 29 07:33:11 compute-2 sudo[221309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:11 compute-2 python3.9[221311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:11 compute-2 sudo[221309]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:11 compute-2 ceph-mon[77138]: pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 07:33:11 compute-2 sudo[221474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuajpasyhczgzvrrjofnippvdpuuoyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401591.570666-2468-162997676042847/AnsiballZ_file.py'
Nov 29 07:33:11 compute-2 sudo[221474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:11 compute-2 podman[221435]: 2025-11-29 07:33:11.994467896 +0000 UTC m=+0.102466966 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:33:12 compute-2 python3.9[221480]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:12 compute-2 sudo[221474]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:12 compute-2 ceph-mon[77138]: pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 07:33:12 compute-2 sudo[221632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofszcicqwtjcsieylxadvvutrryxpplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401592.3745816-2468-64866998699440/AnsiballZ_file.py'
Nov 29 07:33:12 compute-2 sudo[221632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:12.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:13 compute-2 python3.9[221634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:13 compute-2 sudo[221632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:13 compute-2 sudo[221785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfwswggsksvuqzllmwlrgirosibinfms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401593.232083-2468-193992713616060/AnsiballZ_file.py'
Nov 29 07:33:13 compute-2 sudo[221785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:13 compute-2 python3.9[221787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:13 compute-2 sudo[221785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:14 compute-2 ceph-mon[77138]: pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 07:33:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:14 compute-2 sudo[221937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjuasweltsdjvbgutwpxqbbcwwkemjwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401594.0310822-2468-257840268111637/AnsiballZ_file.py'
Nov 29 07:33:14 compute-2 sudo[221937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:14 compute-2 python3.9[221939]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:14 compute-2 sudo[221937]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:14.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:15 compute-2 sudo[222090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rngkbmmnzfsucjitjdyfuluadjphxzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401595.137847-2642-4999029879115/AnsiballZ_command.py'
Nov 29 07:33:15 compute-2 sudo[222090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:15 compute-2 python3.9[222092]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:15 compute-2 sudo[222090]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:16.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:16 compute-2 python3.9[222244]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:33:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:17 compute-2 sudo[222414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqscsgyqhpogmqaloolpfqygezfjgeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401597.3019786-2696-209337219205726/AnsiballZ_systemd_service.py'
Nov 29 07:33:17 compute-2 sudo[222414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:17 compute-2 podman[222362]: 2025-11-29 07:33:17.72910593 +0000 UTC m=+0.129018526 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 07:33:18 compute-2 python3.9[222417]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:33:18 compute-2 systemd[1]: Reloading.
Nov 29 07:33:18 compute-2 systemd-rc-local-generator[222443]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:33:18 compute-2 systemd-sysv-generator[222448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:33:18 compute-2 sudo[222414]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:18.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:19 compute-2 sudo[222478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:19 compute-2 sudo[222478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:19 compute-2 sudo[222478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:19 compute-2 sudo[222503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:19 compute-2 sudo[222503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:19 compute-2 sudo[222503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:20 compute-2 sudo[222653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkdjaybiuxgjmtymlgfzgvljplbgqhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401600.126276-2720-166988922951492/AnsiballZ_command.py'
Nov 29 07:33:20 compute-2 sudo[222653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:20 compute-2 python3.9[222655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:20 compute-2 sudo[222653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:20.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:21 compute-2 sudo[222807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujtrkwujkexunhztsgswnuylwhaxiau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401600.9678175-2720-280341096113129/AnsiballZ_command.py'
Nov 29 07:33:21 compute-2 sudo[222807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:21 compute-2 python3.9[222809]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:21 compute-2 ceph-mon[77138]: pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 07:33:21 compute-2 sudo[222807]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:22 compute-2 sudo[222960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycgqtcemylusgxacdyroapgkxjafiigm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401601.7944577-2720-206799126881093/AnsiballZ_command.py'
Nov 29 07:33:22 compute-2 sudo[222960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:22 compute-2 python3.9[222962]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:22 compute-2 sudo[222960]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:22 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:22.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:22 compute-2 sudo[223113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcowdkbuutsamsjbtxozjhycjeqsweh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401602.5542822-2720-214080235748587/AnsiballZ_command.py'
Nov 29 07:33:22 compute-2 sudo[223113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:23 compute-2 python3.9[223115]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:23 compute-2 sudo[223113]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:23 compute-2 sudo[223267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfspgzwswhqajnhmapcbavzjjlwwmoit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401603.2757444-2720-40921724179567/AnsiballZ_command.py'
Nov 29 07:33:23 compute-2 sudo[223267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:23 compute-2 python3.9[223269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:23 compute-2 sudo[223267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:24 compute-2 sudo[223420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndixwffenfeucipjohfqlmbtasqfjzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401604.0569718-2720-49322276885899/AnsiballZ_command.py'
Nov 29 07:33:24 compute-2 sudo[223420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:24 compute-2 python3.9[223422]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:24 compute-2 sudo[223420]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:24.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:25 compute-2 sudo[223574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqgkwknrtsbcxxkiawzpgbvxafuzgqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401604.905241-2720-194984592855000/AnsiballZ_command.py'
Nov 29 07:33:25 compute-2 sudo[223574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:25 compute-2 python3.9[223576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:25 compute-2 sudo[223574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:26 compute-2 sudo[223727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfankihvlhqnuwktcvtqtooickaxdqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401605.6744356-2720-225345275041361/AnsiballZ_command.py'
Nov 29 07:33:26 compute-2 sudo[223727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:26 compute-2 python3.9[223729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:26 compute-2 sudo[223727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:26.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:28 compute-2 sudo[223881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haqwqyctasygnphdcskqbzcnjhaxziab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401607.7095122-2927-261627683990054/AnsiballZ_file.py'
Nov 29 07:33:28 compute-2 sudo[223881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:28 compute-2 python3.9[223883]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:28 compute-2 sudo[223881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:28.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:28 compute-2 sudo[224033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptjfnflppcridguxwjooeliaeincyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401608.5065641-2927-241232537791974/AnsiballZ_file.py'
Nov 29 07:33:28 compute-2 sudo[224033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:28.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:29 compute-2 python3.9[224035]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:29 compute-2 sudo[224033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:29 compute-2 sudo[224201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofsjeacligewblmsoijpiwncoexdedgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401609.3019772-2927-142154461344661/AnsiballZ_file.py'
Nov 29 07:33:29 compute-2 sudo[224201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:29 compute-2 podman[224159]: 2025-11-29 07:33:29.755734636 +0000 UTC m=+0.146866183 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 07:33:29 compute-2 python3.9[224206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:29 compute-2 sudo[224201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:30 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:30.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:32.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:32.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 1005..1735) lease_timeout -- calling new election
Nov 29 07:33:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:34.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:36.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:38.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:39 compute-2 sudo[224242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:39 compute-2 sudo[224242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:39 compute-2 sudo[224242]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:39 compute-2 sudo[224267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:39 compute-2 sudo[224267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:39 compute-2 sudo[224267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:40.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:40 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc MDS connection to Monitors appears to be laggy; 17.9556s since last acked beacon
Nov 29 07:33:40 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:33:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:42.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:42 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:42 compute-2 podman[224293]: 2025-11-29 07:33:42.663545709 +0000 UTC m=+0.059966026 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:33:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:43 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:33:43 compute-2 ceph-mon[77138]: paxos.1).electionLogic(32) init, last seen epoch 32
Nov 29 07:33:43 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:33:43 compute-2 sudo[224438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jypqpgyztxpjkerdvmuhtfaozyaepshs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401623.3915322-2993-220440254372660/AnsiballZ_file.py'
Nov 29 07:33:43 compute-2 sudo[224438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:44 compute-2 python3.9[224440]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:44 compute-2 sudo[224438]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:44 compute-2 sudo[224590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkbfqhjvzfxtmdvvranxmhzchczjlbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401624.303094-2993-101491089488726/AnsiballZ_file.py'
Nov 29 07:33:44 compute-2 sudo[224590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:44 compute-2 python3.9[224592]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:44 compute-2 sudo[224590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:45 compute-2 sudo[224743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjkikglpbvldknaatptaqngzshimrsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401625.1039643-2993-9433830423518/AnsiballZ_file.py'
Nov 29 07:33:45 compute-2 sudo[224743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:45 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:33:45 compute-2 python3.9[224745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:45 compute-2 sudo[224743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:46 compute-2 sudo[224895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhmjdtqayfsfowjodzcolwaloowqdubq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401625.8383067-2993-230407620685142/AnsiballZ_file.py'
Nov 29 07:33:46 compute-2 sudo[224895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:46 compute-2 python3.9[224897]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:46 compute-2 sudo[224895]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:33:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:46.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:47 compute-2 sudo[225047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmkdffsuwrbgvzdrowbgdddwuuvayeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401626.6940143-2993-250910415026403/AnsiballZ_file.py'
Nov 29 07:33:47 compute-2 sudo[225047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:47 compute-2 python3.9[225050]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:47 compute-2 sudo[225047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:47 compute-2 sudo[225213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yneobjzcvqguvlrcstwkxdvzgqtvjxuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401627.50654-2993-157533514059413/AnsiballZ_file.py'
Nov 29 07:33:47 compute-2 sudo[225213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:47 compute-2 podman[225174]: 2025-11-29 07:33:47.906903571 +0000 UTC m=+0.077945058 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:33:48 compute-2 python3.9[225220]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:48 compute-2 sudo[225213]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:48 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:33:48 compute-2 sudo[225370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwjfhteqodjdjrcxpkdhihkarldadzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401628.305094-2993-54694129825062/AnsiballZ_file.py'
Nov 29 07:33:48 compute-2 sudo[225370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:48 compute-2 python3.9[225372]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:48.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:48 compute-2 sudo[225370]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:49 compute-2 ceph-mon[77138]: pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:33:49 compute-2 ceph-mon[77138]: pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:33:49 compute-2 ceph-mon[77138]: pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:33:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:33:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:50.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:50 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:33:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc  MDS is no longer laggy
Nov 29 07:33:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:50.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:52.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v854: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v855: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 15 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v857: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v858: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v859: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v860: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v861: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v862: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v863: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:33:53 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v864: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v865: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:33:53 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:33:53 compute-2 ceph-mon[77138]: pgmap v866: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:33:53 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:33:53 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:33:53 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:33:53 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 24m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:33:53 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:33:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:33:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:33:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:33:56 compute-2 sudo[225526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkdcwyqkkzctyooikbteaauhvptdhmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401635.5256817-3317-115510657481643/AnsiballZ_getent.py'
Nov 29 07:33:56 compute-2 sudo[225526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:56 compute-2 python3.9[225528]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 07:33:56 compute-2 sudo[225526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:56 compute-2 ceph-mon[77138]: pgmap v867: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:33:56 compute-2 ceph-mon[77138]: pgmap v868: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:33:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:56.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:56 compute-2 sudo[225681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjpuylbpgnursggnyenazcscwiuujhxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401636.5452726-3342-81459380722135/AnsiballZ_group.py'
Nov 29 07:33:56 compute-2 sudo[225681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:57 compute-2 sshd-session[225629]: Invalid user sol from 45.148.10.240 port 47572
Nov 29 07:33:57 compute-2 python3.9[225683]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:33:57 compute-2 sshd-session[225629]: Connection closed by invalid user sol 45.148.10.240 port 47572 [preauth]
Nov 29 07:33:57 compute-2 groupadd[225685]: group added to /etc/group: name=nova, GID=42436
Nov 29 07:33:57 compute-2 groupadd[225685]: group added to /etc/gshadow: name=nova
Nov 29 07:33:57 compute-2 groupadd[225685]: new group: name=nova, GID=42436
Nov 29 07:33:57 compute-2 sudo[225681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:33:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:33:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:33:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:33:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:58.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:33:59 compute-2 ceph-mon[77138]: pgmap v869: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:34:00 compute-2 sudo[225818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:00 compute-2 sudo[225872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqmqykzpprhxeplhhunkkpasivljtfkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401639.5398815-3366-210551437722957/AnsiballZ_user.py'
Nov 29 07:34:00 compute-2 sudo[225872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:00 compute-2 sudo[225818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:00 compute-2 sudo[225818]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:00 compute-2 sudo[225881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:00 compute-2 sudo[225881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:00 compute-2 sudo[225881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:00 compute-2 podman[225815]: 2025-11-29 07:34:00.161918968 +0000 UTC m=+0.147246245 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 07:34:00 compute-2 python3.9[225877]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:34:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:00 compute-2 useradd[225918]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 29 07:34:00 compute-2 useradd[225918]: add 'nova' to group 'libvirt'
Nov 29 07:34:00 compute-2 useradd[225918]: add 'nova' to shadow group 'libvirt'
Nov 29 07:34:00 compute-2 sudo[225872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:02.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:34:03.277 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:34:03.279 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:34:03.279 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:34:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:04.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:06 compute-2 ceph-mon[77138]: pgmap v870: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 07:34:06 compute-2 ceph-mon[77138]: pgmap v871: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 07:34:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:06.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:08.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:10 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:10.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:12.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:12.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:13 compute-2 podman[225956]: 2025-11-29 07:34:13.664390139 +0000 UTC m=+0.064660233 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:34:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:14.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:14.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:16.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v872: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v873: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v874: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v875: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v876: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:34:17 compute-2 ceph-mon[77138]: pgmap v877: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:34:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:18 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:18 compute-2 podman[225977]: 2025-11-29 07:34:18.672567407 +0000 UTC m=+0.071590909 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:34:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:18.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:20.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:22.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:22 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:22.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:24.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:26 compute-2 sshd-session[226001]: Accepted publickey for zuul from 192.168.122.30 port 33366 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 07:34:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:26 compute-2 systemd-logind[787]: New session 50 of user zuul.
Nov 29 07:34:26 compute-2 systemd[1]: Started Session 50 of User zuul.
Nov 29 07:34:26 compute-2 sshd-session[226001]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:34:26 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:26 compute-2 sshd-session[226004]: Received disconnect from 192.168.122.30 port 33366:11: disconnected by user
Nov 29 07:34:26 compute-2 sshd-session[226004]: Disconnected from user zuul 192.168.122.30 port 33366
Nov 29 07:34:26 compute-2 sshd-session[226001]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:34:26 compute-2 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 07:34:26 compute-2 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Nov 29 07:34:26 compute-2 systemd-logind[787]: Removed session 50.
Nov 29 07:34:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:26.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:27 compute-2 python3.9[226155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:28 compute-2 python3.9[226276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401666.9559004-3441-245256092462701/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:28 compute-2 python3.9[226426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:29 compute-2 python3.9[226503]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:30 compute-2 python3.9[226653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:30 compute-2 podman[226748]: 2025-11-29 07:34:30.507402687 +0000 UTC m=+0.108523366 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 07:34:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:30.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:30 compute-2 python3.9[226789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401669.5373304-3441-12985900491404/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:30.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:31 compute-2 python3.9[226951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:31 compute-2 python3.9[227072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401670.8582306-3441-274903584211952/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:32 compute-2 python3.9[227222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:32.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:33 compute-2 python3.9[227344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401672.1283715-3441-74764569452787/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:34 compute-2 python3.9[227494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:34 compute-2 python3.9[227615]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401673.5808988-3441-276709167756196/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:34.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:35 compute-2 sudo[227766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlfwbwghdzhtywrwsmopkoaazdcycwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401675.3010185-3689-101286101899236/AnsiballZ_file.py'
Nov 29 07:34:35 compute-2 sudo[227766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:34:35 compute-2 python3.9[227768]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:35 compute-2 sudo[227766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:36 compute-2 sudo[227918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igmgmuyxxxhzakchzspqkzarokstexdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401676.190665-3714-201134246965767/AnsiballZ_copy.py'
Nov 29 07:34:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:36 compute-2 sudo[227918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:36 compute-2 python3.9[227920]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:36 compute-2 sudo[227918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 1005..1747) lease_timeout -- calling new election
Nov 29 07:34:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:38.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:42 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:44.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:44 compute-2 podman[227949]: 2025-11-29 07:34:44.691272798 +0000 UTC m=+0.085484256 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 07:34:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:45 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc MDS connection to Monitors appears to be laggy; 18.9746s since last acked beacon
Nov 29 07:34:45 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:46 compute-2 sshd-session[227969]: banner exchange: Connection from 1.226.83.48 port 42696: invalid format
Nov 29 07:34:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:46.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:34:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:48.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:34:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:49 compute-2 podman[227972]: 2025-11-29 07:34:49.667469841 +0000 UTC m=+0.069589878 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 07:34:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:50.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:50 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:53.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:54.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:54 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:55 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:56.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:34:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:34:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:34:58 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:34:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:34:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:34:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:00 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:00 compute-2 podman[227998]: 2025-11-29 07:35:00.710047365 +0000 UTC m=+0.108636830 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:35:00 compute-2 ceph-mon[77138]: mon.compute-2@1(probing) e3 get_health_metrics reporting 1 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:00 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2[77134]: 2025-11-29T07:35:00.806+0000 7f6a0f01c640 -1 mon.compute-2@1(probing) e3 get_health_metrics reporting 1 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:01.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:01 compute-2 anacron[34256]: Job `cron.daily' started
Nov 29 07:35:01 compute-2 anacron[34256]: Job `cron.daily' terminated
Nov 29 07:35:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:02.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:02 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:35:03.279 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:35:03.281 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:35:03.281 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:35:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:05 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:05 compute-2 ceph-mon[77138]: mon.compute-2@1(probing) e3 get_health_metrics reporting 2 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:05 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2[77134]: 2025-11-29T07:35:05.807+0000 7f6a0f01c640 -1 mon.compute-2@1(probing) e3 get_health_metrics reporting 2 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:06 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:35:06 compute-2 ceph-mon[77138]: paxos.1).electionLogic(36) init, last seen epoch 36
Nov 29 07:35:06 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:06.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:06 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:07 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:07 compute-2 sudo[228155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhjcuflrsqptvpirepsgcdgyvjeqetp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.1192853-3738-269367651198218/AnsiballZ_stat.py'
Nov 29 07:35:07 compute-2 sudo[228155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:07 compute-2 python3.9[228157]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:07 compute-2 sudo[228155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:08 compute-2 sudo[228307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgxxfvlfnemjufdlnmihfbjdkffosvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.9146261-3764-203258518372319/AnsiballZ_stat.py'
Nov 29 07:35:08 compute-2 sudo[228307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:08 compute-2 python3.9[228309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:08 compute-2 sudo[228307]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:08.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:09 compute-2 sudo[228430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjrthqsgcypnzxfqareufjiwyzrtxbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.9146261-3764-203258518372319/AnsiballZ_copy.py'
Nov 29 07:35:09 compute-2 sudo[228430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:09 compute-2 python3.9[228433]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764401707.9146261-3764-203258518372319/.source _original_basename=.brw8556w follow=False checksum=9813f0f40e53c8d51e97d50aa561bc0a09704c52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 07:35:09 compute-2 sudo[228430]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:10 compute-2 python3.9[228585]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:10 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:10.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:10 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 get_health_metrics reporting 3 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:10 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2[77134]: 2025-11-29T07:35:10.806+0000 7f6a0f01c640 -1 mon.compute-2@1(peon) e3 get_health_metrics reporting 3 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:11 compute-2 python3.9[228738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:11 compute-2 python3.9[228859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401710.741499-3840-224427020854638/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:12.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:12 compute-2 python3.9[229009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:35:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:35:13 compute-2 python3.9[229131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401712.1956685-3886-206654648354425/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:14 compute-2 sudo[229282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpyxjjtpiimcyofeynaczvrztwobktyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401714.076819-3936-166599928009205/AnsiballZ_container_config_data.py'
Nov 29 07:35:14 compute-2 sudo[229282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:14.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:14 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:14 compute-2 python3.9[229284]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 07:35:14 compute-2 sudo[229282]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:15 compute-2 podman[229409]: 2025-11-29 07:35:15.406718438 +0000 UTC m=+0.056764147 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 07:35:15 compute-2 sudo[229452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjhwsniokzwlnciefuzojcrjcssccfqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401715.0797338-3963-138297134695450/AnsiballZ_container_config_hash.py'
Nov 29 07:35:15 compute-2 sudo[229452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:15 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:15 compute-2 python3.9[229456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:35:15 compute-2 sudo[229452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:15 compute-2 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2[77134]: 2025-11-29T07:35:15.806+0000 7f6a0f01c640 -1 mon.compute-2@1(peon) e3 get_health_metrics reporting 4 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 get_health_metrics reporting 4 slow ops, oldest is mdsbeacon(24133/cephfs.compute-2.fwjrvc up:active fs=cephfs seq=328 v7)
Nov 29 07:35:16 compute-2 sudo[229606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awktexzgwdeecyktlmdymlrzdupgmzfz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401716.263903-3992-160547633250564/AnsiballZ_edpm_container_manage.py'
Nov 29 07:35:16 compute-2 sudo[229606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:16.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:16 compute-2 python3[229608]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:35:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:18.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:18 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:18 compute-2 ceph-mon[77138]: pgmap v878: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:18 compute-2 ceph-mon[77138]: pgmap v879: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:18 compute-2 ceph-mon[77138]: pgmap v880: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:35:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:19 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc  MDS is no longer laggy
Nov 29 07:35:19 compute-2 sudo[229649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:19 compute-2 sudo[229649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:19 compute-2 sudo[229649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:19 compute-2 sudo[229680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:19 compute-2 sudo[229680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:19 compute-2 podman[229673]: 2025-11-29 07:35:19.930051102 +0000 UTC m=+0.100099253 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:35:19 compute-2 sudo[229680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:20 compute-2 sudo[229718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:20 compute-2 sudo[229718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:20 compute-2 sudo[229718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:20 compute-2 sudo[229743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:35:20 compute-2 sudo[229743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:20 compute-2 sudo[229743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:20.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v881: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v882: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v883: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v884: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v885: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v886: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v887: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v888: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v889: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v890: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v891: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v892: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v893: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v894: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v895: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v896: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v897: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v898: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v899: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v900: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v901: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v902: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v903: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v904: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:35:20 compute-2 ceph-mon[77138]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v905: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v906: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v907: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v908: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v909: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v910: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:35:20 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:35:20 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 26m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:35:20 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:35:20 compute-2 ceph-mon[77138]: pgmap v911: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:35:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:35:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:21.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:22.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:23.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:24.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:25.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:35:26 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:26.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:27.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:28.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:28 compute-2 ceph-mon[77138]: Health check failed: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops. (SLOW_OPS)
Nov 29 07:35:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:35:28 compute-2 ceph-mon[77138]: pgmap v912: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:29.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:29 compute-2 podman[229623]: 2025-11-29 07:35:29.604283696 +0000 UTC m=+12.647190105 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:35:29 compute-2 podman[229831]: 2025-11-29 07:35:29.805641246 +0000 UTC m=+0.066713778 container create 78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125)
Nov 29 07:35:29 compute-2 podman[229831]: 2025-11-29 07:35:29.76773921 +0000 UTC m=+0.028811762 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:35:29 compute-2 python3[229608]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 07:35:29 compute-2 sudo[229606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:30 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:35:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:31.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:31 compute-2 podman[229894]: 2025-11-29 07:35:31.701330554 +0000 UTC m=+0.102140557 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 07:35:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:33.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:34.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:35 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc MDS connection to Monitors appears to be laggy; 16.9725s since last acked beacon
Nov 29 07:35:35 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:35:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:38 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 1005..1751) lease_timeout -- calling new election
Nov 29 07:35:39 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:35:39 compute-2 ceph-mon[77138]: paxos.1).electionLogic(38) init, last seen epoch 38
Nov 29 07:35:39 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:40 compute-2 sudo[229925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:40 compute-2 sudo[229925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:40 compute-2 sudo[229925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:40 compute-2 sudo[229950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:40 compute-2 sudo[229950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:40 compute-2 sudo[229950]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:40 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:42 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:42.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:43 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:44.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:45 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:45 compute-2 podman[229978]: 2025-11-29 07:35:45.644294415 +0000 UTC m=+0.053000619 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:35:46 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:46.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:47.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:49.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:49 compute-2 ceph-mon[77138]: paxos.1).electionLogic(39) init, last seen epoch 39, mid-election, bumping
Nov 29 07:35:49 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:50 compute-2 ceph-mds[83773]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:50 compute-2 podman[229999]: 2025-11-29 07:35:50.653695159 +0000 UTC m=+0.062315151 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 07:35:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:50 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 handle_timecheck drop unexpected msg
Nov 29 07:35:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:51.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:51 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:35:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:53.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v916: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v917: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v918: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v919: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v920: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v921: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v922: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v923: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v924: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v925: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v926: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:35:53 compute-2 ceph-mon[77138]: pgmap v927: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:53 compute-2 ceph-mon[77138]: Health detail: HEALTH_WARN 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 07:35:53 compute-2 ceph-mon[77138]: [WRN] SLOW_OPS: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:35:53 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:35:53 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:35:53 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:35:53 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 26m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:35:53 compute-2 ceph-mon[77138]: Health detail: HEALTH_WARN 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 07:35:53 compute-2 ceph-mon[77138]: [WRN] SLOW_OPS: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 07:35:54 compute-2 sudo[230146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aklnhvcqjdqfoikbcqlgsvrajqnnjzlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401753.7504358-4017-109980971455622/AnsiballZ_stat.py'
Nov 29 07:35:54 compute-2 sudo[230146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:54 compute-2 python3.9[230148]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:54 compute-2 sudo[230146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:54 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:35:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:54 compute-2 ceph-mon[77138]: pgmap v928: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:54 compute-2 ceph-mon[77138]: Health check cleared: SLOW_OPS (was: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.)
Nov 29 07:35:54 compute-2 ceph-mon[77138]: Cluster is now healthy
Nov 29 07:35:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:55.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:55 compute-2 sudo[230301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzaapdmilqncmsagyjpvrhwjhkkalldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401755.0520256-4053-248661944825021/AnsiballZ_container_config_data.py'
Nov 29 07:35:55 compute-2 sudo[230301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:55 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc  MDS is no longer laggy
Nov 29 07:35:55 compute-2 python3.9[230303]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 07:35:55 compute-2 sudo[230301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:35:55 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 29 07:35:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:55.986184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:35:55 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 29 07:35:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401755986738, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2216, "num_deletes": 251, "total_data_size": 5368450, "memory_usage": 5458840, "flush_reason": "Manual Compaction"}
Nov 29 07:35:55 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756011204, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3461324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15779, "largest_seqno": 17989, "table_properties": {"data_size": 3452285, "index_size": 5469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20879, "raw_average_key_size": 21, "raw_value_size": 3433360, "raw_average_value_size": 3510, "num_data_blocks": 241, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401375, "oldest_key_time": 1764401375, "file_creation_time": 1764401755, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 25103 microseconds, and 10383 cpu microseconds.
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.011271) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3461324 bytes OK
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.011368) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.014134) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.014208) EVENT_LOG_v1 {"time_micros": 1764401756014196, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.014239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5358663, prev total WAL file size 5358663, number of live WAL files 2.
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.016022) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3380KB)], [30(7386KB)]
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756016141, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 11024886, "oldest_snapshot_seqno": -1}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4613 keys, 8884370 bytes, temperature: kUnknown
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756087053, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 8884370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8851904, "index_size": 19812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115189, "raw_average_key_size": 24, "raw_value_size": 8766805, "raw_average_value_size": 1900, "num_data_blocks": 828, "num_entries": 4613, "num_filter_entries": 4613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764401756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.087459) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8884370 bytes
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.089289) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.1 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5135, records dropped: 522 output_compression: NoCompression
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.089336) EVENT_LOG_v1 {"time_micros": 1764401756089301, "job": 16, "event": "compaction_finished", "compaction_time_micros": 71100, "compaction_time_cpu_micros": 34955, "output_level": 6, "num_output_files": 1, "total_output_size": 8884370, "num_input_records": 5135, "num_output_records": 4613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756090115, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756091691, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.015915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.091847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.091855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.091858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.091860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:35:56.091862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:35:56 compute-2 sudo[230453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clcuexyawmjyrhkjgrqkeexcfaywfqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401756.034104-4080-108138788340679/AnsiballZ_container_config_hash.py'
Nov 29 07:35:56 compute-2 sudo[230453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:56 compute-2 python3.9[230455]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:35:56 compute-2 sudo[230453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:35:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:35:57 compute-2 ceph-mon[77138]: pgmap v929: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:35:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:35:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:57.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:35:57 compute-2 sudo[230606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phkmzxizuubngfmjzroowlhhmrjotdia ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401757.0977342-4109-117664901985979/AnsiballZ_edpm_container_manage.py'
Nov 29 07:35:57 compute-2 sudo[230606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:57 compute-2 python3[230608]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:35:57 compute-2 podman[230641]: 2025-11-29 07:35:57.987703978 +0000 UTC m=+0.110071434 container create 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=edpm, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 07:35:57 compute-2 podman[230641]: 2025-11-29 07:35:57.907599682 +0000 UTC m=+0.029967168 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:35:57 compute-2 python3[230608]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 07:35:58 compute-2 sudo[230606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:58.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:59 compute-2 sshd-session[230704]: Invalid user solana from 45.148.10.240 port 43268
Nov 29 07:35:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:35:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:35:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:35:59 compute-2 sshd-session[230704]: Connection closed by invalid user solana 45.148.10.240 port 43268 [preauth]
Nov 29 07:35:59 compute-2 sudo[230833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhgupdonsommjohxfbspicoezpnawtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401758.9355402-4133-185857046125671/AnsiballZ_stat.py'
Nov 29 07:35:59 compute-2 sudo[230833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:59 compute-2 python3.9[230835]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:59 compute-2 sudo[230833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:59 compute-2 ceph-mon[77138]: pgmap v930: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Nov 29 07:36:00 compute-2 sudo[230988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdokqbhxxzhywmzrguckbkynxfhzutsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401759.9905674-4161-77053516786746/AnsiballZ_file.py'
Nov 29 07:36:00 compute-2 sudo[230988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:00 compute-2 python3.9[230990]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:00 compute-2 sudo[230988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:00 compute-2 ceph-mon[77138]: pgmap v931: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 07:36:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:01 compute-2 sudo[231162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxizqmaqyznvrdugnjuchrlyjvbpbon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401760.5859365-4161-166241775200774/AnsiballZ_copy.py'
Nov 29 07:36:01 compute-2 sudo[231162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:01 compute-2 sudo[231119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:01 compute-2 sudo[231119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:01 compute-2 sudo[231119]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:01 compute-2 sudo[231168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:01 compute-2 sudo[231168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:01 compute-2 sudo[231168]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:01 compute-2 python3.9[231165]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401760.5859365-4161-166241775200774/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:01 compute-2 sudo[231162]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:01 compute-2 sudo[231266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqvtbuihwmhncyhnoxnfclraitmkpina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401760.5859365-4161-166241775200774/AnsiballZ_systemd.py'
Nov 29 07:36:01 compute-2 sudo[231266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:01 compute-2 python3.9[231268]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:36:01 compute-2 systemd[1]: Reloading.
Nov 29 07:36:02 compute-2 ceph-mon[77138]: pgmap v932: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 29 07:36:02 compute-2 systemd-rc-local-generator[231311]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:02 compute-2 systemd-sysv-generator[231318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:02 compute-2 podman[231270]: 2025-11-29 07:36:02.058352311 +0000 UTC m=+0.137058809 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:36:02 compute-2 sudo[231266]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:02 compute-2 sudo[231405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbnxecwbhxbitgymjdplleajereuybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401760.5859365-4161-166241775200774/AnsiballZ_systemd.py'
Nov 29 07:36:02 compute-2 sudo[231405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:02.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:02 compute-2 python3.9[231407]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:02 compute-2 systemd[1]: Reloading.
Nov 29 07:36:03 compute-2 systemd-rc-local-generator[231439]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:03 compute-2 systemd-sysv-generator[231442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:36:03.281 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:36:03.283 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:36:03.283 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:03 compute-2 systemd[1]: Starting nova_compute container...
Nov 29 07:36:03 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:36:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:03 compute-2 podman[231449]: 2025-11-29 07:36:03.505464055 +0000 UTC m=+0.112015486 container init 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:36:03 compute-2 podman[231449]: 2025-11-29 07:36:03.514024152 +0000 UTC m=+0.120575573 container start 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:36:03 compute-2 podman[231449]: nova_compute
Nov 29 07:36:03 compute-2 nova_compute[231463]: + sudo -E kolla_set_configs
Nov 29 07:36:03 compute-2 systemd[1]: Started nova_compute container.
Nov 29 07:36:03 compute-2 sudo[231405]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Validating config file
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying service configuration files
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Writing out command to execute
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:03 compute-2 nova_compute[231463]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:36:03 compute-2 nova_compute[231463]: ++ cat /run_command
Nov 29 07:36:03 compute-2 nova_compute[231463]: + CMD=nova-compute
Nov 29 07:36:03 compute-2 nova_compute[231463]: + ARGS=
Nov 29 07:36:03 compute-2 nova_compute[231463]: + sudo kolla_copy_cacerts
Nov 29 07:36:03 compute-2 nova_compute[231463]: + [[ ! -n '' ]]
Nov 29 07:36:03 compute-2 nova_compute[231463]: + . kolla_extend_start
Nov 29 07:36:03 compute-2 nova_compute[231463]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:36:03 compute-2 nova_compute[231463]: Running command: 'nova-compute'
Nov 29 07:36:03 compute-2 nova_compute[231463]: + umask 0022
Nov 29 07:36:03 compute-2 nova_compute[231463]: + exec nova-compute
Nov 29 07:36:04 compute-2 ceph-mon[77138]: pgmap v933: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Nov 29 07:36:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:05.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:05 compute-2 python3.9[231626]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:36:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:05 compute-2 ceph-mon[77138]: pgmap v934: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 29 07:36:05 compute-2 nova_compute[231463]: 2025-11-29 07:36:05.985 231467 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:05 compute-2 nova_compute[231463]: 2025-11-29 07:36:05.985 231467 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:05 compute-2 nova_compute[231463]: 2025-11-29 07:36:05.986 231467 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:05 compute-2 nova_compute[231463]: 2025-11-29 07:36:05.986 231467 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 07:36:06 compute-2 nova_compute[231463]: 2025-11-29 07:36:06.192 231467 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:36:06 compute-2 nova_compute[231463]: 2025-11-29 07:36:06.213 231467 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:36:06 compute-2 nova_compute[231463]: 2025-11-29 07:36:06.214 231467 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 07:36:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:06.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:06 compute-2 python3.9[231780]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.040 231467 INFO nova.virt.driver [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.045298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767045427, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 255, "total_data_size": 309981, "memory_usage": 318400, "flush_reason": "Manual Compaction"}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767050702, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 204831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17995, "largest_seqno": 18343, "table_properties": {"data_size": 202670, "index_size": 325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4808, "raw_average_key_size": 16, "raw_value_size": 198473, "raw_average_value_size": 677, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401756, "oldest_key_time": 1764401756, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5471 microseconds, and 2893 cpu microseconds.
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.050796) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 204831 bytes OK
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.050834) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.055257) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.055286) EVENT_LOG_v1 {"time_micros": 1764401767055276, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.055354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 307560, prev total WAL file size 307560, number of live WAL files 2.
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.056059) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(200KB)], [33(8676KB)]
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767056240, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9089201, "oldest_snapshot_seqno": -1}
Nov 29 07:36:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4388 keys, 8745069 bytes, temperature: kUnknown
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767139216, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 8745069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8714189, "index_size": 18802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 111798, "raw_average_key_size": 25, "raw_value_size": 8632988, "raw_average_value_size": 1967, "num_data_blocks": 772, "num_entries": 4388, "num_filter_entries": 4388, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.139567) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8745069 bytes
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.142260) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.4 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.5 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(87.1) write-amplify(42.7) OK, records in: 4906, records dropped: 518 output_compression: NoCompression
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.142305) EVENT_LOG_v1 {"time_micros": 1764401767142283, "job": 18, "event": "compaction_finished", "compaction_time_micros": 83068, "compaction_time_cpu_micros": 21970, "output_level": 6, "num_output_files": 1, "total_output_size": 8745069, "num_input_records": 4906, "num_output_records": 4388, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767142604, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767146865, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.055935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.146980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.146990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.146994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.146998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:36:07.147003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.332 231467 INFO nova.compute.provider_config [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.509 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.510 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.510 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.511 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.511 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.512 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.512 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.512 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.513 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.513 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.514 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.514 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.514 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.514 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.515 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.515 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.515 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.516 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.516 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.517 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.517 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.518 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.518 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.519 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.519 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.520 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.520 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.520 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.521 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.521 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.521 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.522 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.522 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.522 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.523 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.523 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.523 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.524 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.524 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.524 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.525 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.525 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.526 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.526 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.526 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.527 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.527 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.527 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.528 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.528 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.528 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.529 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.529 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.529 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.530 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.530 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.531 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.531 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.531 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.532 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.532 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.532 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.533 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.533 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.534 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.534 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.534 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.534 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.535 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.535 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.535 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.536 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.536 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.537 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.537 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.537 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.538 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.538 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.539 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.539 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.540 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.540 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.540 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.541 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.541 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.542 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.542 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.542 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.543 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.543 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.543 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.544 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.544 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.544 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.545 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.545 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.545 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.546 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.546 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.546 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.547 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.547 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.548 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.548 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.548 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.549 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.549 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.550 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.550 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.550 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.551 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.551 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.551 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.552 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.552 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.552 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.553 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.553 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.553 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.554 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.554 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.554 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.555 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.555 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.556 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.556 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.557 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.557 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.558 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.558 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.559 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.559 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.559 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.559 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.560 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.560 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.560 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.560 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.560 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.561 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.561 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.561 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.561 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.562 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.562 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.562 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.562 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.562 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.563 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.563 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.563 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.563 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.563 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.564 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.564 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.564 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.564 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.564 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.565 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.565 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.565 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.565 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.566 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.566 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.566 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.566 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.567 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.567 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.567 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.567 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.567 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.568 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.568 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.568 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.568 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.568 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.569 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.569 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.569 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.569 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.570 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.570 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.570 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.570 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.570 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.571 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.571 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.571 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.571 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.571 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.572 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.572 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.572 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.572 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.573 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.573 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.573 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.573 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.573 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.574 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.574 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.574 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.574 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.574 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.575 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.575 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.575 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.575 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.575 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.576 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.576 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.576 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.576 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.576 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.577 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.577 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.577 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.577 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.578 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.578 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.578 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.578 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.578 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.579 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.579 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.579 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.579 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.580 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.580 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.580 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.580 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.580 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.581 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.581 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.581 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.581 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.582 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.582 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.582 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.583 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.583 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.583 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.583 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.584 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.584 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.584 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.585 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.585 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.585 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.585 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.585 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.586 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.586 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.586 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.586 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.587 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.587 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.587 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.587 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.587 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.588 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.588 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.588 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.588 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.588 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.589 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.589 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.589 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.589 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.590 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.590 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.590 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.590 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.590 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.591 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.591 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.591 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.591 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.591 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.592 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.592 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.592 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.592 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.592 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.593 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.593 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.593 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.593 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.594 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.594 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.594 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.594 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.595 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.596 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.597 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.598 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.599 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.600 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.601 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.602 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.603 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.604 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.605 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.605 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.605 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.605 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.605 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.606 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.607 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.608 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.609 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.610 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.611 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.612 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.613 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.614 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.615 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.616 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.617 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.618 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.619 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.620 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.621 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.622 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.623 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.624 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 WARNING oslo_config.cfg [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:36:07 compute-2 nova_compute[231463]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:36:07 compute-2 nova_compute[231463]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:36:07 compute-2 nova_compute[231463]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:36:07 compute-2 nova_compute[231463]: ).  Its value may be silently ignored in the future.
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.625 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.626 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.627 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.628 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.629 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.629 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.629 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.629 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.629 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.630 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.631 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.632 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.633 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.634 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.635 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.636 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.637 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.638 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.639 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.640 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.641 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.642 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.643 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.644 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.645 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.646 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.647 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.648 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.649 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.649 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.649 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.649 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.649 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.650 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.650 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.650 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.650 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.650 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.651 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.651 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.651 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.651 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.651 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.652 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.653 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.654 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.655 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.655 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.655 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.655 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.655 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.656 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.657 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.658 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.659 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.660 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.661 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.662 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.663 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.664 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.665 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.666 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.667 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.668 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.669 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.670 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.671 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.672 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.673 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.674 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.675 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.676 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.677 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.678 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.679 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.680 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.681 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.682 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.683 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.684 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.684 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.684 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.684 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.684 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.685 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.685 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.685 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.685 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.685 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.686 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.687 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.688 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.689 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.690 231467 DEBUG oslo_service.service [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:36:07 compute-2 nova_compute[231463]: 2025-11-29 07:36:07.692 231467 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:36:07 compute-2 python3.9[231931]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.209 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.210 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.210 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.211 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:36:08 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:36:08 compute-2 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.312 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fac6453d670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.317 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fac6453d670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.318 231467 INFO nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Connection event '1' reason 'None'
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.427 231467 WARNING nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Nov 29 07:36:08 compute-2 nova_compute[231463]: 2025-11-29 07:36:08.427 231467 DEBUG nova.virt.libvirt.volume.mount [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:36:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:08.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:09 compute-2 sudo[232141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkuuaenqadremandycrnxnrtzehbczr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401768.4348233-4340-166144739803857/AnsiballZ_podman_container.py'
Nov 29 07:36:09 compute-2 sudo[232141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.228 231467 INFO nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]: 
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <host>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <uuid>841b8909-9838-4df3-bf7c-bb9b0c2a4d0c</uuid>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <arch>x86_64</arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model>EPYC-Rome-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <vendor>AMD</vendor>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <microcode version='16777317'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='x2apic'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='tsc-deadline'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='osxsave'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='hypervisor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='tsc_adjust'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='spec-ctrl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='stibp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='arch-capabilities'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='cmp_legacy'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='topoext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='virt-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='lbrv'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='tsc-scale'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='vmcb-clean'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='pause-filter'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='pfthreshold'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='svme-addr-chk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='rdctl-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='mds-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature name='pschange-mc-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <pages unit='KiB' size='4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <pages unit='KiB' size='2048'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <power_management>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <suspend_mem/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </power_management>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <iommu support='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <migration_features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <live/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <uri_transports>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <uri_transport>tcp</uri_transport>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <uri_transport>rdma</uri_transport>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </uri_transports>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </migration_features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <topology>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <cells num='1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <cell id='0'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <memory unit='KiB'>7864324</memory>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <distances>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <sibling id='0' value='10'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           </distances>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           <cpus num='8'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:           </cpus>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         </cell>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </cells>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </topology>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <cache>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </cache>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <secmodel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model>selinux</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <doi>0</doi>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </secmodel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <secmodel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model>dac</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <doi>0</doi>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </secmodel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </host>
Nov 29 07:36:09 compute-2 nova_compute[231463]: 
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <guest>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <os_type>hvm</os_type>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <arch name='i686'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <wordsize>32</wordsize>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <domain type='qemu'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <domain type='kvm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <pae/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <nonpae/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <acpi default='on' toggle='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <apic default='on' toggle='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <cpuselection/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <deviceboot/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <externalSnapshot/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </guest>
Nov 29 07:36:09 compute-2 nova_compute[231463]: 
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <guest>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <os_type>hvm</os_type>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <arch name='x86_64'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <wordsize>64</wordsize>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <domain type='qemu'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <domain type='kvm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <acpi default='on' toggle='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <apic default='on' toggle='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <cpuselection/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <deviceboot/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <externalSnapshot/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </guest>
Nov 29 07:36:09 compute-2 nova_compute[231463]: 
Nov 29 07:36:09 compute-2 nova_compute[231463]: </capabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]: 
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.236 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:36:09 compute-2 ceph-mon[77138]: pgmap v935: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.269 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:36:09 compute-2 nova_compute[231463]: <domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <domain>kvm</domain>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <arch>i686</arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <vcpu max='240'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <iothreads supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <os supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='firmware'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <loader supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>rom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pflash</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='readonly'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>yes</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='secure'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </loader>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </os>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='maximumMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <vendor>AMD</vendor>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='succor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='custom' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-128'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-256'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-512'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <memoryBacking supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='sourceType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>anonymous</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>memfd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </memoryBacking>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <disk supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='diskDevice'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>disk</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cdrom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>floppy</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>lun</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ide</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>fdc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>sata</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </disk>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <graphics supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vnc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egl-headless</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </graphics>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <video supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='modelType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vga</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cirrus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>none</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>bochs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ramfb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </video>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hostdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='mode'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>subsystem</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='startupPolicy'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>mandatory</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>requisite</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>optional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='subsysType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pci</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='capsType'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='pciBackend'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hostdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <rng supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>random</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </rng>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <filesystem supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='driverType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>path</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>handle</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtiofs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </filesystem>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <tpm supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-tis</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-crb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emulator</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>external</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendVersion'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>2.0</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </tpm>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <redirdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </redirdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <channel supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </channel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <crypto supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </crypto>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <interface supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>passt</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </interface>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <panic supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>isa</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>hyperv</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </panic>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <console supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>null</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dev</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pipe</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stdio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>udp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tcp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu-vdagent</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </console>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <gic supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <genid supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backup supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <async-teardown supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <ps2 supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sev supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sgx supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hyperv supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='features'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>relaxed</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vapic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>spinlocks</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vpindex</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>runtime</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>synic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stimer</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reset</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vendor_id</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>frequencies</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reenlightenment</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tlbflush</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ipi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>avic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emsr_bitmap</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>xmm_input</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hyperv>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <launchSecurity supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='sectype'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tdx</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </launchSecurity>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]: </domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.278 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:36:09 compute-2 nova_compute[231463]: <domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <domain>kvm</domain>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <arch>i686</arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <vcpu max='4096'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <iothreads supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <os supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='firmware'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <loader supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>rom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pflash</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='readonly'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>yes</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='secure'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </loader>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </os>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='maximumMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <vendor>AMD</vendor>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='succor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='custom' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:09 compute-2 python3.9[232144]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-128'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-256'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-512'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <memoryBacking supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='sourceType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>anonymous</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>memfd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </memoryBacking>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <disk supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='diskDevice'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>disk</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cdrom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>floppy</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>lun</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>fdc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>sata</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </disk>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <graphics supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vnc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egl-headless</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </graphics>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <video supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='modelType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vga</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cirrus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>none</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>bochs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ramfb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </video>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hostdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='mode'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>subsystem</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='startupPolicy'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>mandatory</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>requisite</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>optional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='subsysType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pci</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='capsType'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='pciBackend'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hostdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <rng supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>random</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </rng>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <filesystem supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='driverType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>path</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>handle</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtiofs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </filesystem>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <tpm supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-tis</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-crb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emulator</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>external</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendVersion'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>2.0</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </tpm>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <redirdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </redirdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <channel supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </channel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <crypto supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </crypto>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <interface supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>passt</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </interface>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <panic supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>isa</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>hyperv</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </panic>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <console supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>null</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dev</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pipe</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stdio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>udp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tcp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu-vdagent</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </console>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <gic supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <genid supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backup supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <async-teardown supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <ps2 supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sev supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sgx supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hyperv supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='features'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>relaxed</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vapic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>spinlocks</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vpindex</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>runtime</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>synic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stimer</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reset</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vendor_id</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>frequencies</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reenlightenment</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tlbflush</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ipi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>avic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emsr_bitmap</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>xmm_input</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hyperv>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <launchSecurity supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='sectype'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tdx</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </launchSecurity>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]: </domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.323 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.329 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:36:09 compute-2 nova_compute[231463]: <domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <domain>kvm</domain>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <arch>x86_64</arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <vcpu max='240'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <iothreads supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <os supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='firmware'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <loader supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>rom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pflash</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='readonly'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>yes</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='secure'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </loader>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </os>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='maximumMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <vendor>AMD</vendor>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='succor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='custom' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 sudo[232141]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-128'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-256'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-512'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <memoryBacking supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='sourceType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>anonymous</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>memfd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </memoryBacking>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <disk supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='diskDevice'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>disk</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cdrom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>floppy</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>lun</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ide</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>fdc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>sata</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </disk>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <graphics supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vnc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egl-headless</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </graphics>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <video supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='modelType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vga</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cirrus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>none</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>bochs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ramfb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </video>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hostdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='mode'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>subsystem</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='startupPolicy'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>mandatory</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>requisite</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>optional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='subsysType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pci</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='capsType'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='pciBackend'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hostdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <rng supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>random</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </rng>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <filesystem supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='driverType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>path</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>handle</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtiofs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </filesystem>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <tpm supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-tis</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-crb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emulator</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>external</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendVersion'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>2.0</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </tpm>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <redirdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </redirdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <channel supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </channel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <crypto supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </crypto>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <interface supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>passt</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </interface>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <panic supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>isa</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>hyperv</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </panic>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <console supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>null</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dev</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pipe</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stdio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>udp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tcp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu-vdagent</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </console>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <gic supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <genid supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backup supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <async-teardown supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <ps2 supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sev supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sgx supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hyperv supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='features'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>relaxed</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vapic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>spinlocks</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vpindex</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>runtime</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>synic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stimer</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reset</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vendor_id</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>frequencies</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reenlightenment</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tlbflush</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ipi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>avic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emsr_bitmap</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>xmm_input</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hyperv>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <launchSecurity supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='sectype'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tdx</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </launchSecurity>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]: </domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.431 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:36:09 compute-2 nova_compute[231463]: <domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <domain>kvm</domain>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <arch>x86_64</arch>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <vcpu max='4096'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <iothreads supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <os supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='firmware'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>efi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <loader supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>rom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pflash</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='readonly'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>yes</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='secure'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>yes</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>no</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </loader>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </os>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='maximumMigratable'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>on</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>off</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <vendor>AMD</vendor>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='succor'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <mode name='custom' supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Denverton-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='auto-ibrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amd-psfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='stibp-always-on'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='EPYC-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-128'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-256'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx10-512'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='prefetchiti'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Haswell-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512er'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512pf'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fma4'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tbm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xop'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='amx-tile'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-bf16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-fp16'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bitalg'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrc'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fzrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='la57'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='taa-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xfd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ifma'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cmpccxadd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fbsdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='fsrs'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ibrs-all'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mcdt-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pbrsb-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='psdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='serialize'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vaes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='hle'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='rtm'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512bw'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512cd'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512dq'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512f'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='avx512vl'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='invpcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pcid'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='pku'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='mpx'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='core-capability'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='split-lock-detect'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='cldemote'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='erms'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='gfni'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdir64b'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='movdiri'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='xsaves'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='athlon-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='core2duo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='coreduo-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='n270-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='ss'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <blockers model='phenom-v1'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnow'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <feature name='3dnowext'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </blockers>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </mode>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <memoryBacking supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <enum name='sourceType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>anonymous</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <value>memfd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </memoryBacking>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <disk supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='diskDevice'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>disk</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cdrom</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>floppy</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>lun</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>fdc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>sata</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </disk>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <graphics supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vnc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egl-headless</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </graphics>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <video supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='modelType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vga</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>cirrus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>none</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>bochs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ramfb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </video>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hostdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='mode'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>subsystem</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='startupPolicy'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>mandatory</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>requisite</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>optional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='subsysType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pci</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>scsi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='capsType'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='pciBackend'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hostdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <rng supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtio-non-transitional</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>random</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>egd</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </rng>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <filesystem supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='driverType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>path</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>handle</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>virtiofs</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </filesystem>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <tpm supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-tis</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tpm-crb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emulator</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>external</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendVersion'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>2.0</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </tpm>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <redirdev supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='bus'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>usb</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </redirdev>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <channel supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </channel>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <crypto supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendModel'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>builtin</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </crypto>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <interface supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='backendType'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>default</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>passt</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </interface>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <panic supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='model'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>isa</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>hyperv</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </panic>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <console supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='type'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>null</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vc</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pty</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dev</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>file</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>pipe</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stdio</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>udp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tcp</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>unix</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>qemu-vdagent</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>dbus</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </console>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </devices>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <features>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <gic supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <genid supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <backup supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <async-teardown supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <ps2 supported='yes'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sev supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <sgx supported='no'/>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <hyperv supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='features'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>relaxed</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vapic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>spinlocks</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vpindex</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>runtime</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>synic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>stimer</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reset</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>vendor_id</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>frequencies</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>reenlightenment</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tlbflush</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>ipi</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>avic</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>emsr_bitmap</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>xmm_input</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </defaults>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </hyperv>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     <launchSecurity supported='yes'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       <enum name='sectype'>
Nov 29 07:36:09 compute-2 nova_compute[231463]:         <value>tdx</value>
Nov 29 07:36:09 compute-2 nova_compute[231463]:       </enum>
Nov 29 07:36:09 compute-2 nova_compute[231463]:     </launchSecurity>
Nov 29 07:36:09 compute-2 nova_compute[231463]:   </features>
Nov 29 07:36:09 compute-2 nova_compute[231463]: </domainCapabilities>
Nov 29 07:36:09 compute-2 nova_compute[231463]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.535 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.536 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.536 231467 DEBUG nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.536 231467 INFO nova.virt.libvirt.host [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Secure Boot support detected
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.539 231467 INFO nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.539 231467 INFO nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.552 231467 DEBUG nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 07:36:09 compute-2 nova_compute[231463]:   <model>Nehalem</model>
Nov 29 07:36:09 compute-2 nova_compute[231463]: </cpu>
Nov 29 07:36:09 compute-2 nova_compute[231463]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.554 231467 DEBUG nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.603 231467 INFO nova.virt.node [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Determined node identity 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from /var/lib/nova/compute_id
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.636 231467 WARNING nova.compute.manager [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Compute nodes ['77f31ad1-818f-4610-8dd1-3fbcd25133f2'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.694 231467 INFO nova.compute.manager [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.739 231467 WARNING nova.compute.manager [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.739 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.739 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.740 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.740 231467 DEBUG nova.compute.resource_tracker [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:36:09 compute-2 nova_compute[231463]: 2025-11-29 07:36:09.740 231467 DEBUG oslo_concurrency.processutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:36:10 compute-2 sudo[232338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jegcctbmcwvvllfutkquptbvmnzotfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401769.8344972-4365-201718840343339/AnsiballZ_systemd.py'
Nov 29 07:36:10 compute-2 sudo[232338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:36:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3966675715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.208 231467 DEBUG oslo_concurrency.processutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:36:10 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:36:10 compute-2 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:36:10 compute-2 ceph-mon[77138]: pgmap v936: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Nov 29 07:36:10 compute-2 python3.9[232340]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.568 231467 WARNING nova.virt.libvirt.driver [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.570 231467 DEBUG nova.compute.resource_tracker [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5278MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.570 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.570 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:10 compute-2 systemd[1]: Stopping nova_compute container...
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.593 231467 WARNING nova.compute.resource_tracker [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] No compute node record for compute-2.ctlplane.example.com:77f31ad1-818f-4610-8dd1-3fbcd25133f2: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 77f31ad1-818f-4610-8dd1-3fbcd25133f2 could not be found.
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.631 231467 INFO nova.compute.resource_tracker [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 77f31ad1-818f-4610-8dd1-3fbcd25133f2
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.639 231467 DEBUG oslo_concurrency.lockutils [None req-47f919cf-14df-423e-b1c1-e610bc970c7b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.640 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.640 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:36:10 compute-2 nova_compute[231463]: 2025-11-29 07:36:10.640 231467 DEBUG oslo_concurrency.lockutils [None req-94af0bf7-231d-49f4-a3be-9ed36500ddd7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:36:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:11 compute-2 virtqemud[231977]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 07:36:11 compute-2 virtqemud[231977]: hostname: compute-2
Nov 29 07:36:11 compute-2 virtqemud[231977]: End of file while reading data: Input/output error
Nov 29 07:36:11 compute-2 systemd[1]: libpod-31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d.scope: Deactivated successfully.
Nov 29 07:36:11 compute-2 systemd[1]: libpod-31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d.scope: Consumed 4.239s CPU time.
Nov 29 07:36:11 compute-2 podman[232369]: 2025-11-29 07:36:11.224915973 +0000 UTC m=+0.631211500 container died 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:36:11 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d-userdata-shm.mount: Deactivated successfully.
Nov 29 07:36:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130-merged.mount: Deactivated successfully.
Nov 29 07:36:11 compute-2 podman[232369]: 2025-11-29 07:36:11.313969349 +0000 UTC m=+0.720264876 container cleanup 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:36:11 compute-2 podman[232369]: nova_compute
Nov 29 07:36:11 compute-2 podman[232400]: nova_compute
Nov 29 07:36:11 compute-2 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 07:36:11 compute-2 systemd[1]: Stopped nova_compute container.
Nov 29 07:36:11 compute-2 systemd[1]: Starting nova_compute container...
Nov 29 07:36:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3966675715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4198467867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2138765783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:11 compute-2 ceph-mon[77138]: pgmap v937: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 07:36:11 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:36:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1faf1a627a18b503fa68a6684c7c2b80c4bb7c370449f3881531f6122ec3d130/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:11 compute-2 podman[232412]: 2025-11-29 07:36:11.517127135 +0000 UTC m=+0.098593606 container init 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:36:11 compute-2 podman[232412]: 2025-11-29 07:36:11.523263806 +0000 UTC m=+0.104730247 container start 31fd782510e0f3b6c99e82b844f62b963575a278a721135c1077fadb1842982d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:36:11 compute-2 podman[232412]: nova_compute
Nov 29 07:36:11 compute-2 nova_compute[232428]: + sudo -E kolla_set_configs
Nov 29 07:36:11 compute-2 systemd[1]: Started nova_compute container.
Nov 29 07:36:11 compute-2 sudo[232338]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Validating config file
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying service configuration files
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Writing out command to execute
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:11 compute-2 nova_compute[232428]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:36:11 compute-2 nova_compute[232428]: ++ cat /run_command
Nov 29 07:36:11 compute-2 nova_compute[232428]: + CMD=nova-compute
Nov 29 07:36:11 compute-2 nova_compute[232428]: + ARGS=
Nov 29 07:36:11 compute-2 nova_compute[232428]: + sudo kolla_copy_cacerts
Nov 29 07:36:11 compute-2 nova_compute[232428]: + [[ ! -n '' ]]
Nov 29 07:36:11 compute-2 nova_compute[232428]: + . kolla_extend_start
Nov 29 07:36:11 compute-2 nova_compute[232428]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:36:11 compute-2 nova_compute[232428]: Running command: 'nova-compute'
Nov 29 07:36:11 compute-2 nova_compute[232428]: + umask 0022
Nov 29 07:36:11 compute-2 nova_compute[232428]: + exec nova-compute
Nov 29 07:36:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:13 compute-2 sudo[232591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgebiwrnadxbsnfvjofrdlziwgbdhraf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401772.7185485-4392-148150335189452/AnsiballZ_podman_container.py'
Nov 29 07:36:13 compute-2 sudo[232591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:13 compute-2 python3.9[232593]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:36:13 compute-2 systemd[1]: Started libpod-conmon-78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a.scope.
Nov 29 07:36:13 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:36:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9055cb5fe7d03a07fa9a1b37bde120241297951fded116a0bfa0f867a3d273/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9055cb5fe7d03a07fa9a1b37bde120241297951fded116a0bfa0f867a3d273/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9055cb5fe7d03a07fa9a1b37bde120241297951fded116a0bfa0f867a3d273/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:13 compute-2 podman[232619]: 2025-11-29 07:36:13.649057094 +0000 UTC m=+0.175600875 container init 78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:36:13 compute-2 podman[232619]: 2025-11-29 07:36:13.66526516 +0000 UTC m=+0.191808871 container start 78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:36:13 compute-2 python3.9[232593]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 07:36:13 compute-2 nova_compute_init[232641]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 07:36:13 compute-2 systemd[1]: libpod-78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a.scope: Deactivated successfully.
Nov 29 07:36:13 compute-2 podman[232642]: 2025-11-29 07:36:13.751607692 +0000 UTC m=+0.040374324 container died 78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:36:13 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a-userdata-shm.mount: Deactivated successfully.
Nov 29 07:36:13 compute-2 systemd[1]: var-lib-containers-storage-overlay-5f9055cb5fe7d03a07fa9a1b37bde120241297951fded116a0bfa0f867a3d273-merged.mount: Deactivated successfully.
Nov 29 07:36:13 compute-2 podman[232655]: 2025-11-29 07:36:13.827667241 +0000 UTC m=+0.064293362 container cleanup 78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 29 07:36:13 compute-2 systemd[1]: libpod-conmon-78a40f46b080a1c3fb53d3dc713ce32908d7d556f2e7a84cd8be7edfb5758c1a.scope: Deactivated successfully.
Nov 29 07:36:13 compute-2 sudo[232591]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:13 compute-2 nova_compute[232428]: 2025-11-29 07:36:13.958 232432 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:13 compute-2 nova_compute[232428]: 2025-11-29 07:36:13.959 232432 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:13 compute-2 nova_compute[232428]: 2025-11-29 07:36:13.959 232432 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:36:13 compute-2 nova_compute[232428]: 2025-11-29 07:36:13.959 232432 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 07:36:13 compute-2 ceph-mon[77138]: pgmap v938: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.149 232432 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.183 232432 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.184 232432 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 07:36:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:14.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.749 232432 INFO nova.virt.driver [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.884 232432 INFO nova.compute.provider_config [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.902 232432 DEBUG oslo_concurrency.lockutils [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.902 232432 DEBUG oslo_concurrency.lockutils [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.903 232432 DEBUG oslo_concurrency.lockutils [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.903 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.904 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.904 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.904 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.905 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.905 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.905 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.905 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.906 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.906 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.906 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.906 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.907 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.907 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.908 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.908 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.908 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.908 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.909 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.909 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.909 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.909 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.910 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.910 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.910 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.911 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.911 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.911 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.911 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.912 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.912 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.912 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.913 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.913 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.913 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.913 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.914 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.914 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.914 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.914 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.915 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.915 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.915 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.916 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.916 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.916 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.916 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.917 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.917 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.917 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.917 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.918 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.918 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.918 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.919 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.919 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.919 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.919 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.919 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.920 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.920 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.920 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.920 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.921 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.921 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.921 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.921 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.922 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.923 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.923 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.923 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.924 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.925 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.926 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.926 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.926 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.926 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.926 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.927 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.927 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.927 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.927 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.927 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.928 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.929 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.930 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.930 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.930 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.930 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.930 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.931 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.932 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.933 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.934 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.935 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.936 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.937 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.938 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.939 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.940 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.941 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.942 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.943 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.944 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.945 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.946 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.947 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.948 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.949 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.950 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.950 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.950 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.950 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.950 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.951 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.952 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.953 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.954 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 sshd-session[202059]: Connection closed by 192.168.122.30 port 37640
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.954 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.954 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.954 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.954 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.955 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.956 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.957 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.958 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.959 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.960 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 sshd-session[202056]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.961 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.962 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.963 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.964 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.964 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.964 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.964 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.964 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.965 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.966 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 systemd[1]: session-49.scope: Consumed 2min 52.391s CPU time.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.967 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.968 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.969 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.969 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.969 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.969 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.969 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.970 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.971 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.972 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.973 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 systemd-logind[787]: Removed session 49.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.974 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.974 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.974 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.974 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.974 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.975 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.976 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.977 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.978 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.979 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.980 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.981 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.982 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.982 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.982 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.982 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.982 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.983 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.984 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.985 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.986 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.987 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.988 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.988 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.988 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.988 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.988 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.989 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.990 232432 WARNING oslo_config.cfg [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:36:14 compute-2 nova_compute[232428]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:36:14 compute-2 nova_compute[232428]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:36:14 compute-2 nova_compute[232428]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:36:14 compute-2 nova_compute[232428]: ).  Its value may be silently ignored in the future.
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.990 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.990 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.990 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.990 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.991 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.992 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.993 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.994 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.995 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.995 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.995 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.995 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.995 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.996 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.997 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.998 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:14 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:14.999 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.000 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.001 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.002 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.003 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.004 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.005 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.006 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.007 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.008 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.009 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.010 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.011 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.012 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.013 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.014 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.015 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.016 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.017 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.018 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.019 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.020 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.021 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.022 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.023 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.024 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.025 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.026 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.027 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.028 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.029 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.030 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.031 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.032 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.033 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.033 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.033 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.033 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.033 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.034 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.035 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.036 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.037 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.038 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.039 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.040 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.040 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.040 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.040 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.040 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.041 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.041 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.041 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.041 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.042 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.043 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.044 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.045 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.046 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.047 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.048 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.049 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.050 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.051 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.052 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.053 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.054 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.054 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.054 232432 DEBUG oslo_service.service [None req-d255da20-eff5-49d0-a05f-db8eda1e639d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.055 232432 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.079 232432 INFO nova.virt.node [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Determined node identity 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from /var/lib/nova/compute_id
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.080 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.081 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.081 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.082 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.099 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5a42d7e5b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.102 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5a42d7e5b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.103 232432 INFO nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Connection event '1' reason 'None'
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.113 232432 INFO nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]: 
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <host>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <uuid>841b8909-9838-4df3-bf7c-bb9b0c2a4d0c</uuid>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <arch>x86_64</arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model>EPYC-Rome-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <vendor>AMD</vendor>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <microcode version='16777317'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='x2apic'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='tsc-deadline'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='osxsave'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='hypervisor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='tsc_adjust'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='spec-ctrl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='stibp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='arch-capabilities'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='cmp_legacy'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='topoext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='virt-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='lbrv'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='tsc-scale'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='vmcb-clean'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='pause-filter'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='pfthreshold'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='svme-addr-chk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='rdctl-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='mds-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature name='pschange-mc-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <pages unit='KiB' size='4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <pages unit='KiB' size='2048'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <power_management>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <suspend_mem/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </power_management>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <iommu support='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <migration_features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <live/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <uri_transports>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <uri_transport>tcp</uri_transport>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <uri_transport>rdma</uri_transport>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </uri_transports>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </migration_features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <topology>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <cells num='1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <cell id='0'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <memory unit='KiB'>7864324</memory>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <distances>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <sibling id='0' value='10'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           </distances>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           <cpus num='8'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:           </cpus>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         </cell>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </cells>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </topology>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <cache>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </cache>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <secmodel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model>selinux</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <doi>0</doi>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </secmodel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <secmodel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model>dac</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <doi>0</doi>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </secmodel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </host>
Nov 29 07:36:15 compute-2 nova_compute[232428]: 
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <guest>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <os_type>hvm</os_type>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <arch name='i686'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <wordsize>32</wordsize>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <domain type='qemu'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <domain type='kvm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <pae/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <nonpae/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <acpi default='on' toggle='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <apic default='on' toggle='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <cpuselection/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <deviceboot/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <externalSnapshot/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </guest>
Nov 29 07:36:15 compute-2 nova_compute[232428]: 
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <guest>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <os_type>hvm</os_type>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <arch name='x86_64'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <wordsize>64</wordsize>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <domain type='qemu'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <domain type='kvm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <acpi default='on' toggle='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <apic default='on' toggle='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <cpuselection/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <deviceboot/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <externalSnapshot/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </guest>
Nov 29 07:36:15 compute-2 nova_compute[232428]: 
Nov 29 07:36:15 compute-2 nova_compute[232428]: </capabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]: 
Nov 29 07:36:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.120 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:36:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.124 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:36:15 compute-2 nova_compute[232428]: <domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <domain>kvm</domain>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <arch>i686</arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <vcpu max='240'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <iothreads supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <os supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='firmware'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <loader supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>rom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pflash</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='readonly'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>yes</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='secure'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </loader>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </os>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='maximumMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <vendor>AMD</vendor>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='succor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='custom' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-128'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-256'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-512'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <memoryBacking supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='sourceType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>anonymous</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>memfd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </memoryBacking>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <disk supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='diskDevice'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>disk</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cdrom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>floppy</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>lun</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ide</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>fdc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>sata</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <graphics supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vnc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egl-headless</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </graphics>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <video supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='modelType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vga</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cirrus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>none</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>bochs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ramfb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </video>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hostdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='mode'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>subsystem</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='startupPolicy'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>mandatory</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>requisite</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>optional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='subsysType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pci</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='capsType'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='pciBackend'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hostdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <rng supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>random</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <filesystem supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='driverType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>path</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>handle</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtiofs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </filesystem>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <tpm supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-tis</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-crb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emulator</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>external</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendVersion'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>2.0</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </tpm>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <redirdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </redirdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <channel supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </channel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <crypto supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </crypto>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <interface supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>passt</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <panic supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>isa</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>hyperv</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </panic>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <console supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>null</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dev</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pipe</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stdio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>udp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tcp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu-vdagent</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </console>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <gic supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <genid supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backup supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <async-teardown supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <ps2 supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sev supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sgx supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hyperv supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='features'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>relaxed</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vapic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>spinlocks</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vpindex</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>runtime</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>synic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stimer</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reset</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vendor_id</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>frequencies</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reenlightenment</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tlbflush</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ipi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>avic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emsr_bitmap</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>xmm_input</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hyperv>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <launchSecurity supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='sectype'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tdx</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </launchSecurity>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]: </domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.131 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:36:15 compute-2 nova_compute[232428]: <domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <domain>kvm</domain>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <arch>i686</arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <vcpu max='4096'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <iothreads supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <os supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='firmware'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <loader supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>rom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pflash</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='readonly'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>yes</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='secure'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </loader>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </os>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='maximumMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <vendor>AMD</vendor>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='succor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='custom' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-128'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-256'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-512'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <memoryBacking supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='sourceType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>anonymous</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>memfd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </memoryBacking>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <disk supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='diskDevice'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>disk</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cdrom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>floppy</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>lun</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>fdc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>sata</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <graphics supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vnc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egl-headless</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </graphics>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <video supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='modelType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vga</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cirrus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>none</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>bochs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ramfb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </video>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hostdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='mode'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>subsystem</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='startupPolicy'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>mandatory</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>requisite</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>optional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='subsysType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pci</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='capsType'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='pciBackend'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hostdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <rng supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>random</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <filesystem supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='driverType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>path</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>handle</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtiofs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </filesystem>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <tpm supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-tis</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-crb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emulator</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>external</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendVersion'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>2.0</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </tpm>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <redirdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </redirdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <channel supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </channel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <crypto supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </crypto>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <interface supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>passt</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <panic supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>isa</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>hyperv</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </panic>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <console supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>null</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dev</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pipe</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stdio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>udp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tcp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu-vdagent</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </console>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <gic supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <genid supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backup supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <async-teardown supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <ps2 supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sev supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sgx supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hyperv supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='features'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>relaxed</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vapic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>spinlocks</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vpindex</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>runtime</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>synic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stimer</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reset</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vendor_id</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>frequencies</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reenlightenment</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tlbflush</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ipi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>avic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emsr_bitmap</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>xmm_input</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hyperv>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <launchSecurity supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='sectype'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tdx</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </launchSecurity>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]: </domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.163 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.168 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:36:15 compute-2 nova_compute[232428]: <domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <domain>kvm</domain>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <arch>x86_64</arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <vcpu max='240'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <iothreads supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <os supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='firmware'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <loader supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>rom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pflash</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='readonly'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>yes</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='secure'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </loader>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </os>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='maximumMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <vendor>AMD</vendor>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='succor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='custom' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-128'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-256'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-512'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <memoryBacking supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='sourceType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>anonymous</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>memfd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </memoryBacking>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <disk supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='diskDevice'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>disk</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cdrom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>floppy</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>lun</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ide</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>fdc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>sata</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <graphics supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vnc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egl-headless</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </graphics>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <video supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='modelType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vga</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cirrus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>none</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>bochs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ramfb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </video>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hostdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='mode'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>subsystem</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='startupPolicy'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>mandatory</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>requisite</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>optional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='subsysType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pci</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='capsType'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='pciBackend'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hostdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <rng supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>random</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <filesystem supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='driverType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>path</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>handle</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtiofs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </filesystem>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <tpm supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-tis</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-crb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emulator</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>external</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendVersion'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>2.0</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </tpm>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <redirdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </redirdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <channel supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </channel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <crypto supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </crypto>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <interface supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>passt</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <panic supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>isa</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>hyperv</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </panic>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <console supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>null</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dev</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pipe</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stdio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>udp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tcp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu-vdagent</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </console>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <gic supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <genid supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backup supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <async-teardown supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <ps2 supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sev supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sgx supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hyperv supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='features'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>relaxed</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vapic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>spinlocks</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vpindex</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>runtime</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>synic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stimer</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reset</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vendor_id</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>frequencies</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reenlightenment</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tlbflush</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ipi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>avic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emsr_bitmap</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>xmm_input</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hyperv>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <launchSecurity supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='sectype'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tdx</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </launchSecurity>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]: </domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.236 232432 DEBUG nova.virt.libvirt.volume.mount [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.243 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:36:15 compute-2 nova_compute[232428]: <domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <domain>kvm</domain>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <arch>x86_64</arch>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <vcpu max='4096'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <iothreads supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <os supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='firmware'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>efi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <loader supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>rom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pflash</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='readonly'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>yes</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='secure'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>yes</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>no</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </loader>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </os>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='maximum' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='maximumMigratable'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>on</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>off</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='host-model' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <vendor>AMD</vendor>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='x2apic'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='stibp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='succor'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lbrv'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <mode name='custom' supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Broadwell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Cooperlake-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Denverton-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Dhyana-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='auto-ibrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amd-psfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='no-nested-data-bp'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='null-sel-clr-base'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='stibp-always-on'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='EPYC-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-128'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-256'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx10-512'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='prefetchiti'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Haswell-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='IvyBridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='KnightsMill-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4fmaps'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-4vnniw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512er'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512pf'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fma4'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tbm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xop'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='amx-tile'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-bf16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-fp16'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bitalg'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vbmi2'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrc'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fzrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='la57'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='taa-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='tsx-ldtrk'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xfd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='SierraForest-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ifma'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-ne-convert'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx-vnni-int8'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='bus-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cmpccxadd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fbsdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='fsrs'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ibrs-all'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mcdt-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pbrsb-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='psdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='serialize'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vaes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='vpclmulqdq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='hle'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='rtm'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512bw'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512cd'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512dq'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512f'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='avx512vl'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='invpcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pcid'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='pku'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='mpx'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v2'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v3'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='core-capability'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='split-lock-detect'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='Snowridge-v4'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='cldemote'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='erms'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='gfni'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdir64b'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='movdiri'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='xsaves'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='athlon-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='core2duo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='coreduo-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='n270-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='ss'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <blockers model='phenom-v1'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnow'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <feature name='3dnowext'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </blockers>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </mode>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <memoryBacking supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <enum name='sourceType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>anonymous</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <value>memfd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </memoryBacking>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <disk supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='diskDevice'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>disk</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cdrom</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>floppy</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>lun</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>fdc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>sata</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <graphics supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vnc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egl-headless</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </graphics>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <video supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='modelType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vga</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>cirrus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>none</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>bochs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ramfb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </video>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hostdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='mode'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>subsystem</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='startupPolicy'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>mandatory</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>requisite</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>optional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='subsysType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pci</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>scsi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='capsType'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='pciBackend'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hostdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <rng supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtio-non-transitional</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>random</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>egd</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <filesystem supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='driverType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>path</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>handle</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>virtiofs</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </filesystem>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <tpm supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-tis</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tpm-crb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emulator</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>external</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendVersion'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>2.0</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </tpm>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <redirdev supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='bus'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>usb</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </redirdev>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <channel supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </channel>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <crypto supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendModel'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>builtin</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </crypto>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <interface supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='backendType'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>default</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>passt</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <panic supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='model'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>isa</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>hyperv</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </panic>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <console supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='type'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>null</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vc</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pty</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dev</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>file</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>pipe</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stdio</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>udp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tcp</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>unix</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>qemu-vdagent</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>dbus</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </console>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <features>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <gic supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <vmcoreinfo supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <genid supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backingStoreInput supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <backup supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <async-teardown supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <ps2 supported='yes'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sev supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <sgx supported='no'/>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <hyperv supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='features'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>relaxed</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vapic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>spinlocks</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vpindex</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>runtime</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>synic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>stimer</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reset</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>vendor_id</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>frequencies</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>reenlightenment</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tlbflush</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>ipi</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>avic</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>emsr_bitmap</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>xmm_input</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <spinlocks>4095</spinlocks>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <stimer_direct>on</stimer_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </defaults>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </hyperv>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     <launchSecurity supported='yes'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       <enum name='sectype'>
Nov 29 07:36:15 compute-2 nova_compute[232428]:         <value>tdx</value>
Nov 29 07:36:15 compute-2 nova_compute[232428]:       </enum>
Nov 29 07:36:15 compute-2 nova_compute[232428]:     </launchSecurity>
Nov 29 07:36:15 compute-2 nova_compute[232428]:   </features>
Nov 29 07:36:15 compute-2 nova_compute[232428]: </domainCapabilities>
Nov 29 07:36:15 compute-2 nova_compute[232428]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.315 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.316 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.316 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.316 232432 INFO nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Secure Boot support detected
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.319 232432 INFO nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.319 232432 INFO nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.335 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 07:36:15 compute-2 nova_compute[232428]:   <model>Nehalem</model>
Nov 29 07:36:15 compute-2 nova_compute[232428]: </cpu>
Nov 29 07:36:15 compute-2 nova_compute[232428]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.338 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.364 232432 INFO nova.virt.node [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Determined node identity 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from /var/lib/nova/compute_id
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.420 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Verified node 77f31ad1-818f-4610-8dd1-3fbcd25133f2 matches my host compute-2.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 29 07:36:15 compute-2 nova_compute[232428]: 2025-11-29 07:36:15.533 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:36:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:15 compute-2 ceph-mon[77138]: pgmap v939: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.060 232432 ERROR nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Could not retrieve compute node resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '77f31ad1-818f-4610-8dd1-3fbcd25133f2' not found: No resource provider with uuid 77f31ad1-818f-4610-8dd1-3fbcd25133f2 found  ", "request_id": "req-45a2bdb3-6e37-4ef4-b281-94c83ed5b2f0"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '77f31ad1-818f-4610-8dd1-3fbcd25133f2' not found: No resource provider with uuid 77f31ad1-818f-4610-8dd1-3fbcd25133f2 found  ", "request_id": "req-45a2bdb3-6e37-4ef4-b281-94c83ed5b2f0"}]}
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.097 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.098 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.099 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.099 232432 DEBUG nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.100 232432 DEBUG oslo_concurrency.processutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:36:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:36:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1871104271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.561 232432 DEBUG oslo_concurrency.processutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:36:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:16 compute-2 podman[232753]: 2025-11-29 07:36:16.705663801 +0000 UTC m=+0.090754489 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.785 232432 WARNING nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.786 232432 DEBUG nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5247MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.786 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:16 compute-2 nova_compute[232428]: 2025-11-29 07:36:16.787 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1871104271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/804362916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1294229932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:17 compute-2 nova_compute[232428]: 2025-11-29 07:36:17.635 232432 ERROR nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '77f31ad1-818f-4610-8dd1-3fbcd25133f2' not found: No resource provider with uuid 77f31ad1-818f-4610-8dd1-3fbcd25133f2 found  ", "request_id": "req-604b087d-f956-404c-8eb6-ff32a4b4f227"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '77f31ad1-818f-4610-8dd1-3fbcd25133f2' not found: No resource provider with uuid 77f31ad1-818f-4610-8dd1-3fbcd25133f2 found  ", "request_id": "req-604b087d-f956-404c-8eb6-ff32a4b4f227"}]}
Nov 29 07:36:17 compute-2 nova_compute[232428]: 2025-11-29 07:36:17.637 232432 DEBUG nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:36:17 compute-2 nova_compute[232428]: 2025-11-29 07:36:17.637 232432 DEBUG nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:36:17 compute-2 nova_compute[232428]: 2025-11-29 07:36:17.736 232432 INFO nova.scheduler.client.report [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [req-ca1701c1-8420-42b4-ba23-ea01850153b2] Created resource provider record via placement API for resource provider with UUID 77f31ad1-818f-4610-8dd1-3fbcd25133f2 and name compute-2.ctlplane.example.com.
Nov 29 07:36:17 compute-2 nova_compute[232428]: 2025-11-29 07:36:17.779 232432 DEBUG oslo_concurrency.processutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:36:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:36:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3008547221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.246 232432 DEBUG oslo_concurrency.processutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.253 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 07:36:18 compute-2 nova_compute[232428]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.253 232432 INFO nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] kernel doesn't support AMD SEV
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.254 232432 DEBUG nova.compute.provider_tree [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.255 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.257 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 07:36:18 compute-2 nova_compute[232428]:   <arch>x86_64</arch>
Nov 29 07:36:18 compute-2 nova_compute[232428]:   <model>Nehalem</model>
Nov 29 07:36:18 compute-2 nova_compute[232428]:   <vendor>AMD</vendor>
Nov 29 07:36:18 compute-2 nova_compute[232428]:   <topology sockets="8" cores="1" threads="1"/>
Nov 29 07:36:18 compute-2 nova_compute[232428]: </cpu>
Nov 29 07:36:18 compute-2 nova_compute[232428]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.326 232432 DEBUG nova.scheduler.client.report [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Updated inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.327 232432 DEBUG nova.compute.provider_tree [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Updating resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.327 232432 DEBUG nova.compute.provider_tree [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.463 232432 DEBUG nova.compute.provider_tree [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Updating resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:36:18 compute-2 ceph-mon[77138]: pgmap v940: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1857400999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3072530336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.498 232432 DEBUG nova.compute.resource_tracker [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.499 232432 DEBUG oslo_concurrency.lockutils [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.499 232432 DEBUG nova.service [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.573 232432 DEBUG nova.service [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 07:36:18 compute-2 nova_compute[232428]: 2025-11-29 07:36:18.574 232432 DEBUG nova.servicegroup.drivers.db [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 07:36:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:36:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:36:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:36:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:36:20 compute-2 sudo[232796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:20 compute-2 sudo[232796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-2 sudo[232796]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-2 sudo[232821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:36:20 compute-2 sudo[232821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-2 sudo[232821]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-2 sudo[232846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:20 compute-2 sudo[232846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-2 sudo[232846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-2 sudo[232871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:36:20 compute-2 sudo[232871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:20.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:21 compute-2 sudo[232871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3008547221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:36:21 compute-2 ceph-mon[77138]: pgmap v941: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:21 compute-2 sudo[232927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:21 compute-2 sudo[232927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:21 compute-2 sudo[232927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:21 compute-2 sudo[232959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:21 compute-2 sudo[232959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:21 compute-2 podman[232951]: 2025-11-29 07:36:21.278186706 +0000 UTC m=+0.063762066 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:36:21 compute-2 sudo[232959]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:36:23 compute-2 ceph-mon[77138]: pgmap v942: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:36:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:36:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:23.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:36:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:36:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:36:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:36:24 compute-2 ceph-mon[77138]: pgmap v943: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:25.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:26.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:27.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:27 compute-2 ceph-mon[77138]: pgmap v944: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:36:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:28.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:36:29 compute-2 ceph-mon[77138]: pgmap v945: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:29.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:30 compute-2 ceph-mon[77138]: pgmap v946: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:32.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:32 compute-2 podman[233002]: 2025-11-29 07:36:32.734684458 +0000 UTC m=+0.121748310 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:36:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:33 compute-2 ceph-mon[77138]: pgmap v947: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:34.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:35 compute-2 ceph-mon[77138]: pgmap v948: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:35.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:36 compute-2 ceph-mon[77138]: pgmap v949: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:36 compute-2 nova_compute[232428]: 2025-11-29 07:36:36.576 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:36:36 compute-2 nova_compute[232428]: 2025-11-29 07:36:36.656 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:36:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:36.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:38.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:38 compute-2 ceph-mon[77138]: pgmap v950: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:39.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:39 compute-2 ceph-mon[77138]: pgmap v951: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:40.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:41 compute-2 sudo[233033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:41 compute-2 sudo[233033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:41 compute-2 sudo[233033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:41 compute-2 sudo[233058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:41 compute-2 sudo[233058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:41 compute-2 sudo[233058]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:41 compute-2 ceph-mon[77138]: pgmap v952: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:42 compute-2 sudo[233083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:42 compute-2 sudo[233083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:42 compute-2 sudo[233083]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:42 compute-2 sudo[233108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:36:42 compute-2 sudo[233108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:42 compute-2 sudo[233108]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:36:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:42.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:36:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:43.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:36:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:36:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:45.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:45 compute-2 ceph-mon[77138]: pgmap v953: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:46.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:47 compute-2 ceph-mon[77138]: pgmap v954: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:47 compute-2 podman[233136]: 2025-11-29 07:36:47.686197268 +0000 UTC m=+0.077289436 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:36:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:48.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:49 compute-2 ceph-mon[77138]: pgmap v955: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:50.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:50 compute-2 ceph-mon[77138]: pgmap v956: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:51.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:51 compute-2 podman[233159]: 2025-11-29 07:36:51.669473571 +0000 UTC m=+0.067962640 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:36:52 compute-2 ceph-mon[77138]: pgmap v957: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:54 compute-2 ceph-mon[77138]: pgmap v958: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:36:56 compute-2 ceph-mon[77138]: pgmap v959: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:36:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:58.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:36:59 compute-2 ceph-mon[77138]: pgmap v960: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:36:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:36:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:36:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:59.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:36:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:36:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122665251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:36:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:36:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122665251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:37:00 compute-2 ceph-mon[77138]: pgmap v961: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/297991512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:37:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/297991512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:37:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1122665251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:37:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1122665251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:37:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:00.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:01.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:37:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4227894432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:37:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:37:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4227894432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:37:01 compute-2 sudo[233184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:01 compute-2 sudo[233184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:01 compute-2 sudo[233184]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:01 compute-2 sudo[233209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:01 compute-2 sudo[233209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:01 compute-2 sudo[233209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:02 compute-2 ceph-mon[77138]: pgmap v962: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4227894432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:37:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4227894432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:37:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:02.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:37:03.283 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:37:03.284 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:37:03.284 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:37:03 compute-2 podman[233235]: 2025-11-29 07:37:03.695240306 +0000 UTC m=+0.088787721 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:37:04 compute-2 ceph-mon[77138]: pgmap v963: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:07 compute-2 ceph-mon[77138]: pgmap v964: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:08 compute-2 ceph-mon[77138]: pgmap v965: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:10 compute-2 ceph-mon[77138]: pgmap v966: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:10.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:12 compute-2 ceph-mon[77138]: pgmap v967: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:37:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:13.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:37:14 compute-2 ceph-mon[77138]: pgmap v968: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.455 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.456 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.456 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.457 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.457 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.458 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.458 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.458 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.459 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.494 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.495 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.495 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.495 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.496 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:37:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:14.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:37:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1864435368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:14 compute-2 nova_compute[232428]: 2025-11-29 07:37:14.995 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:37:15 compute-2 nova_compute[232428]: 2025-11-29 07:37:15.173 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:37:15 compute-2 nova_compute[232428]: 2025-11-29 07:37:15.175 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5304MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:37:15 compute-2 nova_compute[232428]: 2025-11-29 07:37:15.176 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:37:15 compute-2 nova_compute[232428]: 2025-11-29 07:37:15.176 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:37:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:15.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2266064152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1864435368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:16 compute-2 ceph-mon[77138]: pgmap v969: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3830689712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:16 compute-2 nova_compute[232428]: 2025-11-29 07:37:16.162 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:37:16 compute-2 nova_compute[232428]: 2025-11-29 07:37:16.163 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:37:16 compute-2 nova_compute[232428]: 2025-11-29 07:37:16.205 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:37:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:37:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3721743495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:16 compute-2 nova_compute[232428]: 2025-11-29 07:37:16.824 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:37:16 compute-2 nova_compute[232428]: 2025-11-29 07:37:16.836 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:37:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:17.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/115100883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/698531134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3721743495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:37:18 compute-2 nova_compute[232428]: 2025-11-29 07:37:18.357 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:37:18 compute-2 nova_compute[232428]: 2025-11-29 07:37:18.359 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:37:18 compute-2 nova_compute[232428]: 2025-11-29 07:37:18.360 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:37:18 compute-2 podman[233312]: 2025-11-29 07:37:18.666539503 +0000 UTC m=+0.062456031 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 07:37:18 compute-2 ceph-mon[77138]: pgmap v970: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:18.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:19.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:20 compute-2 ceph-mon[77138]: pgmap v971: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:21.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:21 compute-2 sudo[233333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:21 compute-2 sudo[233333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-2 sudo[233333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-2 sudo[233364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:21 compute-2 sudo[233364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-2 sudo[233364]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-2 podman[233357]: 2025-11-29 07:37:21.843743739 +0000 UTC m=+0.073799340 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:37:22 compute-2 ceph-mon[77138]: pgmap v972: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:23.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:24 compute-2 ceph-mon[77138]: pgmap v973: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:25.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:25 compute-2 ceph-mon[77138]: pgmap v974: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:37:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:26.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:37:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:27.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:28 compute-2 ceph-mon[77138]: pgmap v975: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:29.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:29 compute-2 ceph-mon[77138]: pgmap v976: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:30.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:31.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:32 compute-2 ceph-mon[77138]: pgmap v977: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:32.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:33.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:34 compute-2 podman[233409]: 2025-11-29 07:37:34.731541041 +0000 UTC m=+0.123242879 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:34.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:35 compute-2 ceph-mon[77138]: pgmap v978: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:35.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:36 compute-2 ceph-mon[77138]: pgmap v979: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:39 compute-2 ceph-mon[77138]: pgmap v980: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:41 compute-2 ceph-mon[77138]: pgmap v981: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:41.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:42 compute-2 sudo[233440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:42 compute-2 sudo[233440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 sudo[233440]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:42 compute-2 sudo[233465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:42 compute-2 sudo[233465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 sudo[233465]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:42 compute-2 sudo[233490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:42 compute-2 sudo[233490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 sudo[233490]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:42 compute-2 sudo[233515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:42 compute-2 sudo[233515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 sudo[233515]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:42 compute-2 sudo[233540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:42 compute-2 sudo[233540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 sudo[233540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:42 compute-2 sudo[233565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:37:42 compute-2 sudo[233565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:42.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:42 compute-2 sudo[233565]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:43.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:45.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:49.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:49 compute-2 podman[233627]: 2025-11-29 07:37:49.687628974 +0000 UTC m=+0.086979875 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 07:37:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:37:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:37:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 1256..1867) lease_timeout -- calling new election
Nov 29 07:37:52 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:37:52 compute-2 ceph-mon[77138]: paxos.1).electionLogic(44) init, last seen epoch 44
Nov 29 07:37:52 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:37:52 compute-2 podman[233648]: 2025-11-29 07:37:52.66445524 +0000 UTC m=+0.069323462 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:37:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:52 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:37:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:53.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:54 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:37:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:55.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:37:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:58 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:37:58 compute-2 ceph-mon[77138]: pgmap v982: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:37:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:37:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:37:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:37:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:37:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:37:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v983: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v984: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v985: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v986: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v987: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:38:00 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v988: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v989: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v990: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 07:38:00 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:38:00 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:38:00 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:38:00 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 28m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:38:00 compute-2 ceph-mon[77138]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 07:38:00 compute-2 ceph-mon[77138]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 07:38:00 compute-2 ceph-mon[77138]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 07:38:00 compute-2 ceph-mon[77138]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 07:38:00 compute-2 ceph-mon[77138]: pgmap v991: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:00 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:00 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:00 compute-2 sshd-session[233672]: Invalid user solana from 45.148.10.240 port 35854
Nov 29 07:38:00 compute-2 sshd-session[233672]: Connection closed by invalid user solana 45.148.10.240 port 35854 [preauth]
Nov 29 07:38:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:02 compute-2 sudo[233675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:02 compute-2 sudo[233675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:02 compute-2 sudo[233675]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:02 compute-2 sudo[233700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:02 compute-2 sudo[233700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:02 compute-2 sudo[233700]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:03 compute-2 ceph-mon[77138]: pgmap v992: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:03.284 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:03.285 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:03.285 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:05.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:05 compute-2 ceph-mon[77138]: pgmap v993: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:05 compute-2 podman[233727]: 2025-11-29 07:38:05.752080674 +0000 UTC m=+0.122975581 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 07:38:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:06 compute-2 ceph-mon[77138]: pgmap v994: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:38:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:07 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 07:38:07 compute-2 ceph-mon[77138]: paxos.1).electionLogic(49) init, last seen epoch 49, mid-election, bumping
Nov 29 07:38:07 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:38:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:07 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:38:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:38:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:09 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 07:38:09 compute-2 ceph-mon[77138]: pgmap v995: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:09 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 07:38:09 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 07:38:09 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:38:09 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 07:38:09 compute-2 ceph-mon[77138]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:38:09 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 28m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 07:38:09 compute-2 ceph-mon[77138]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 07:38:09 compute-2 ceph-mon[77138]: Cluster is now healthy
Nov 29 07:38:09 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:38:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:10 compute-2 ceph-mon[77138]: pgmap v996: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:38:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:38:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:11 compute-2 ceph-mon[77138]: pgmap v997: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:13.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:14 compute-2 ceph-mon[77138]: pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:15.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:15 compute-2 sudo[233760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:15 compute-2 sudo[233760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:15 compute-2 sudo[233760]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:15 compute-2 sudo[233785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:38:15 compute-2 sudo[233785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:15 compute-2 sudo[233785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:16 compute-2 ceph-mon[77138]: pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:38:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:17 compute-2 ceph-mon[77138]: pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3537593331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1753995405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.350 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.352 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.410 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.411 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.411 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.451 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.452 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.452 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.453 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.453 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.454 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.454 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.455 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.455 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.498 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.499 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.499 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.500 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.501 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1337845297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3142815245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:38:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:38:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2332414883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:18 compute-2 nova_compute[232428]: 2025-11-29 07:38:18.965 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.148 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.150 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5305MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.150 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.150 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.245 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.245 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.266 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:19.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3767838489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2332414883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:19 compute-2 ceph-mon[77138]: pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.685 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.693 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.716 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.717 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:38:19 compute-2 nova_compute[232428]: 2025-11-29 07:38:19.717 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:20 compute-2 podman[233856]: 2025-11-29 07:38:20.673825241 +0000 UTC m=+0.072251512 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3767838489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:21.757 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:38:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:21.759 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:38:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:38:21.760 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:38:22 compute-2 sudo[233876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:22 compute-2 sudo[233876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:22 compute-2 sudo[233876]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:22 compute-2 sudo[233901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:22 compute-2 sudo[233901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:22 compute-2 sudo[233901]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:23.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:23 compute-2 ceph-mon[77138]: pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:23 compute-2 podman[233927]: 2025-11-29 07:38:23.71877519 +0000 UTC m=+0.107096223 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 07:38:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:25 compute-2 ceph-mon[77138]: pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:30 compute-2 ceph-mon[77138]: pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:30 compute-2 ceph-mon[77138]: pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1069074270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:38:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1069074270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:38:30 compute-2 ceph-mon[77138]: pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:32 compute-2 ceph-mon[77138]: pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:33.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:34 compute-2 ceph-mon[77138]: pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:35.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:36 compute-2 ceph-mon[77138]: pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:36 compute-2 podman[233953]: 2025-11-29 07:38:36.717304696 +0000 UTC m=+0.115442079 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:38:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:37 compute-2 ceph-mon[77138]: pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:38:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:38:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:40 compute-2 ceph-mon[77138]: pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:38:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:38:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:41.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:42 compute-2 ceph-mon[77138]: pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:42 compute-2 sudo[233983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:42 compute-2 sudo[233983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:42 compute-2 sudo[233983]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:42 compute-2 sudo[234008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:42 compute-2 sudo[234008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:42 compute-2 sudo[234008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:44 compute-2 ceph-mon[77138]: pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:45 compute-2 ceph-mon[77138]: pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:48 compute-2 ceph-mon[77138]: pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:38:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:38:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:49.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:50 compute-2 ceph-mon[77138]: pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:51.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:51 compute-2 podman[234038]: 2025-11-29 07:38:51.694914501 +0000 UTC m=+0.089089569 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:38:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:53 compute-2 ceph-mon[77138]: pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:53.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:54 compute-2 podman[234058]: 2025-11-29 07:38:54.662331858 +0000 UTC m=+0.060510602 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:38:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:55.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:38:55 compute-2 ceph-mon[77138]: pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:38:56 compute-2 ceph-mon[77138]: pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:57.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:57 compute-2 ceph-mon[77138]: pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:38:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:38:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:38:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:38:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:38:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:59.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:38:59 compute-2 ceph-mon[77138]: pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:01.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:02 compute-2 ceph-mon[77138]: pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:02 compute-2 sudo[234082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:02 compute-2 sudo[234082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:02 compute-2 sudo[234082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:02 compute-2 sudo[234107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:02 compute-2 sudo[234107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:02 compute-2 sudo[234107]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:39:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:39:03.285 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:39:03.286 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:39:03.286 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:03.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:04 compute-2 ceph-mon[77138]: pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:07 compute-2 ceph-mon[77138]: pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:07.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:07 compute-2 podman[234135]: 2025-11-29 07:39:07.701211313 +0000 UTC m=+0.105915947 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 07:39:08 compute-2 ceph-mon[77138]: pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:10 compute-2 ceph-mon[77138]: pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:11.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:13.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:15 compute-2 ceph-mon[77138]: pgmap v1027: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:39:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:15.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:39:15 compute-2 sudo[234165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:15 compute-2 sudo[234165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:15 compute-2 sudo[234165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-2 sudo[234190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:16 compute-2 sudo[234190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-2 sudo[234190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-2 sudo[234215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:16 compute-2 sudo[234215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-2 sudo[234215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-2 sudo[234240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:39:16 compute-2 sudo[234240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:16 compute-2 ceph-mon[77138]: pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:16 compute-2 ceph-mon[77138]: pgmap v1029: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:16 compute-2 sudo[234240]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:17.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:17.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:39:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:19 compute-2 ceph-mon[77138]: pgmap v1030: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4283126515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/871685077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.719 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.720 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.720 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.720 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.742 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.743 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.744 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.744 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.744 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.745 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.745 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.745 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.746 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.814 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.814 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.815 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.815 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:39:19 compute-2 nova_compute[232428]: 2025-11-29 07:39:19.816 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:39:20 compute-2 ceph-mon[77138]: pgmap v1031: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3741539133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3261707078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:39:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3101907084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.283 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.546 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.548 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5296MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.549 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.550 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.623 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.623 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:39:20 compute-2 nova_compute[232428]: 2025-11-29 07:39:20.647 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:39:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:21.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:39:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2132736440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:21 compute-2 nova_compute[232428]: 2025-11-29 07:39:21.123 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:39:21 compute-2 nova_compute[232428]: 2025-11-29 07:39:21.132 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:39:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:21.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3101907084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2132736440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:21 compute-2 nova_compute[232428]: 2025-11-29 07:39:21.978 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:39:21 compute-2 nova_compute[232428]: 2025-11-29 07:39:21.980 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:39:21 compute-2 nova_compute[232428]: 2025-11-29 07:39:21.981 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:22 compute-2 podman[234343]: 2025-11-29 07:39:22.702606098 +0000 UTC m=+0.093149895 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:39:22 compute-2 sudo[234362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:22 compute-2 sudo[234362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:22 compute-2 sudo[234362]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:23 compute-2 sudo[234387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:23 compute-2 sudo[234387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:23 compute-2 sudo[234387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:23.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:23 compute-2 ceph-mon[77138]: pgmap v1032: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:24 compute-2 ceph-mon[77138]: pgmap v1033: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:25 compute-2 ceph-mon[77138]: pgmap v1034: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:25 compute-2 podman[234414]: 2025-11-29 07:39:25.697673854 +0000 UTC m=+0.086227002 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:39:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3695365885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:39:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3695365885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:28 compute-2 ceph-mon[77138]: pgmap v1035: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3695365885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3695365885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:28 compute-2 sudo[234436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:28 compute-2 sudo[234436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:28 compute-2 sudo[234436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:28 compute-2 sudo[234461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:39:28 compute-2 sudo[234461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:28 compute-2 sudo[234461]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:29.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:39:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:39:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:31 compute-2 ceph-mon[77138]: pgmap v1036: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:31.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:32 compute-2 ceph-mon[77138]: pgmap v1037: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:34 compute-2 ceph-mon[77138]: pgmap v1038: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:35 compute-2 ceph-mon[77138]: pgmap v1039: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:37.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:38 compute-2 podman[234491]: 2025-11-29 07:39:38.786384301 +0000 UTC m=+0.187360911 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:39:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:39.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:39 compute-2 ceph-mon[77138]: pgmap v1040: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:40 compute-2 ceph-mon[77138]: pgmap v1041: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:39:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:41.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:39:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:39:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:39:41 compute-2 ceph-mon[77138]: pgmap v1042: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:43.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:43 compute-2 sudo[234520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:43 compute-2 sudo[234520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:43 compute-2 sudo[234520]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:43 compute-2 sudo[234545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:43 compute-2 sudo[234545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:43 compute-2 sudo[234545]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:43.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:44 compute-2 ceph-mon[77138]: pgmap v1043: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:45.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:45 compute-2 ceph-mon[77138]: pgmap v1044: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:47.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:39:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:39:49 compute-2 ceph-mon[77138]: pgmap v1045: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:49.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:50 compute-2 ceph-mon[77138]: pgmap v1046: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:51.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=404 latency=0.003000093s ======
Nov 29 07:39:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.108 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.003000093s
Nov 29 07:39:52 compute-2 ceph-mon[77138]: pgmap v1047: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:53.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:53.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:53 compute-2 podman[234575]: 2025-11-29 07:39:53.665480519 +0000 UTC m=+0.070785227 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:39:55 compute-2 ceph-mon[77138]: pgmap v1048: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:39:56 compute-2 podman[234597]: 2025-11-29 07:39:56.679449607 +0000 UTC m=+0.080315240 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:39:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:39:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:39:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:59.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:39:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 07:39:59 compute-2 ceph-mon[77138]: pgmap v1049: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:39:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:39:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:39:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 07:40:02 compute-2 ceph-mon[77138]: pgmap v1050: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:40:02 compute-2 ceph-mon[77138]: pgmap v1051: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:40:02 compute-2 ceph-mon[77138]: osdmap e136: 3 total, 3 up, 3 in
Nov 29 07:40:02 compute-2 ceph-mon[77138]: pgmap v1053: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 102 B/s wr, 0 op/s
Nov 29 07:40:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:40:03.286 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:40:03.287 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:40:03.287 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:03 compute-2 sudo[234621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:03 compute-2 sudo[234621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:03 compute-2 sudo[234621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:03 compute-2 sudo[234646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:03 compute-2 sudo[234646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:03 compute-2 sudo[234646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:04 compute-2 sshd-session[234671]: Invalid user solana from 45.148.10.240 port 37744
Nov 29 07:40:04 compute-2 sshd-session[234671]: Connection closed by invalid user solana 45.148.10.240 port 37744 [preauth]
Nov 29 07:40:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:05 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:40:05 compute-2 ceph-mon[77138]: osdmap e137: 3 total, 3 up, 3 in
Nov 29 07:40:05 compute-2 ceph-mon[77138]: pgmap v1055: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Nov 29 07:40:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:07.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:08 compute-2 ceph-mon[77138]: pgmap v1056: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Nov 29 07:40:08 compute-2 ceph-mon[77138]: pgmap v1057: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 383 B/s wr, 0 op/s
Nov 29 07:40:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:09 compute-2 podman[234676]: 2025-11-29 07:40:09.682216346 +0000 UTC m=+0.091618278 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 07:40:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:11 compute-2 ceph-mon[77138]: pgmap v1058: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 432 B/s rd, 648 B/s wr, 1 op/s
Nov 29 07:40:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:13.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:16 compute-2 ceph-mon[77138]: pgmap v1059: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s
Nov 29 07:40:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.907526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402018907677, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2366, "num_deletes": 251, "total_data_size": 5849118, "memory_usage": 5931536, "flush_reason": "Manual Compaction"}
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 29 07:40:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402018943378, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3821160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18349, "largest_seqno": 20709, "table_properties": {"data_size": 3811574, "index_size": 6016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20105, "raw_average_key_size": 20, "raw_value_size": 3792071, "raw_average_value_size": 3869, "num_data_blocks": 268, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401767, "oldest_key_time": 1764401767, "file_creation_time": 1764402018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 36004 microseconds, and 16765 cpu microseconds.
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.943496) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3821160 bytes OK
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.943564) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.949037) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.949095) EVENT_LOG_v1 {"time_micros": 1764402018949081, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.949130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5838773, prev total WAL file size 5838814, number of live WAL files 2.
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.952172) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3731KB)], [36(8540KB)]
Nov 29 07:40:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402018952270, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 12566229, "oldest_snapshot_seqno": -1}
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4839 keys, 10483448 bytes, temperature: kUnknown
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402019048187, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10483448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10448209, "index_size": 22009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122023, "raw_average_key_size": 25, "raw_value_size": 10357609, "raw_average_value_size": 2140, "num_data_blocks": 910, "num_entries": 4839, "num_filter_entries": 4839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.048665) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10483448 bytes
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.050884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.8 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5368, records dropped: 529 output_compression: NoCompression
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.050922) EVENT_LOG_v1 {"time_micros": 1764402019050904, "job": 20, "event": "compaction_finished", "compaction_time_micros": 96095, "compaction_time_cpu_micros": 27845, "output_level": 6, "num_output_files": 1, "total_output_size": 10483448, "num_input_records": 5368, "num_output_records": 4839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402019053700, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402019058036, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:18.952081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.058158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.058166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.058169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.058172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:40:19.058175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.452 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.453 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.478 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.478 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.479 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.494 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.495 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.496 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.496 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.497 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.497 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.497 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.498 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.525 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.526 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.526 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.527 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:40:19 compute-2 nova_compute[232428]: 2025-11-29 07:40:19.527 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:40:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:40:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/514502764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.040 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.257 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.260 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5279MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.260 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.261 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.329 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.329 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.358 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:40:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:40:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/845075005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.851 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.861 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.883 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.887 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:40:20 compute-2 nova_compute[232428]: 2025-11-29 07:40:20.887 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:21.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:21 compute-2 nova_compute[232428]: 2025-11-29 07:40:21.593 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:23.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:23 compute-2 sudo[234754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:23 compute-2 sudo[234754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:23 compute-2 sudo[234754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:23 compute-2 sudo[234779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:23 compute-2 sudo[234779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:23 compute-2 sudo[234779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:23 compute-2 ceph-mon[77138]: pgmap v1060: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 586 B/s wr, 5 op/s
Nov 29 07:40:23 compute-2 ceph-mon[77138]: pgmap v1061: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 683 KiB/s wr, 6 op/s
Nov 29 07:40:23 compute-2 ceph-mon[77138]: pgmap v1062: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 683 KiB/s wr, 6 op/s
Nov 29 07:40:24 compute-2 podman[234804]: 2025-11-29 07:40:24.702104685 +0000 UTC m=+0.094644361 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 07:40:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:25.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:26 compute-2 ceph-mon[77138]: pgmap v1063: 305 pgs: 305 active+clean; 8.4 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 683 KiB/s wr, 8 op/s
Nov 29 07:40:26 compute-2 ceph-mon[77138]: osdmap e138: 3 total, 3 up, 3 in
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/514502764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1232024832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/845075005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3744940819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: pgmap v1065: 305 pgs: 305 active+clean; 8.4 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 819 KiB/s wr, 9 op/s
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/926580274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/157036288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:26 compute-2 ceph-mon[77138]: pgmap v1066: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 1.6 MiB/s wr, 4 op/s
Nov 29 07:40:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:40:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:40:27 compute-2 podman[234826]: 2025-11-29 07:40:27.687023638 +0000 UTC m=+0.087595918 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:40:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:40:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/637907282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:40:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:40:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/637907282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:40:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 07:40:28 compute-2 ceph-mon[77138]: pgmap v1067: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 819 KiB/s wr, 3 op/s
Nov 29 07:40:29 compute-2 sudo[234846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:29 compute-2 sudo[234846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:29 compute-2 sudo[234846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:29 compute-2 sudo[234872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:29 compute-2 sudo[234872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:29 compute-2 sudo[234872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:29 compute-2 sudo[234897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:29 compute-2 sudo[234897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:29 compute-2 sudo[234897]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:29 compute-2 sudo[234922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:40:29 compute-2 sudo[234922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:29.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:29 compute-2 sudo[234922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:31.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:31.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:32 compute-2 ceph-mon[77138]: pgmap v1068: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.2 MiB/s wr, 3 op/s
Nov 29 07:40:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/637907282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:40:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/637907282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:40:32 compute-2 ceph-mon[77138]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 07:40:32 compute-2 ceph-mon[77138]: pgmap v1070: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 07:40:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:33.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:33.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:35.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:35.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:37 compute-2 ceph-mon[77138]: pgmap v1071: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 1.2 MiB/s wr, 3 op/s
Nov 29 07:40:37 compute-2 ceph-mon[77138]: pgmap v1072: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 455 KiB/s wr, 6 op/s
Nov 29 07:40:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:37.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:40 compute-2 podman[234983]: 2025-11-29 07:40:40.714772701 +0000 UTC m=+0.115387981 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 07:40:40 compute-2 ceph-mon[77138]: pgmap v1073: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 455 KiB/s wr, 6 op/s
Nov 29 07:40:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:40:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:40:40 compute-2 ceph-mon[77138]: pgmap v1074: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 614 B/s wr, 6 op/s
Nov 29 07:40:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:41.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:40:42 compute-2 ceph-mon[77138]: pgmap v1075: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 497 B/s wr, 4 op/s
Nov 29 07:40:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:40:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:40:42 compute-2 ceph-mon[77138]: pgmap v1076: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 426 B/s wr, 3 op/s
Nov 29 07:40:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:43.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:43 compute-2 sudo[235012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:43 compute-2 sudo[235012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:43 compute-2 sudo[235012]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:43 compute-2 sudo[235037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:43 compute-2 sudo[235037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:43 compute-2 sudo[235037]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:45.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:45.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:47.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:47.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:40:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:40:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:40:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:47 compute-2 ceph-mon[77138]: pgmap v1077: 305 pgs: 305 active+clean; 29 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 683 KiB/s wr, 3 op/s
Nov 29 07:40:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:49.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:40:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:40:50 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:40:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:40:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:51.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:40:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:53.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:53.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 07:40:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:55.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:55 compute-2 podman[235068]: 2025-11-29 07:40:55.664573882 +0000 UTC m=+0.067843527 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:56 compute-2 ceph-mon[77138]: pgmap v1078: 305 pgs: 305 active+clean; 33 MiB data, 186 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Nov 29 07:40:56 compute-2 ceph-mon[77138]: pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 07:40:56 compute-2 ceph-mon[77138]: pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 07:40:56 compute-2 ceph-mon[77138]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 07:40:56 compute-2 ceph-mon[77138]: pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.7 MiB/s wr, 11 op/s
Nov 29 07:40:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:40:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:57.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:57 compute-2 ceph-mon[77138]: pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.0 MiB/s wr, 11 op/s
Nov 29 07:40:57 compute-2 ceph-mon[77138]: osdmap e140: 3 total, 3 up, 3 in
Nov 29 07:40:58 compute-2 podman[235088]: 2025-11-29 07:40:58.700917617 +0000 UTC m=+0.086745319 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 07:40:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:40:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 07:40:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:40:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:40:59 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:00 compute-2 ceph-mon[77138]: pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Nov 29 07:41:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:02 compute-2 ceph-mon[77138]: pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s wr, 0 op/s
Nov 29 07:41:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:03.287 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:03.288 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:03.288 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:41:03 compute-2 ceph-mon[77138]: pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 07:41:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:03 compute-2 sudo[235111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:03 compute-2 sudo[235111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:03 compute-2 sudo[235111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:04 compute-2 sudo[235136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:04 compute-2 sudo[235136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:04 compute-2 sudo[235136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:05.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:05 compute-2 ceph-mon[77138]: pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 07:41:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:07 compute-2 ceph-mon[77138]: pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 07:41:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:08 compute-2 ceph-mon[77138]: pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 427 B/s wr, 3 op/s
Nov 29 07:41:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:41:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:41:10 compute-2 ceph-mon[77138]: pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Nov 29 07:41:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:11 compute-2 podman[235165]: 2025-11-29 07:41:11.735013367 +0000 UTC m=+0.121631882 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:41:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:12 compute-2 ceph-mon[77138]: pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 07:41:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:13 compute-2 sudo[235192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:13 compute-2 sudo[235192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:13 compute-2 sudo[235192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:14 compute-2 sudo[235217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:41:14 compute-2 sudo[235217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:14 compute-2 sudo[235217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.447 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.449 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.450 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:41:14 compute-2 nova_compute[232428]: 2025-11-29 07:41:14.508 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:15 compute-2 ceph-mon[77138]: pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:41:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:41:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:15.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.535 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.535 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.536 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.625 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.625 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.684 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.685 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.685 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.686 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:41:16 compute-2 nova_compute[232428]: 2025-11-29 07:41:16.687 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:41:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:17 compute-2 ceph-mon[77138]: pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:41:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3996648141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.163 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.314 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.315 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5293MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.316 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.316 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:41:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:17.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.646 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.647 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:41:17 compute-2 nova_compute[232428]: 2025-11-29 07:41:17.826 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:41:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:41:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/718421907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:18 compute-2 nova_compute[232428]: 2025-11-29 07:41:18.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:41:18 compute-2 nova_compute[232428]: 2025-11-29 07:41:18.243 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:41:18 compute-2 nova_compute[232428]: 2025-11-29 07:41:18.263 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:41:18 compute-2 nova_compute[232428]: 2025-11-29 07:41:18.265 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:41:18 compute-2 nova_compute[232428]: 2025-11-29 07:41:18.265 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:41:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3996648141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:18 compute-2 ceph-mon[77138]: pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1821876993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:19.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:19.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.841 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.842 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.842 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.842 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.842 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.843 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:19 compute-2 nova_compute[232428]: 2025-11-29 07:41:19.843 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:41:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:21.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/718421907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/175367771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:21 compute-2 ceph-mon[77138]: pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:22 compute-2 nova_compute[232428]: 2025-11-29 07:41:22.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:23 compute-2 ceph-mon[77138]: pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2425489226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:23.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:24 compute-2 sudo[235291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:24 compute-2 sudo[235291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:24 compute-2 sudo[235291]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:24 compute-2 sudo[235316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:24 compute-2 sudo[235316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:24 compute-2 sudo[235316]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:25.219 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:41:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:25.222 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:41:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:25.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:26 compute-2 podman[235342]: 2025-11-29 07:41:26.658359366 +0000 UTC m=+0.055300424 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:41:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:27 compute-2 ceph-mon[77138]: pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:27.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:41:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309850330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:41:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:41:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309850330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:41:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:41:29.224 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:41:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:29 compute-2 podman[235365]: 2025-11-29 07:41:29.712858031 +0000 UTC m=+0.099435597 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:41:30 compute-2 ceph-mon[77138]: pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:30 compute-2 ceph-mon[77138]: pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:31.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:41:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 3988 writes, 21K keys, 3988 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 3988 writes, 3988 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 941 writes, 4486 keys, 941 commit groups, 1.0 writes per commit group, ingest: 9.78 MB, 0.02 MB/s
                                           Interval WAL: 942 writes, 942 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.3      0.36              0.10        10    0.036       0      0       0.0       0.0
                                             L6      1/0   10.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    105.7     87.7      0.89              0.28         9    0.099     42K   4876       0.0       0.0
                                            Sum      1/0   10.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0     75.6     83.6      1.25              0.38        19    0.066     42K   4876       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     98.4    107.2      0.32              0.11         6    0.053     15K   1569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    105.7     87.7      0.89              0.28         9    0.099     42K   4876       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     96.4      0.27              0.10         9    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.025, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.2 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 8.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000138 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(420,7.69 MB,2.5302%) FilterBlock(19,126.05 KB,0.040491%) IndexBlock(19,246.36 KB,0.0791399%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:41:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:41:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:41:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3993350651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2309850330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:41:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2309850330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:41:33 compute-2 ceph-mon[77138]: pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:34 compute-2 ceph-mon[77138]: pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1930520548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:34 compute-2 ceph-mon[77138]: pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:35.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:35 compute-2 ceph-mon[77138]: pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:37.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:38 compute-2 ceph-mon[77138]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 07:41:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:39.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:39.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:40 compute-2 ceph-mon[77138]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 687 KiB/s rd, 85 B/s wr, 5 op/s
Nov 29 07:41:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 07:41:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:41.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:41 compute-2 ceph-mon[77138]: osdmap e141: 3 total, 3 up, 3 in
Nov 29 07:41:41 compute-2 ceph-mon[77138]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 29 07:41:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 07:41:42 compute-2 podman[235391]: 2025-11-29 07:41:42.716697682 +0000 UTC m=+0.113039204 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 07:41:43 compute-2 sshd-session[235417]: Invalid user support from 78.128.112.74 port 56830
Nov 29 07:41:43 compute-2 sshd-session[235417]: Connection closed by invalid user support 78.128.112.74 port 56830 [preauth]
Nov 29 07:41:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:43.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:43.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:44 compute-2 sudo[235420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:44 compute-2 sudo[235420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:44 compute-2 sudo[235420]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:44 compute-2 sudo[235445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:44 compute-2 sudo[235445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:44 compute-2 sudo[235445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:44 compute-2 ceph-mon[77138]: osdmap e142: 3 total, 3 up, 3 in
Nov 29 07:41:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:45.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:45.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:45 compute-2 ceph-mon[77138]: pgmap v1110: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 55 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 303 KiB/s wr, 11 op/s
Nov 29 07:41:45 compute-2 ceph-mon[77138]: pgmap v1111: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 66 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 12 op/s
Nov 29 07:41:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1680067597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:41:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/806798456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:41:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:41:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:41:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:41:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:51.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:41:51 compute-2 ceph-mon[77138]: pgmap v1112: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 79 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 41 op/s
Nov 29 07:41:51 compute-2 ceph-mon[77138]: pgmap v1113: 305 pgs: 305 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.4 MiB/s wr, 36 op/s
Nov 29 07:41:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:53.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:54 compute-2 ceph-mon[77138]: pgmap v1114: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Nov 29 07:41:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 07:41:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:55.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:56 compute-2 ceph-mon[77138]: pgmap v1115: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Nov 29 07:41:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:41:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:57 compute-2 podman[235477]: 2025-11-29 07:41:57.683412799 +0000 UTC m=+0.083423694 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:41:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:41:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:41:59 compute-2 ceph-mon[77138]: pgmap v1116: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Nov 29 07:41:59 compute-2 ceph-mon[77138]: osdmap e143: 3 total, 3 up, 3 in
Nov 29 07:41:59 compute-2 ceph-mon[77138]: pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 818 B/s rd, 362 KiB/s wr, 1 op/s
Nov 29 07:41:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:41:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:41:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:41:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:59.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:00 compute-2 podman[235498]: 2025-11-29 07:42:00.659798476 +0000 UTC m=+0.057052679 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:42:01 compute-2 ceph-mon[77138]: pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 921 B/s rd, 53 KiB/s wr, 1 op/s
Nov 29 07:42:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:01.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:03 compute-2 sshd-session[235519]: Invalid user sol from 45.148.10.240 port 55628
Nov 29 07:42:03 compute-2 sshd-session[235519]: Connection closed by invalid user sol 45.148.10.240 port 55628 [preauth]
Nov 29 07:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:03.288 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:03.289 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:03.289 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:03 compute-2 ceph-mon[77138]: pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 15 KiB/s wr, 22 op/s
Nov 29 07:42:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:03.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:04 compute-2 sudo[235522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:04 compute-2 sudo[235522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:04 compute-2 sudo[235522]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:04 compute-2 sudo[235547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:04 compute-2 sudo[235547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:04 compute-2 sudo[235547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:04 compute-2 ceph-mon[77138]: pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 985 KiB/s rd, 15 KiB/s wr, 45 op/s
Nov 29 07:42:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:05.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:06 compute-2 ceph-mon[77138]: pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 80 op/s
Nov 29 07:42:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:08 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 07:42:09 compute-2 ceph-mon[77138]: pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 07:42:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:09.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:10 compute-2 nova_compute[232428]: 2025-11-29 07:42:10.829 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:10 compute-2 nova_compute[232428]: 2025-11-29 07:42:10.829 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:10 compute-2 nova_compute[232428]: 2025-11-29 07:42:10.859 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.047 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.048 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.058 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.058 232432 INFO nova.compute.claims [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.251 232432 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.288 232432 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.288 232432 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.322 232432 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.482 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.482 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.517 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:42:11 compute-2 ceph-mon[77138]: pgmap v1124: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.649 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.654 232432 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:42:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:11 compute-2 nova_compute[232428]: 2025-11-29 07:42:11.739 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:42:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4120904982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.219 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.226 232432 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.248 232432 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.282 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.284 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.286 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.293 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.294 232432 INFO nova.compute.claims [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.409 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.410 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.440 232432 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.461 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.538 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.539 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.540 232432 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Creating image(s)
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.574 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.610 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.644 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.648 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.649 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:12 compute-2 nova_compute[232428]: 2025-11-29 07:42:12.652 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:42:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2141520058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 07:42:13 compute-2 ceph-mon[77138]: pgmap v1125: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 07:42:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1677040645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3133733945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4120904982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.102 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.110 232432 DEBUG nova.compute.provider_tree [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.130 232432 DEBUG nova.scheduler.client.report [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.203 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.204 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.281 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.282 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.329 232432 INFO nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:42:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:13 compute-2 podman[235675]: 2025-11-29 07:42:13.699487559 +0000 UTC m=+0.103516735 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:42:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.749 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:42:13 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.977 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.981 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:42:13 compute-2 nova_compute[232428]: 2025-11-29 07:42:13.982 232432 INFO nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Creating image(s)
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.024 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.062 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.101 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.106 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.114 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Automatically allocating a network for project 0d3a6ccbb2794f6e85d683953ac4b5fd. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Nov 29 07:42:14 compute-2 sudo[235734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:14 compute-2 sudo[235734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:14 compute-2 sudo[235734]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:14 compute-2 sudo[235780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:14 compute-2 sudo[235780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:14 compute-2 sudo[235780]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:14 compute-2 sudo[235805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.262 232432 DEBUG nova.virt.libvirt.imagebackend [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 07:42:14 compute-2 sudo[235805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:14 compute-2 sudo[235805]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:14 compute-2 sudo[235830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:42:14 compute-2 sudo[235830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.340 232432 WARNING oslo_policy.policy [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.340 232432 WARNING oslo_policy.policy [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:42:14 compute-2 nova_compute[232428]: 2025-11-29 07:42:14.342 232432 DEBUG nova.policy [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '221a3978e81b4d679382df9385da9946', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0953a8181d0404daeae16ad65c53823', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:42:14 compute-2 podman[235928]: 2025-11-29 07:42:14.868404507 +0000 UTC m=+0.082443644 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:15 compute-2 podman[235928]: 2025-11-29 07:42:15.02996024 +0000 UTC m=+0.243999377 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:42:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3478996777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2141520058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:15 compute-2 ceph-mon[77138]: osdmap e144: 3 total, 3 up, 3 in
Nov 29 07:42:15 compute-2 ceph-mon[77138]: pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 221 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 523 KiB/s wr, 52 op/s
Nov 29 07:42:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:15.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:15 compute-2 podman[236080]: 2025-11-29 07:42:15.694369769 +0000 UTC m=+0.062911712 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:42:15 compute-2 podman[236080]: 2025-11-29 07:42:15.712638562 +0000 UTC m=+0.081180495 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:42:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:15.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:15 compute-2 podman[236147]: 2025-11-29 07:42:15.949429202 +0000 UTC m=+0.061967693 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 07:42:15 compute-2 podman[236147]: 2025-11-29 07:42:15.961759059 +0000 UTC m=+0.074297510 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vcs-type=git, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, version=2.2.4, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Nov 29 07:42:16 compute-2 sudo[235830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.057 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Successfully created port: 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.225 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.225 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:42:16 compute-2 nova_compute[232428]: 2025-11-29 07:42:16.225 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:42:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:17 compute-2 nova_compute[232428]: 2025-11-29 07:42:17.215 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:17.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:17.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.230 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.231 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:42:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/532037690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.718 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.761 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Successfully updated port: 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:42:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.777 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6653 writes, 26K keys, 6653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6653 writes, 1295 syncs, 5.14 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 867 writes, 2256 keys, 867 commit groups, 1.0 writes per commit group, ingest: 2.36 MB, 0.00 MB/s
                                           Interval WAL: 867 writes, 377 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.777 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquired lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.778 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.925 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.926 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5176MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.926 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:18 compute-2 nova_compute[232428]: 2025-11-29 07:42:18.927 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.048 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.049 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 58869189-493b-4d57-acc4-10881f62b251 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.050 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.050 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.120 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.217 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:42:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:42:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4096725968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.551 232432 DEBUG nova.compute.manager [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-changed-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.553 232432 DEBUG nova.compute.manager [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Refreshing instance network info cache due to event network-changed-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.553 232432 DEBUG oslo_concurrency.lockutils [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.554 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.561 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.585 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.614 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:42:19 compute-2 nova_compute[232428]: 2025-11-29 07:42:19.615 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:19.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:20 compute-2 nova_compute[232428]: 2025-11-29 07:42:20.604 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:20 compute-2 nova_compute[232428]: 2025-11-29 07:42:20.605 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:20 compute-2 nova_compute[232428]: 2025-11-29 07:42:20.606 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:20 compute-2 nova_compute[232428]: 2025-11-29 07:42:20.606 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:20 compute-2 nova_compute[232428]: 2025-11-29 07:42:20.606 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:42:20 compute-2 ceph-mon[77138]: pgmap v1128: 305 pgs: 305 active+clean; 98 MiB data, 230 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 1.4 MiB/s wr, 23 op/s
Nov 29 07:42:21 compute-2 nova_compute[232428]: 2025-11-29 07:42:21.290 232432 DEBUG nova.network.neutron [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updating instance_info_cache with network_info: [{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:42:21 compute-2 nova_compute[232428]: 2025-11-29 07:42:21.368 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Releasing lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:21 compute-2 nova_compute[232428]: 2025-11-29 07:42:21.369 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Instance network_info: |[{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:42:21 compute-2 nova_compute[232428]: 2025-11-29 07:42:21.369 232432 DEBUG oslo_concurrency.lockutils [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:21 compute-2 nova_compute[232428]: 2025-11-29 07:42:21.369 232432 DEBUG nova.network.neutron [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Refreshing network info cache for port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:42:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:21.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:22 compute-2 nova_compute[232428]: 2025-11-29 07:42:22.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:22 compute-2 ceph-mon[77138]: pgmap v1129: 305 pgs: 305 active+clean; 124 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 3.0 MiB/s wr, 44 op/s
Nov 29 07:42:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/532037690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:22 compute-2 ceph-mon[77138]: pgmap v1130: 305 pgs: 305 active+clean; 136 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 93 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 29 07:42:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4096725968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:22 compute-2 ceph-mon[77138]: osdmap e145: 3 total, 3 up, 3 in
Nov 29 07:42:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:22 compute-2 ceph-mon[77138]: pgmap v1132: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 389 KiB/s rd, 5.0 MiB/s wr, 84 op/s
Nov 29 07:42:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1455888677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.026 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.090 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.091 232432 DEBUG nova.virt.images [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] 1be11678-cfa4-4dee-b54c-6c7e547e5a6a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.093 232432 DEBUG nova.privsep.utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.093 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.323 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.328 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.388 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.390 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 10.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.426 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.431 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.454 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 9.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.455 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.487 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:23 compute-2 nova_compute[232428]: 2025-11-29 07:42:23.492 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 58869189-493b-4d57-acc4-10881f62b251_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:24 compute-2 sudo[236318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 nova_compute[232428]: 2025-11-29 07:42:24.122 232432 DEBUG nova.network.neutron [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updated VIF entry in instance network info cache for port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:42:24 compute-2 nova_compute[232428]: 2025-11-29 07:42:24.123 232432 DEBUG nova.network.neutron [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updating instance_info_cache with network_info: [{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:42:24 compute-2 sudo[236343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:24 compute-2 sudo[236343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1355188258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:24 compute-2 ceph-mon[77138]: pgmap v1133: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 89 op/s
Nov 29 07:42:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:24 compute-2 nova_compute[232428]: 2025-11-29 07:42:24.251 232432 DEBUG oslo_concurrency.lockutils [req-0126e2f3-1414-4a93-bc04-79ce254e29fb req-e42b0741-1089-4c67-b06e-00f2e4c90088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:24 compute-2 sudo[236393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:42:24 compute-2 sudo[236393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236393]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:24 compute-2 sudo[236524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-2 sudo[236549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-2 sudo[236549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-2 sudo[236549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:25 compute-2 sudo[236574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:42:25 compute-2 sudo[236574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:25 compute-2 sudo[236574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:25.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:25.791 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:42:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:25.792 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:42:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:27.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1872631784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:28 compute-2 podman[236620]: 2025-11-29 07:42:28.701889186 +0000 UTC m=+0.081450234 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:42:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:29.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:29.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:42:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687361141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:42:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:42:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687361141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:42:30 compute-2 nova_compute[232428]: 2025-11-29 07:42:30.477 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 58869189-493b-4d57-acc4-10881f62b251_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.985s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:30 compute-2 ceph-mon[77138]: pgmap v1134: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 89 op/s
Nov 29 07:42:30 compute-2 ceph-mon[77138]: pgmap v1135: 305 pgs: 305 active+clean; 172 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 29 07:42:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2934978996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:30 compute-2 nova_compute[232428]: 2025-11-29 07:42:30.549 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] resizing rbd image 58869189-493b-4d57-acc4-10881f62b251_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:42:31 compute-2 podman[236696]: 2025-11-29 07:42:31.667370371 +0000 UTC m=+0.070627555 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd)
Nov 29 07:42:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:31.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:31.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:31 compute-2 nova_compute[232428]: 2025-11-29 07:42:31.939 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 07:42:31 compute-2 ceph-mon[77138]: pgmap v1136: 305 pgs: 305 active+clean; 179 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Nov 29 07:42:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1452225657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/687361141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:42:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/687361141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:42:31 compute-2 ceph-mon[77138]: pgmap v1137: 305 pgs: 305 active+clean; 226 MiB data, 321 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 29 07:42:31 compute-2 sudo[236716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:31 compute-2 sudo[236716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:31 compute-2 sudo[236716]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:32 compute-2 sudo[236759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:32 compute-2 sudo[236759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:32 compute-2 sudo[236759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:32 compute-2 sudo[236784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:32 compute-2 sudo[236784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:32 compute-2 sudo[236784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:32 compute-2 sudo[236809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:42:32 compute-2 sudo[236809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.217 232432 DEBUG nova.objects.instance [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'migration_context' on Instance uuid 58869189-493b-4d57-acc4-10881f62b251 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.223 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] resizing rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.263 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.263 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Ensure instance console log exists: /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.264 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.264 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.265 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.268 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Start _get_guest_xml network_info=[{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.275 232432 WARNING nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.283 232432 DEBUG nova.virt.libvirt.host [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.284 232432 DEBUG nova.virt.libvirt.host [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.295 232432 DEBUG nova.virt.libvirt.host [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.296 232432 DEBUG nova.virt.libvirt.host [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.298 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.298 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:41:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1888436079',id=4,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1785043179',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.298 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.299 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.299 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.299 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.300 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.300 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.300 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.301 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.301 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.301 232432 DEBUG nova.virt.hardware [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.305 232432 DEBUG nova.privsep.utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.305 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.474 232432 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'migration_context' on Instance uuid 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.503 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.505 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Ensure instance console log exists: /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.506 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.507 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.508 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.601178782 +0000 UTC m=+0.048583443 container create 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:42:32 compute-2 systemd[1]: Started libpod-conmon-2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0.scope.
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.578842472 +0000 UTC m=+0.026247173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:32 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.715075382 +0000 UTC m=+0.162480073 container init 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.726058376 +0000 UTC m=+0.173463037 container start 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.729549694 +0000 UTC m=+0.176954385 container attach 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:42:32 compute-2 cranky_napier[236984]: 167 167
Nov 29 07:42:32 compute-2 systemd[1]: libpod-2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0.scope: Deactivated successfully.
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.737160774 +0000 UTC m=+0.184565445 container died 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:42:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-e811daeba3223b1ede854a42dd876814bf37c8c2ab64b39df66f9656d0c80636-merged.mount: Deactivated successfully.
Nov 29 07:42:32 compute-2 podman[236967]: 2025-11-29 07:42:32.789780772 +0000 UTC m=+0.237185433 container remove 2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:32.793 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:32 compute-2 systemd[1]: libpod-conmon-2665e72b8c2f88a5c2ed9be60b34901e28c82253d0fff9b865fe3ff80162e4d0.scope: Deactivated successfully.
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.814 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.840 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:32 compute-2 nova_compute[232428]: 2025-11-29 07:42:32.846 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:32 compute-2 podman[237030]: 2025-11-29 07:42:32.94768759 +0000 UTC m=+0.038451316 container create e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:42:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:32 compute-2 ceph-mon[77138]: osdmap e146: 3 total, 3 up, 3 in
Nov 29 07:42:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1455864033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/940101142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2586963344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:32 compute-2 systemd[1]: Started libpod-conmon-e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095.scope.
Nov 29 07:42:33 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:42:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e773da1aecfe190268b55fd88ced354ff69a41907f25c1237d8fc287be546c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e773da1aecfe190268b55fd88ced354ff69a41907f25c1237d8fc287be546c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e773da1aecfe190268b55fd88ced354ff69a41907f25c1237d8fc287be546c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e773da1aecfe190268b55fd88ced354ff69a41907f25c1237d8fc287be546c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:33 compute-2 podman[237030]: 2025-11-29 07:42:32.931968898 +0000 UTC m=+0.022732654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:33 compute-2 podman[237030]: 2025-11-29 07:42:33.030620439 +0000 UTC m=+0.121384185 container init e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:42:33 compute-2 podman[237030]: 2025-11-29 07:42:33.038420253 +0000 UTC m=+0.129183979 container start e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:42:33 compute-2 podman[237030]: 2025-11-29 07:42:33.044754822 +0000 UTC m=+0.135518568 container attach e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:42:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:42:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/438021194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.351 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.354 232432 DEBUG nova.virt.libvirt.vif [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-35716146',id=5,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-i18w6ju4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=58869189-493b-4d57-acc4-10881f62b251,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.354 232432 DEBUG nova.network.os_vif_util [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.355 232432 DEBUG nova.network.os_vif_util [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.358 232432 DEBUG nova.objects.instance [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58869189-493b-4d57-acc4-10881f62b251 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.381 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <uuid>58869189-493b-4d57-acc4-10881f62b251</uuid>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <name>instance-00000005</name>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-35716146</nova:name>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:42:32</nova:creationTime>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1785043179">
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:user uuid="221a3978e81b4d679382df9385da9946">tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member</nova:user>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:project uuid="e0953a8181d0404daeae16ad65c53823">tempest-ServersWithSpecificFlavorTestJSON-154328222</nova:project>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <nova:port uuid="7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb">
Nov 29 07:42:33 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <system>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="serial">58869189-493b-4d57-acc4-10881f62b251</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="uuid">58869189-493b-4d57-acc4-10881f62b251</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </system>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <os>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </os>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <features>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </features>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/58869189-493b-4d57-acc4-10881f62b251_disk">
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </source>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/58869189-493b-4d57-acc4-10881f62b251_disk.config">
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </source>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:42:33 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:7e:b6:4a"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <target dev="tap7d8df1ab-c3"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/console.log" append="off"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <video>
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </video>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:42:33 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:42:33 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:42:33 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:42:33 compute-2 nova_compute[232428]: </domain>
Nov 29 07:42:33 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.391 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Preparing to wait for external event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.392 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.393 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.393 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.395 232432 DEBUG nova.virt.libvirt.vif [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-35716146',id=5,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-i18w6ju4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=58869189-493b-4d57-acc4-10881f62b251,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.395 232432 DEBUG nova.network.os_vif_util [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.397 232432 DEBUG nova.network.os_vif_util [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.397 232432 DEBUG os_vif [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.443 232432 DEBUG ovsdbapp.backend.ovs_idl [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.444 232432 DEBUG ovsdbapp.backend.ovs_idl [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.444 232432 DEBUG ovsdbapp.backend.ovs_idl [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.446 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.446 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.447 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.449 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.452 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.466 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.466 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:42:33 compute-2 nova_compute[232428]: 2025-11-29 07:42:33.468 232432 INFO oslo.privsep.daemon [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpb9zee_p_/privsep.sock']
Nov 29 07:42:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:33.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.295 232432 INFO oslo.privsep.daemon [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.112 237094 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.118 237094 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.120 237094 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.120 237094 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237094
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]: [
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:     {
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "available": false,
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "ceph_device": false,
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "lsm_data": {},
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "lvs": [],
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "path": "/dev/sr0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "rejected_reasons": [
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "Insufficient space (<5GB)",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "Has a FileSystem"
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         ],
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         "sys_api": {
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "actuators": null,
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "device_nodes": "sr0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "devname": "sr0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "human_readable_size": "482.00 KB",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "id_bus": "ata",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "model": "QEMU DVD-ROM",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "nr_requests": "2",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "parent": "/dev/sr0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "partitions": {},
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "path": "/dev/sr0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "removable": "1",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "rev": "2.5+",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "ro": "0",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "rotational": "1",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "sas_address": "",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "sas_device_handle": "",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "scheduler_mode": "mq-deadline",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "sectors": 0,
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "sectorsize": "2048",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "size": 493568.0,
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "support_discard": "2048",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "type": "disk",
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:             "vendor": "QEMU"
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:         }
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]:     }
Nov 29 07:42:34 compute-2 sad_mccarthy[237065]: ]
Nov 29 07:42:34 compute-2 systemd[1]: libpod-e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095.scope: Deactivated successfully.
Nov 29 07:42:34 compute-2 systemd[1]: libpod-e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095.scope: Consumed 1.384s CPU time.
Nov 29 07:42:34 compute-2 podman[237030]: 2025-11-29 07:42:34.491401003 +0000 UTC m=+1.582164739 container died e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:42:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-3e773da1aecfe190268b55fd88ced354ff69a41907f25c1237d8fc287be546c8-merged.mount: Deactivated successfully.
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.649 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.650 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d8df1ab-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.651 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d8df1ab-c3, col_values=(('external_ids', {'iface-id': '7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:b6:4a', 'vm-uuid': '58869189-493b-4d57-acc4-10881f62b251'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.653 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:34 compute-2 NetworkManager[48993]: <info>  [1764402154.6550] manager: (tap7d8df1ab-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.663 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.663 232432 INFO os_vif [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3')
Nov 29 07:42:34 compute-2 podman[237030]: 2025-11-29 07:42:34.681386556 +0000 UTC m=+1.772150292 container remove e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:34 compute-2 sudo[236809]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:34 compute-2 systemd[1]: libpod-conmon-e1abdf4bfee56b07dd3801e19627b37ad5cbf0bf06738bf192f9e9f378caf095.scope: Deactivated successfully.
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.798 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.799 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.799 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No VIF found with MAC fa:16:3e:7e:b6:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.800 232432 INFO nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Using config drive
Nov 29 07:42:34 compute-2 nova_compute[232428]: 2025-11-29 07:42:34.831 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:35 compute-2 nova_compute[232428]: 2025-11-29 07:42:35.386 232432 INFO nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Creating config drive at /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config
Nov 29 07:42:35 compute-2 nova_compute[232428]: 2025-11-29 07:42:35.392 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2auo_40 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:35 compute-2 nova_compute[232428]: 2025-11-29 07:42:35.524 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2auo_40" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:35.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:35 compute-2 nova_compute[232428]: 2025-11-29 07:42:35.745 232432 DEBUG nova.storage.rbd_utils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 58869189-493b-4d57-acc4-10881f62b251_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:35 compute-2 nova_compute[232428]: 2025-11-29 07:42:35.749 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config 58869189-493b-4d57-acc4-10881f62b251_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:35.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3316083430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:42:35 compute-2 ceph-mon[77138]: pgmap v1139: 305 pgs: 305 active+clean; 251 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 74 op/s
Nov 29 07:42:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/438021194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:36 compute-2 nova_compute[232428]: 2025-11-29 07:42:36.380 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Automatically allocated network: {'id': '6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'name': 'auto_allocated_network', 'tenant_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['54f77d9b-4fc0-4513-9e8e-0b66d5a5d1b2', 'd3409058-7381-4024-9d79-5f6d3aec308c'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:42:14Z', 'updated_at': '2025-11-29T07:42:25Z', 'revision_number': 4, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Nov 29 07:42:36 compute-2 nova_compute[232428]: 2025-11-29 07:42:36.382 232432 DEBUG nova.policy [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cf2495f54add463c8ce9d2dd8623347c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:42:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:37 compute-2 nova_compute[232428]: 2025-11-29 07:42:37.278 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Successfully created port: 07f2a066-f271-4ee3-b719-aa65b4dda724 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:42:37 compute-2 ceph-mon[77138]: pgmap v1140: 305 pgs: 305 active+clean; 298 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.2 MiB/s wr, 122 op/s
Nov 29 07:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:37.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:37 compute-2 nova_compute[232428]: 2025-11-29 07:42:37.740 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.045 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Successfully updated port: 07f2a066-f271-4ee3-b719-aa65b4dda724 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.237 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.237 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquired lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.237 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.639 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.655 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:39.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.752 232432 DEBUG nova.compute.manager [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-changed-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.752 232432 DEBUG nova.compute.manager [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Refreshing instance network info cache due to event network-changed-07f2a066-f271-4ee3-b719-aa65b4dda724. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:42:39 compute-2 nova_compute[232428]: 2025-11-29 07:42:39.753 232432 DEBUG oslo_concurrency.lockutils [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:40 compute-2 ceph-mon[77138]: pgmap v1141: 305 pgs: 305 active+clean; 342 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 133 KiB/s rd, 7.9 MiB/s wr, 112 op/s
Nov 29 07:42:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:41.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:41.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:42 compute-2 ceph-mon[77138]: pgmap v1142: 305 pgs: 305 active+clean; 352 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 118 KiB/s rd, 7.9 MiB/s wr, 107 op/s
Nov 29 07:42:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:42 compute-2 ceph-mon[77138]: pgmap v1143: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 6.2 MiB/s wr, 111 op/s
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.226621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162226705, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 257, "total_data_size": 3424755, "memory_usage": 3472544, "flush_reason": "Manual Compaction"}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162274692, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1527264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20716, "largest_seqno": 22084, "table_properties": {"data_size": 1521991, "index_size": 2541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13628, "raw_average_key_size": 21, "raw_value_size": 1510587, "raw_average_value_size": 2352, "num_data_blocks": 111, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402018, "oldest_key_time": 1764402018, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 48127 microseconds, and 9731 cpu microseconds.
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.274748) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1527264 bytes OK
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.274781) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.277891) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.277913) EVENT_LOG_v1 {"time_micros": 1764402162277906, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.277930) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3418017, prev total WAL file size 3418017, number of live WAL files 2.
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.278992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373630' seq:0, type:0; will stop at (end)
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1491KB)], [39(10237KB)]
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162279081, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 12010712, "oldest_snapshot_seqno": -1}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4989 keys, 8704768 bytes, temperature: kUnknown
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162357697, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8704768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8671679, "index_size": 19539, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125950, "raw_average_key_size": 25, "raw_value_size": 8581435, "raw_average_value_size": 1720, "num_data_blocks": 803, "num_entries": 4989, "num_filter_entries": 4989, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.358087) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8704768 bytes
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.359477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.5 rd, 110.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(13.6) write-amplify(5.7) OK, records in: 5481, records dropped: 492 output_compression: NoCompression
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.359503) EVENT_LOG_v1 {"time_micros": 1764402162359489, "job": 22, "event": "compaction_finished", "compaction_time_micros": 78777, "compaction_time_cpu_micros": 46307, "output_level": 6, "num_output_files": 1, "total_output_size": 8704768, "num_input_records": 5481, "num_output_records": 4989, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162359965, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162362349, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.278867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.362472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.362481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.362484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.362488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:42:42.362491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:42:42 compute-2 nova_compute[232428]: 2025-11-29 07:42:42.738 232432 DEBUG oslo_concurrency.processutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config 58869189-493b-4d57-acc4-10881f62b251_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.988s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:42 compute-2 nova_compute[232428]: 2025-11-29 07:42:42.739 232432 INFO nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Deleting local config drive /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251/disk.config because it was imported into RBD.
Nov 29 07:42:42 compute-2 nova_compute[232428]: 2025-11-29 07:42:42.741 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:42 compute-2 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:42:42 compute-2 systemd[1]: Started libvirt secret daemon.
Nov 29 07:42:42 compute-2 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 07:42:42 compute-2 kernel: tap7d8df1ab-c3: entered promiscuous mode
Nov 29 07:42:42 compute-2 NetworkManager[48993]: <info>  [1764402162.9275] manager: (tap7d8df1ab-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 07:42:42 compute-2 ovn_controller[134375]: 2025-11-29T07:42:42Z|00027|binding|INFO|Claiming lport 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb for this chassis.
Nov 29 07:42:42 compute-2 ovn_controller[134375]: 2025-11-29T07:42:42Z|00028|binding|INFO|7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb: Claiming fa:16:3e:7e:b6:4a 10.100.0.10
Nov 29 07:42:42 compute-2 nova_compute[232428]: 2025-11-29 07:42:42.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:42 compute-2 systemd-udevd[238372]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:42:42 compute-2 nova_compute[232428]: 2025-11-29 07:42:42.975 232432 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Updating instance_info_cache with network_info: [{"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.017 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.023 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:43 compute-2 ovn_controller[134375]: 2025-11-29T07:42:43Z|00029|binding|INFO|Setting lport 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb ovn-installed in OVS
Nov 29 07:42:43 compute-2 NetworkManager[48993]: <info>  [1764402163.0310] device (tap7d8df1ab-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.029 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:43 compute-2 NetworkManager[48993]: <info>  [1764402163.0334] device (tap7d8df1ab-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:42:43 compute-2 systemd-machined[194747]: New machine qemu-1-instance-00000005.
Nov 29 07:42:43 compute-2 systemd[1]: Started Virtual Machine qemu-1-instance-00000005.
Nov 29 07:42:43 compute-2 ovn_controller[134375]: 2025-11-29T07:42:43Z|00030|binding|INFO|Setting lport 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb up in Southbound
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.145 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:b6:4a 10.100.0.10'], port_security=['fa:16:3e:7e:b6:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '58869189-493b-4d57-acc4-10881f62b251', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0953a8181d0404daeae16ad65c53823', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62f7d9fe-d5cc-4772-b80b-035b36a2adf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa2fa06-fbfc-41d7-aa49-493c7e8260ef, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.147 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb in datapath 6fd1b7b4-729b-4264-a69c-0cec37de984c bound to our chassis
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.149 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.151 143801 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpp406nz8y/privsep.sock']
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.233 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Releasing lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.234 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Instance network_info: |[{"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.234 232432 DEBUG oslo_concurrency.lockutils [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.234 232432 DEBUG nova.network.neutron [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Refreshing network info cache for port 07f2a066-f271-4ee3-b719-aa65b4dda724 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.237 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Start _get_guest_xml network_info=[{"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.243 232432 WARNING nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.248 232432 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.249 232432 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.253 232432 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.253 232432 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.255 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.255 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.255 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.256 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.256 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.256 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.256 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.257 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.257 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.257 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.257 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.257 232432 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.260 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:42:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.685 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402163.684641, 58869189-493b-4d57-acc4-10881f62b251 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.687 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] VM Started (Lifecycle Event)
Nov 29 07:42:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:42:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3335667428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:43.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.722 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.747 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.753 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:43.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.779 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.784 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402163.686544, 58869189-493b-4d57-acc4-10881f62b251 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:43 compute-2 nova_compute[232428]: 2025-11-29 07:42:43.784 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] VM Paused (Lifecycle Event)
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.955 143801 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.956 143801 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp406nz8y/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.808 238475 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.813 238475 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.815 238475 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.815 238475 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238475
Nov 29 07:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:43.959 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a00d443a-5d2b-42ff-bd44-92c5f72cc27c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.019 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.022 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:42:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:42:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/573872820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.389 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.391 232432 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-1',id=2,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:12Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a66ef22-a889-44ba-a9b5-cc657a2b00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.391 232432 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.392 232432 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.394 232432 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.432 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.569 232432 DEBUG nova.compute.manager [req-d5569983-ce0b-4bdd-8d76-71e2aaa8bb35 req-d9d8f864-aae6-4e55-b5a6-8a5a09d92308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.570 232432 DEBUG oslo_concurrency.lockutils [req-d5569983-ce0b-4bdd-8d76-71e2aaa8bb35 req-d9d8f864-aae6-4e55-b5a6-8a5a09d92308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.570 232432 DEBUG oslo_concurrency.lockutils [req-d5569983-ce0b-4bdd-8d76-71e2aaa8bb35 req-d9d8f864-aae6-4e55-b5a6-8a5a09d92308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.571 232432 DEBUG oslo_concurrency.lockutils [req-d5569983-ce0b-4bdd-8d76-71e2aaa8bb35 req-d9d8f864-aae6-4e55-b5a6-8a5a09d92308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.571 232432 DEBUG nova.compute.manager [req-d5569983-ce0b-4bdd-8d76-71e2aaa8bb35 req-d9d8f864-aae6-4e55-b5a6-8a5a09d92308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Processing event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.572 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.580 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <uuid>7a66ef22-a889-44ba-a9b5-cc657a2b00b8</uuid>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <name>instance-00000002</name>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:name>tempest-tempest.common.compute-instance-1667508244-1</nova:name>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:42:43</nova:creationTime>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:user uuid="cf2495f54add463c8ce9d2dd8623347c">tempest-AutoAllocateNetworkTest-752491155-project-member</nova:user>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:project uuid="0d3a6ccbb2794f6e85d683953ac4b5fd">tempest-AutoAllocateNetworkTest-752491155</nova:project>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <nova:port uuid="07f2a066-f271-4ee3-b719-aa65b4dda724">
Nov 29 07:42:44 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="fdfe:381f:8400::19" ipVersion="6"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.0.60" ipVersion="4"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <system>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="serial">7a66ef22-a889-44ba-a9b5-cc657a2b00b8</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="uuid">7a66ef22-a889-44ba-a9b5-cc657a2b00b8</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </system>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <os>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </os>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <features>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </features>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk">
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </source>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config">
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </source>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:42:44 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:d6:df:17"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <target dev="tap07f2a066-f2"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/console.log" append="off"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <video>
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </video>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:42:44 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:42:44 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:42:44 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:42:44 compute-2 nova_compute[232428]: </domain>
Nov 29 07:42:44 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.584 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Preparing to wait for external event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.585 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.585 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.585 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.586 232432 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-1',id=2,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:12Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a66ef22-a889-44ba-a9b5-cc657a2b00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.587 232432 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.588 232432 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.588 232432 DEBUG os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.590 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.590 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.596 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07f2a066-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.597 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07f2a066-f2, col_values=(('external_ids', {'iface-id': '07f2a066-f271-4ee3-b719-aa65b4dda724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:df:17', 'vm-uuid': '7a66ef22-a889-44ba-a9b5-cc657a2b00b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.599 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402164.595929, 58869189-493b-4d57-acc4-10881f62b251 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.599 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] VM Resumed (Lifecycle Event)
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.601 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:44 compute-2 NetworkManager[48993]: <info>  [1764402164.6384] manager: (tap07f2a066-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.645 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.646 232432 INFO os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2')
Nov 29 07:42:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:44.646 238475 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:44.646 238475 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:44.646 238475 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.656 232432 INFO nova.virt.libvirt.driver [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] Instance spawned successfully.
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.657 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.762 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:44 compute-2 podman[238501]: 2025-11-29 07:42:44.767169228 +0000 UTC m=+0.108908443 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:42:44 compute-2 nova_compute[232428]: 2025-11-29 07:42:44.776 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:42:44 compute-2 sudo[238528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:44 compute-2 sudo[238528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:44 compute-2 sudo[238528]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:44 compute-2 sudo[238557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:44 compute-2 sudo[238557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:44 compute-2 sudo[238557]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:44 compute-2 ceph-mon[77138]: pgmap v1144: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 5.5 MiB/s wr, 99 op/s
Nov 29 07:42:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3335667428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/573872820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.027 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.027 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.028 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.028 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.028 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.029 232432 DEBUG nova.virt.libvirt.driver [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.095 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.095 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.095 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No VIF found with MAC fa:16:3e:d6:df:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.096 232432 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Using config drive
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.124 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.133 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.134 232432 INFO nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Took 31.16 seconds to spawn the instance on the hypervisor.
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.136 232432 DEBUG nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.209 232432 INFO nova.compute.manager [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Took 33.62 seconds to build instance.
Nov 29 07:42:45 compute-2 nova_compute[232428]: 2025-11-29 07:42:45.230 232432 DEBUG oslo_concurrency.lockutils [None req-7a2eeffa-cbb0-4439-9cf6-d04efd5866b4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.443 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d23ee1-a810-41f6-9828-5c29f5045556]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.444 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fd1b7b4-71 in ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.446 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fd1b7b4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.446 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[27496126-2478-4afb-a3b0-870a5ae6f745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.450 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4e016b6e-eca5-4481-8142-64926a4315da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.485 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0a4843-2068-4776-b5a6-5b3276f28463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.519 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[240afac3-933e-4584-a625-f06a6dbd9a2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:45.522 143801 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpmw9xuvgo/privsep.sock']
Nov 29 07:42:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.071 232432 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Creating config drive at /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.077 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnk8glv4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.210 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnk8glv4i" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.243 232432 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.248 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.306 143801 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.307 143801 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpmw9xuvgo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.139 238613 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.145 238613 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.147 238613 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.148 238613 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238613
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.310 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ff4ad8-a0a1-4d5c-a8b4-d556d705af07]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.346 232432 DEBUG nova.network.neutron [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Updated VIF entry in instance network info cache for port 07f2a066-f271-4ee3-b719-aa65b4dda724. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.348 232432 DEBUG nova.network.neutron [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Updating instance_info_cache with network_info: [{"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.377 232432 DEBUG oslo_concurrency.lockutils [req-955dff5f-e35f-49e7-8442-f3c0c7e6db4c req-c85d10c3-687f-4727-854c-5b0d5668fe53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7a66ef22-a889-44ba-a9b5-cc657a2b00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.710 232432 DEBUG nova.compute.manager [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.711 232432 DEBUG oslo_concurrency.lockutils [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.712 232432 DEBUG oslo_concurrency.lockutils [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.712 232432 DEBUG oslo_concurrency.lockutils [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.713 232432 DEBUG nova.compute.manager [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] No waiting events found dispatching network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:42:46 compute-2 nova_compute[232428]: 2025-11-29 07:42:46.714 232432 WARNING nova.compute.manager [req-d41b2f57-31db-4fb4-8214-c60579e58666 req-420a136c-dcfb-4e5f-b4f9-4aa8e7c51e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received unexpected event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb for instance with vm_state active and task_state None.
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.910 238613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.910 238613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:46.910 238613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.569 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[89d7c59e-9c5b-4d83-9b8b-87219c0a440e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.590 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[25134261-0027-4ade-a1c9-3f7432768d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 NetworkManager[48993]: <info>  [1764402167.5927] manager: (tap6fd1b7b4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 07:42:47 compute-2 systemd-udevd[238665]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.640 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f7db2557-5dfa-48d6-85c9-5f6568ad6ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.644 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4853db-cf11-4d0f-9692-e6070510b61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 NetworkManager[48993]: <info>  [1764402167.6773] device (tap6fd1b7b4-70): carrier: link connected
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.682 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[612e2201-985f-4345-9cea-231e9d8911ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.708 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9106ecca-ea43-408c-b266-c46345cbc004]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fd1b7b4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:9c:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513394, 'reachable_time': 30235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238685, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.733 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[374b128d-21d2-48ba-9c0e-0e63ca29e880]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:9ce1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513394, 'tstamp': 513394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238686, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.756 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[def17e39-7494-48fc-88fb-911dae6a3502]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fd1b7b4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:9c:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513394, 'reachable_time': 30235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238687, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:47.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.802 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b59484c-2c28-45fa-9379-a6b1c1af0a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.889 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[238002ae-12ec-4ee2-89ab-59602d895c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.891 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fd1b7b4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.891 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.891 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fd1b7b4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 NetworkManager[48993]: <info>  [1764402167.9362] manager: (tap6fd1b7b4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 07:42:47 compute-2 kernel: tap6fd1b7b4-70: entered promiscuous mode
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.942 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.943 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fd1b7b4-70, col_values=(('external_ids', {'iface-id': 'a8a25a5f-1fe9-4fe3-8e65-adbb9f372b77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.944 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 ovn_controller[134375]: 2025-11-29T07:42:47Z|00031|binding|INFO|Releasing lport a8a25a5f-1fe9-4fe3-8e65-adbb9f372b77 from this chassis (sb_readonly=0)
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.961 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 nova_compute[232428]: 2025-11-29 07:42:47.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.967 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.968 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43570934-b8fd-470b-8271-695e84d90118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.970 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:42:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:47.971 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'env', 'PROCESS_TAG=haproxy-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fd1b7b4-729b-4264-a69c-0cec37de984c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:42:48 compute-2 podman[238719]: 2025-11-29 07:42:48.468102419 +0000 UTC m=+0.073569547 container create 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:42:48 compute-2 systemd[1]: Started libpod-conmon-2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e.scope.
Nov 29 07:42:48 compute-2 podman[238719]: 2025-11-29 07:42:48.425622608 +0000 UTC m=+0.031089766 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:42:48 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:42:48 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea42e98ddcdb0b461ebedabecf5149e4521e0ccbb83fe62914a8bc17d04741ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:48 compute-2 podman[238719]: 2025-11-29 07:42:48.576275678 +0000 UTC m=+0.181742806 container init 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:42:48 compute-2 podman[238719]: 2025-11-29 07:42:48.584859437 +0000 UTC m=+0.190326545 container start 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 07:42:48 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [NOTICE]   (238738) : New worker (238740) forked
Nov 29 07:42:48 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [NOTICE]   (238738) : Loading success.
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.075 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0829] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0833] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0855] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0858] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0882] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0895] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0902] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:42:49 compute-2 NetworkManager[48993]: <info>  [1764402169.0907] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.319 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:49 compute-2 ovn_controller[134375]: 2025-11-29T07:42:49Z|00032|binding|INFO|Releasing lport a8a25a5f-1fe9-4fe3-8e65-adbb9f372b77 from this chassis (sb_readonly=0)
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.372 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.478 232432 DEBUG nova.compute.manager [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-changed-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.479 232432 DEBUG nova.compute.manager [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Refreshing instance network info cache due to event network-changed-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.479 232432 DEBUG oslo_concurrency.lockutils [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.479 232432 DEBUG oslo_concurrency.lockutils [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.479 232432 DEBUG nova.network.neutron [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Refreshing network info cache for port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:42:49 compute-2 nova_compute[232428]: 2025-11-29 07:42:49.638 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:49.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:51 compute-2 nova_compute[232428]: 2025-11-29 07:42:51.123 232432 DEBUG nova.network.neutron [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updated VIF entry in instance network info cache for port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:42:51 compute-2 nova_compute[232428]: 2025-11-29 07:42:51.123 232432 DEBUG nova.network.neutron [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updating instance_info_cache with network_info: [{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:42:51 compute-2 nova_compute[232428]: 2025-11-29 07:42:51.272 232432 DEBUG oslo_concurrency.lockutils [req-78116d0c-7cae-43a7-a89c-d1e35bcd6e95 req-37d726e9-edb1-4887-8889-516b9e81ec8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:51.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:52 compute-2 nova_compute[232428]: 2025-11-29 07:42:52.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:42:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:42:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.671 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:54 compute-2 ceph-mon[77138]: pgmap v1145: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 725 KiB/s rd, 3.7 MiB/s wr, 107 op/s
Nov 29 07:42:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3513594248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.717 232432 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config 7a66ef22-a889-44ba-a9b5-cc657a2b00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.718 232432 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Deleting local config drive /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8/disk.config because it was imported into RBD.
Nov 29 07:42:54 compute-2 NetworkManager[48993]: <info>  [1764402174.8135] manager: (tap07f2a066-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 07:42:54 compute-2 kernel: tap07f2a066-f2: entered promiscuous mode
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.816 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:54 compute-2 ovn_controller[134375]: 2025-11-29T07:42:54Z|00033|binding|INFO|Claiming lport 07f2a066-f271-4ee3-b719-aa65b4dda724 for this chassis.
Nov 29 07:42:54 compute-2 ovn_controller[134375]: 2025-11-29T07:42:54Z|00034|binding|INFO|07f2a066-f271-4ee3-b719-aa65b4dda724: Claiming fa:16:3e:d6:df:17 10.1.0.60 fdfe:381f:8400::19
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.826 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:df:17 10.1.0.60 fdfe:381f:8400::19'], port_security=['fa:16:3e:d6:df:17 10.1.0.60 fdfe:381f:8400::19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.60/26 fdfe:381f:8400::19/64', 'neutron:device_id': '7a66ef22-a889-44ba-a9b5-cc657a2b00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=07f2a066-f271-4ee3-b719-aa65b4dda724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.828 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 07f2a066-f271-4ee3-b719-aa65b4dda724 in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c bound to our chassis
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.831 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 07:42:54 compute-2 ovn_controller[134375]: 2025-11-29T07:42:54Z|00035|binding|INFO|Setting lport 07f2a066-f271-4ee3-b719-aa65b4dda724 ovn-installed in OVS
Nov 29 07:42:54 compute-2 ovn_controller[134375]: 2025-11-29T07:42:54Z|00036|binding|INFO|Setting lport 07f2a066-f271-4ee3-b719-aa65b4dda724 up in Southbound
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.849 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[05cb6c42-8b2e-44af-aaa1-534328499e59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.854 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c117dd1-51 in ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.859 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c117dd1-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.859 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[06dd4e90-ad33-4fd3-85f4-c653f9d196ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 nova_compute[232428]: 2025-11-29 07:42:54.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.867 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[baaeb079-5a8d-4d2b-8131-843c7f358463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 systemd-udevd[238768]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:42:54 compute-2 systemd-machined[194747]: New machine qemu-2-instance-00000002.
Nov 29 07:42:54 compute-2 NetworkManager[48993]: <info>  [1764402174.8901] device (tap07f2a066-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:42:54 compute-2 NetworkManager[48993]: <info>  [1764402174.8915] device (tap07f2a066-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:42:54 compute-2 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.906 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb6fe60-717d-476f-aa37-1fa7c3dfbe1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.936 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eccc14c1-afb0-47b4-8611-f80cc106c0ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.983 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e6c667-35bb-4f06-b94f-2a883acba815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:54 compute-2 NetworkManager[48993]: <info>  [1764402174.9920] manager: (tap6c117dd1-50): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 07:42:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:54.993 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8679fa3f-f5ee-4ee9-888a-bdc3ea3a0f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.047 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[49735e87-a72a-43e1-b461-70037b26a2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.051 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2614db-6063-42c6-bb5e-eaea009a1e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 NetworkManager[48993]: <info>  [1764402175.0850] device (tap6c117dd1-50): carrier: link connected
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.093 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9829b4-54b7-4408-9ed4-efb11749793b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.112 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[064686e6-686f-4b5d-b5c3-891adb9dfe8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514135, 'reachable_time': 18071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238803, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.139 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6558544c-6475-4203-93ff-3541215398b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e465'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514135, 'tstamp': 514135}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238804, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.168 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0523db79-c6e6-4852-bbfb-40d5d23c8cd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514135, 'reachable_time': 18071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238806, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.213 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[79191c45-9208-4068-83fd-bd0bd5e75189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.282 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c11f9ac1-defb-4ec1-8a9e-73527689b8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.283 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.283 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.284 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c117dd1-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:55 compute-2 NetworkManager[48993]: <info>  [1764402175.2868] manager: (tap6c117dd1-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 07:42:55 compute-2 kernel: tap6c117dd1-50: entered promiscuous mode
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.290 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c117dd1-50, col_values=(('external_ids', {'iface-id': '32f6a270-f2be-48b5-9316-7ff23d26e5c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:55 compute-2 ovn_controller[134375]: 2025-11-29T07:42:55Z|00037|binding|INFO|Releasing lport 32f6a270-f2be-48b5-9316-7ff23d26e5c2 from this chassis (sb_readonly=0)
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.293 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.294 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.295 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a268685c-4553-4473-802d-b74748bbfa0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.295 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:42:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:42:55.296 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'env', 'PROCESS_TAG=haproxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c117dd1-5064-4e69-b07c-c93c3d729d3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.644 232432 DEBUG nova.compute.manager [req-00fef796-abae-49a7-8a3a-b5741ed802e8 req-ef249e43-62e1-4564-955a-634cb9fda5a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.645 232432 DEBUG oslo_concurrency.lockutils [req-00fef796-abae-49a7-8a3a-b5741ed802e8 req-ef249e43-62e1-4564-955a-634cb9fda5a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.645 232432 DEBUG oslo_concurrency.lockutils [req-00fef796-abae-49a7-8a3a-b5741ed802e8 req-ef249e43-62e1-4564-955a-634cb9fda5a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.646 232432 DEBUG oslo_concurrency.lockutils [req-00fef796-abae-49a7-8a3a-b5741ed802e8 req-ef249e43-62e1-4564-955a-634cb9fda5a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:55 compute-2 nova_compute[232428]: 2025-11-29 07:42:55.646 232432 DEBUG nova.compute.manager [req-00fef796-abae-49a7-8a3a-b5741ed802e8 req-ef249e43-62e1-4564-955a-634cb9fda5a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Processing event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:42:55 compute-2 podman[238838]: 2025-11-29 07:42:55.715783058 +0000 UTC m=+0.060380544 container create dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:42:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:55 compute-2 systemd[1]: Started libpod-conmon-dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af.scope.
Nov 29 07:42:55 compute-2 podman[238838]: 2025-11-29 07:42:55.685099857 +0000 UTC m=+0.029697393 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:42:55 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:42:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb8e75c344367a69cd8155acc00d6a1bd4be736eb165c3ac943a63f7372af1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:55 compute-2 podman[238838]: 2025-11-29 07:42:55.827181049 +0000 UTC m=+0.171778555 container init dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:42:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:55.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:55 compute-2 podman[238838]: 2025-11-29 07:42:55.835167349 +0000 UTC m=+0.179764835 container start dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:42:55 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [NOTICE]   (238864) : New worker (238866) forked
Nov 29 07:42:55 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [NOTICE]   (238864) : Loading success.
Nov 29 07:42:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:42:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.003000095s ======
Nov 29 07:42:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000095s
Nov 29 07:42:57 compute-2 nova_compute[232428]: 2025-11-29 07:42:57.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:57.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:58 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 07:42:59 compute-2 podman[238911]: 2025-11-29 07:42:59.633692837 +0000 UTC m=+0.092885972 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.672 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402179.6716692, 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.673 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] VM Started (Lifecycle Event)
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.676 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.683 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.688 232432 INFO nova.virt.libvirt.driver [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Instance spawned successfully.
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.688 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.715 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.724 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.730 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.731 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.732 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.733 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.734 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.734 232432 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:42:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:42:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:59.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.763 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.763 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402179.675066, 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.764 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] VM Paused (Lifecycle Event)
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.793 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.798 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402179.6828513, 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.798 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] VM Resumed (Lifecycle Event)
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.821 232432 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Took 47.28 seconds to spawn the instance on the hypervisor.
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.823 232432 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.825 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:42:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:42:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:42:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:59.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.838 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.875 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.928 232432 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Took 48.95 seconds to build instance.
Nov 29 07:42:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/78130158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1819114952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4000369868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:42:59 compute-2 ceph-mon[77138]: pgmap v1146: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 07:42:59 compute-2 ceph-mon[77138]: pgmap v1147: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 458 KiB/s wr, 125 op/s
Nov 29 07:42:59 compute-2 ceph-mon[77138]: pgmap v1148: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 162 op/s
Nov 29 07:42:59 compute-2 ceph-mon[77138]: pgmap v1149: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 145 op/s
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.949 232432 DEBUG nova.compute.manager [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.949 232432 DEBUG oslo_concurrency.lockutils [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.950 232432 DEBUG oslo_concurrency.lockutils [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.950 232432 DEBUG oslo_concurrency.lockutils [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.951 232432 DEBUG nova.compute.manager [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] No waiting events found dispatching network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.951 232432 WARNING nova.compute.manager [req-bdd052b7-7338-4fd4-82cc-ea74010bacba req-b104240e-a8bd-4c73-905c-1ed72a8dff9e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received unexpected event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 for instance with vm_state active and task_state None.
Nov 29 07:42:59 compute-2 nova_compute[232428]: 2025-11-29 07:42:59.959 232432 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 49.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:01.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:02 compute-2 podman[238934]: 2025-11-29 07:43:02.691629149 +0000 UTC m=+0.089005450 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:43:02 compute-2 nova_compute[232428]: 2025-11-29 07:43:02.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:03.289 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:03.290 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:03.291 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:03 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 07:43:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:03.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:03.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:04 compute-2 nova_compute[232428]: 2025-11-29 07:43:04.706 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:05 compute-2 sudo[238955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:05 compute-2 sudo[238955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:05 compute-2 sudo[238955]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:05 compute-2 sudo[238980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:05 compute-2 sudo[238980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:05 compute-2 sudo[238980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:43:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:05.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:43:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:05.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:06 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:43:07 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 07:43:07 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 07:43:07 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 07:43:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:07.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:07 compute-2 nova_compute[232428]: 2025-11-29 07:43:07.757 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:07 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 07:43:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:08 compute-2 ceph-mon[77138]: pgmap v1150: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 KiB/s wr, 148 op/s
Nov 29 07:43:08 compute-2 ceph-mon[77138]: pgmap v1151: 305 pgs: 305 active+clean; 353 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 14 KiB/s wr, 123 op/s
Nov 29 07:43:08 compute-2 ceph-mon[77138]: pgmap v1152: 305 pgs: 305 active+clean; 353 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 75 op/s
Nov 29 07:43:09 compute-2 nova_compute[232428]: 2025-11-29 07:43:09.709 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:09.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:10 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 07:43:10 compute-2 ceph-mon[77138]: pgmap v1153: 305 pgs: 305 active+clean; 356 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 974 KiB/s wr, 65 op/s
Nov 29 07:43:10 compute-2 ceph-mon[77138]: pgmap v1154: 305 pgs: 305 active+clean; 369 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Nov 29 07:43:10 compute-2 ceph-mon[77138]: pgmap v1155: 305 pgs: 305 active+clean; 373 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 259 KiB/s rd, 2.5 MiB/s wr, 68 op/s
Nov 29 07:43:10 compute-2 ceph-mon[77138]: pgmap v1156: 305 pgs: 305 active+clean; 373 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Nov 29 07:43:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:11.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:11.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:12 compute-2 nova_compute[232428]: 2025-11-29 07:43:12.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:13.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:13 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 07:43:14 compute-2 ceph-mon[77138]: pgmap v1157: 305 pgs: 305 active+clean; 388 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 124 op/s
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.189 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.190 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.191 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.191 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.191 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.193 232432 INFO nova.compute.manager [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Terminating instance
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.194 232432 DEBUG nova.compute.manager [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:43:14 compute-2 kernel: tap07f2a066-f2 (unregistering): left promiscuous mode
Nov 29 07:43:14 compute-2 NetworkManager[48993]: <info>  [1764402194.4458] device (tap07f2a066-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 ovn_controller[134375]: 2025-11-29T07:43:14Z|00038|binding|INFO|Releasing lport 07f2a066-f271-4ee3-b719-aa65b4dda724 from this chassis (sb_readonly=0)
Nov 29 07:43:14 compute-2 ovn_controller[134375]: 2025-11-29T07:43:14Z|00039|binding|INFO|Setting lport 07f2a066-f271-4ee3-b719-aa65b4dda724 down in Southbound
Nov 29 07:43:14 compute-2 ovn_controller[134375]: 2025-11-29T07:43:14Z|00040|binding|INFO|Removing iface tap07f2a066-f2 ovn-installed in OVS
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.527 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:df:17 10.1.0.60 fdfe:381f:8400::19'], port_security=['fa:16:3e:d6:df:17 10.1.0.60 fdfe:381f:8400::19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.60/26 fdfe:381f:8400::19/64', 'neutron:device_id': '7a66ef22-a889-44ba-a9b5-cc657a2b00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=07f2a066-f271-4ee3-b719-aa65b4dda724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.529 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 07f2a066-f271-4ee3-b719-aa65b4dda724 in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c unbound from our chassis
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.531 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.532 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a401911b-e86d-4465-8b56-edaa4e387a89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.533 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace which is not needed anymore
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 29 07:43:14 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 12.893s CPU time.
Nov 29 07:43:14 compute-2 systemd-machined[194747]: Machine qemu-2-instance-00000002 terminated.
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.737 232432 DEBUG nova.compute.manager [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-unplugged-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.737 232432 DEBUG oslo_concurrency.lockutils [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.738 232432 DEBUG oslo_concurrency.lockutils [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.738 232432 DEBUG oslo_concurrency.lockutils [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.738 232432 DEBUG nova.compute.manager [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] No waiting events found dispatching network-vif-unplugged-07f2a066-f271-4ee3-b719-aa65b4dda724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.739 232432 DEBUG nova.compute.manager [req-34a3a1cb-c9f5-4437-b69f-ea63eb6aec88 req-26251ce5-8ae6-480e-9ebc-28a7db6aacfc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-unplugged-07f2a066-f271-4ee3-b719-aa65b4dda724 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [NOTICE]   (238864) : haproxy version is 2.8.14-c23fe91
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [NOTICE]   (238864) : path to executable is /usr/sbin/haproxy
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [WARNING]  (238864) : Exiting Master process...
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [WARNING]  (238864) : Exiting Master process...
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [ALERT]    (238864) : Current worker (238866) exited with code 143 (Terminated)
Nov 29 07:43:14 compute-2 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[238860]: [WARNING]  (238864) : All workers exited. Exiting... (0)
Nov 29 07:43:14 compute-2 systemd[1]: libpod-dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af.scope: Deactivated successfully.
Nov 29 07:43:14 compute-2 podman[239035]: 2025-11-29 07:43:14.771996321 +0000 UTC m=+0.069406465 container died dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:43:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-0eb8e75c344367a69cd8155acc00d6a1bd4be736eb165c3ac943a63f7372af1d-merged.mount: Deactivated successfully.
Nov 29 07:43:14 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af-userdata-shm.mount: Deactivated successfully.
Nov 29 07:43:14 compute-2 podman[239035]: 2025-11-29 07:43:14.821059758 +0000 UTC m=+0.118469882 container cleanup dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:43:14 compute-2 systemd[1]: libpod-conmon-dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af.scope: Deactivated successfully.
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.849 232432 INFO nova.virt.libvirt.driver [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Instance destroyed successfully.
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.851 232432 DEBUG nova.objects.instance [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'resources' on Instance uuid 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.871 232432 DEBUG nova.virt.libvirt.vif [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-1',id=2,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:42:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:42:59Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a66ef22-a889-44ba-a9b5-cc657a2b00b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.872 232432 DEBUG nova.network.os_vif_util [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "07f2a066-f271-4ee3-b719-aa65b4dda724", "address": "fa:16:3e:d6:df:17", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07f2a066-f2", "ovs_interfaceid": "07f2a066-f271-4ee3-b719-aa65b4dda724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.873 232432 DEBUG nova.network.os_vif_util [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.873 232432 DEBUG os_vif [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.875 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.875 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07f2a066-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.886 232432 INFO os_vif [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:df:17,bridge_name='br-int',has_traffic_filtering=True,id=07f2a066-f271-4ee3-b719-aa65b4dda724,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07f2a066-f2')
Nov 29 07:43:14 compute-2 podman[239078]: 2025-11-29 07:43:14.918528423 +0000 UTC m=+0.064149721 container remove dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.924 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3f9147-be8f-4e32-9649-7b032bc927ff]: (4, ('Sat Nov 29 07:43:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af)\ndd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af\nSat Nov 29 07:43:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (dd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af)\ndd4b83d79aed0aed70dcdfd3241a4afb59867d6c20232595d48fea7c896896af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.926 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3181b04e-ec8e-426a-929c-36be249734e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.928 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 kernel: tap6c117dd1-50: left promiscuous mode
Nov 29 07:43:14 compute-2 nova_compute[232428]: 2025-11-29 07:43:14.946 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:14 compute-2 podman[239065]: 2025-11-29 07:43:14.95226476 +0000 UTC m=+0.119450684 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.951 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3fa591-8d86-4e61-b0a8-b0beebe3449a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.973 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e55c8fa1-abca-4bc0-8ace-c00943e4f711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.975 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cf97c3fe-1ba3-4c2b-9d10-bb61908cbdff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:14.992 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb98cd14-8511-4ba3-bb7e-fd74f200eda8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514124, 'reachable_time': 44033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239132, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:15.006 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:43:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:15.007 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6ed43f-222c-40cf-8615-6de791fca9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:15 compute-2 systemd[1]: run-netns-ovnmeta\x2d6c117dd1\x2d5064\x2d4e69\x2db07c\x2dc93c3d729d3c.mount: Deactivated successfully.
Nov 29 07:43:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:16 compute-2 ovn_controller[134375]: 2025-11-29T07:43:16Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:b6:4a 10.100.0.10
Nov 29 07:43:16 compute-2 ovn_controller[134375]: 2025-11-29T07:43:16Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:b6:4a 10.100.0.10
Nov 29 07:43:16 compute-2 ceph-mon[77138]: pgmap v1158: 305 pgs: 305 active+clean; 395 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 132 op/s
Nov 29 07:43:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3294385100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:16 compute-2 ceph-mon[77138]: pgmap v1159: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.834 232432 DEBUG nova.compute.manager [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.835 232432 DEBUG oslo_concurrency.lockutils [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.836 232432 DEBUG oslo_concurrency.lockutils [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.836 232432 DEBUG oslo_concurrency.lockutils [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.836 232432 DEBUG nova.compute.manager [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] No waiting events found dispatching network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:43:16 compute-2 nova_compute[232428]: 2025-11-29 07:43:16.837 232432 WARNING nova.compute.manager [req-379d8995-f6bc-4c31-b67c-8febb213ce81 req-9f71034d-73c6-4395-899c-a70253f75751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received unexpected event network-vif-plugged-07f2a066-f271-4ee3-b719-aa65b4dda724 for instance with vm_state active and task_state deleting.
Nov 29 07:43:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:17.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:17 compute-2 nova_compute[232428]: 2025-11-29 07:43:17.779 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:17.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.231 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.875 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.875 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.876 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:43:18 compute-2 nova_compute[232428]: 2025-11-29 07:43:18.876 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58869189-493b-4d57-acc4-10881f62b251 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:43:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:19.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:19 compute-2 nova_compute[232428]: 2025-11-29 07:43:19.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:21.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.922 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updating instance_info_cache with network_info: [{"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.940 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-58869189-493b-4d57-acc4-10881f62b251" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.940 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.941 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.942 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.942 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.943 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.943 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.943 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.944 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.969 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.969 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.970 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.970 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:43:22 compute-2 nova_compute[232428]: 2025-11-29 07:43:22.971 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1746272014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.476 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.583 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.583 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.589 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.589 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:43:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:23.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.803 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.806 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4745MB free_disk=20.790821075439453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.806 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.807 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:23.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:23 compute-2 ceph-mon[77138]: pgmap v1160: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Nov 29 07:43:23 compute-2 ceph-mon[77138]: pgmap v1161: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.6 MiB/s wr, 224 op/s
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.918 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.918 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 58869189-493b-4d57-acc4-10881f62b251 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.919 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.919 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:43:23 compute-2 nova_compute[232428]: 2025-11-29 07:43:23.975 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1588500776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.462 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.471 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.514 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updated inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.515 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.515 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.542 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.543 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:24 compute-2 nova_compute[232428]: 2025-11-29 07:43:24.881 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:25 compute-2 sudo[239186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:25 compute-2 sudo[239186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:25 compute-2 sudo[239186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:25 compute-2 sudo[239211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:25 compute-2 sudo[239211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:25 compute-2 sudo[239211]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:25 compute-2 nova_compute[232428]: 2025-11-29 07:43:25.533 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:25 compute-2 nova_compute[232428]: 2025-11-29 07:43:25.534 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:25.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:26 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:43:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:27.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:27 compute-2 nova_compute[232428]: 2025-11-29 07:43:27.785 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:27.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:43:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2207151530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:43:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:43:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2207151530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:43:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:29.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.844 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402194.8435886, 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.845 232432 INFO nova.compute.manager [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] VM Stopped (Lifecycle Event)
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.872 232432 DEBUG nova.compute.manager [None req-8708857a-66f5-41da-808d-3dab19687597 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.876 232432 DEBUG nova.compute.manager [None req-8708857a-66f5-41da-808d-3dab19687597 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:43:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:29.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:29 compute-2 nova_compute[232428]: 2025-11-29 07:43:29.909 232432 INFO nova.compute.manager [None req-8708857a-66f5-41da-808d-3dab19687597 - - - - - -] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 29 07:43:30 compute-2 podman[239238]: 2025-11-29 07:43:30.737989382 +0000 UTC m=+0.114690065 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:43:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:32 compute-2 nova_compute[232428]: 2025-11-29 07:43:32.789 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:33 compute-2 ceph-mon[77138]: pgmap v1162: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.6 MiB/s wr, 270 op/s
Nov 29 07:43:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2428679417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:33 compute-2 ceph-mon[77138]: pgmap v1163: 305 pgs: 305 active+clean; 404 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 670 KiB/s wr, 219 op/s
Nov 29 07:43:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2878732894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1746272014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:33 compute-2 podman[239260]: 2025-11-29 07:43:33.679682061 +0000 UTC m=+0.076193838 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:43:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:43:34 compute-2 nova_compute[232428]: 2025-11-29 07:43:34.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:35.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:43:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:35.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:43:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:37 compute-2 nova_compute[232428]: 2025-11-29 07:43:37.792 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:37.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:37.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:39.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1164: 305 pgs: 305 active+clean; 404 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 208 KiB/s wr, 211 op/s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1588500776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1165: 305 pgs: 305 active+clean; 422 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 837 KiB/s wr, 145 op/s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1166: 305 pgs: 305 active+clean; 422 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 477 KiB/s rd, 836 KiB/s wr, 69 op/s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2207151530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:43:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2207151530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1167: 305 pgs: 305 active+clean; 442 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 482 KiB/s rd, 1.7 MiB/s wr, 74 op/s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1168: 305 pgs: 305 active+clean; 450 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 116 KiB/s rd, 2.9 MiB/s wr, 47 op/s
Nov 29 07:43:39 compute-2 ceph-mon[77138]: pgmap v1169: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Nov 29 07:43:39 compute-2 nova_compute[232428]: 2025-11-29 07:43:39.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:42 compute-2 ceph-mon[77138]: pgmap v1170: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 41 op/s
Nov 29 07:43:42 compute-2 ceph-mon[77138]: pgmap v1171: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.4 MiB/s wr, 32 op/s
Nov 29 07:43:42 compute-2 ceph-mon[77138]: pgmap v1172: 305 pgs: 305 active+clean; 475 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 3.4 MiB/s wr, 60 op/s
Nov 29 07:43:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1544090733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/640078798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:42 compute-2 nova_compute[232428]: 2025-11-29 07:43:42.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:43 compute-2 sshd-session[239285]: banner exchange: Connection from 184.105.247.195 port 20190: invalid format
Nov 29 07:43:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:43.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:44 compute-2 ovn_controller[134375]: 2025-11-29T07:43:44Z|00041|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 07:43:44 compute-2 ceph-mon[77138]: pgmap v1173: 305 pgs: 305 active+clean; 484 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 3.0 MiB/s wr, 65 op/s
Nov 29 07:43:44 compute-2 nova_compute[232428]: 2025-11-29 07:43:44.894 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:45 compute-2 podman[239288]: 2025-11-29 07:43:45.707195148 +0000 UTC m=+0.106673504 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:43:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:47 compute-2 nova_compute[232428]: 2025-11-29 07:43:47.797 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:49 compute-2 nova_compute[232428]: 2025-11-29 07:43:49.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:49.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:51 compute-2 ceph-mon[77138]: pgmap v1174: 305 pgs: 305 active+clean; 482 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 2.9 MiB/s wr, 63 op/s
Nov 29 07:43:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.662 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.663 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.663 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.664 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.664 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.666 232432 INFO nova.compute.manager [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Terminating instance
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.667 232432 DEBUG nova.compute.manager [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:43:52 compute-2 nova_compute[232428]: 2025-11-29 07:43:52.800 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:53.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:53 compute-2 sudo[239320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:53 compute-2 sudo[239320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:53 compute-2 sudo[239320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:54 compute-2 sudo[239345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:54 compute-2 sudo[239345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:54 compute-2 sudo[239345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:54 compute-2 sudo[239370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:54 compute-2 sudo[239370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:54 compute-2 sudo[239370]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:54 compute-2 ceph-mon[77138]: pgmap v1175: 305 pgs: 305 active+clean; 471 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Nov 29 07:43:54 compute-2 ceph-mon[77138]: pgmap v1176: 305 pgs: 305 active+clean; 471 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 29 07:43:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2376785573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:43:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:43:54 compute-2 ceph-mon[77138]: pgmap v1177: 305 pgs: 305 active+clean; 461 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Nov 29 07:43:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4037608517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:43:54 compute-2 ceph-mon[77138]: pgmap v1178: 305 pgs: 305 active+clean; 465 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 29 07:43:54 compute-2 kernel: tap7d8df1ab-c3 (unregistering): left promiscuous mode
Nov 29 07:43:54 compute-2 NetworkManager[48993]: <info>  [1764402234.3194] device (tap7d8df1ab-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:43:54 compute-2 sudo[239395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:43:54 compute-2 sudo[239395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:54 compute-2 ovn_controller[134375]: 2025-11-29T07:43:54Z|00042|binding|INFO|Releasing lport 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb from this chassis (sb_readonly=0)
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.376 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 sudo[239395]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:54 compute-2 ovn_controller[134375]: 2025-11-29T07:43:54Z|00043|binding|INFO|Setting lport 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb down in Southbound
Nov 29 07:43:54 compute-2 ovn_controller[134375]: 2025-11-29T07:43:54Z|00044|binding|INFO|Removing iface tap7d8df1ab-c3 ovn-installed in OVS
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.381 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:b6:4a 10.100.0.10'], port_security=['fa:16:3e:7e:b6:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '58869189-493b-4d57-acc4-10881f62b251', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0953a8181d0404daeae16ad65c53823', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62f7d9fe-d5cc-4772-b80b-035b36a2adf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa2fa06-fbfc-41d7-aa49-493c7e8260ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.383 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb in datapath 6fd1b7b4-729b-4264-a69c-0cec37de984c unbound from our chassis
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.385 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fd1b7b4-729b-4264-a69c-0cec37de984c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.387 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca369f5-939b-40bd-9e51-12bdcfb66d42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.388 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c namespace which is not needed anymore
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.394 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 29 07:43:54 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000005.scope: Consumed 17.393s CPU time.
Nov 29 07:43:54 compute-2 systemd-machined[194747]: Machine qemu-1-instance-00000005 terminated.
Nov 29 07:43:54 compute-2 kernel: tap7d8df1ab-c3: entered promiscuous mode
Nov 29 07:43:54 compute-2 NetworkManager[48993]: <info>  [1764402234.4937] manager: (tap7d8df1ab-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 07:43:54 compute-2 kernel: tap7d8df1ab-c3 (unregistering): left promiscuous mode
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.513 232432 INFO nova.virt.libvirt.driver [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] Instance destroyed successfully.
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.514 232432 DEBUG nova.objects.instance [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'resources' on Instance uuid 58869189-493b-4d57-acc4-10881f62b251 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.525 232432 DEBUG nova.virt.libvirt.vif [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-35716146',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-35716146',id=5,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:42:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-i18w6ju4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=58869189-493b-4d57-acc4-10881f62b251,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.526 232432 DEBUG nova.network.os_vif_util [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "address": "fa:16:3e:7e:b6:4a", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d8df1ab-c3", "ovs_interfaceid": "7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.527 232432 DEBUG nova.network.os_vif_util [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.527 232432 DEBUG os_vif [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.529 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d8df1ab-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.535 232432 INFO os_vif [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:b6:4a,bridge_name='br-int',has_traffic_filtering=True,id=7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d8df1ab-c3')
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [NOTICE]   (238738) : haproxy version is 2.8.14-c23fe91
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [NOTICE]   (238738) : path to executable is /usr/sbin/haproxy
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [WARNING]  (238738) : Exiting Master process...
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [WARNING]  (238738) : Exiting Master process...
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [ALERT]    (238738) : Current worker (238740) exited with code 143 (Terminated)
Nov 29 07:43:54 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[238734]: [WARNING]  (238738) : All workers exited. Exiting... (0)
Nov 29 07:43:54 compute-2 systemd[1]: libpod-2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e.scope: Deactivated successfully.
Nov 29 07:43:54 compute-2 podman[239446]: 2025-11-29 07:43:54.569633746 +0000 UTC m=+0.067053302 container died 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.598 232432 DEBUG nova.compute.manager [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-unplugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.599 232432 DEBUG oslo_concurrency.lockutils [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.599 232432 DEBUG oslo_concurrency.lockutils [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.599 232432 DEBUG oslo_concurrency.lockutils [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.600 232432 DEBUG nova.compute.manager [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] No waiting events found dispatching network-vif-unplugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.600 232432 DEBUG nova.compute.manager [req-d8b4a9e5-536c-4dae-958b-2a7c50dd4641 req-e9a1822a-f995-4010-9a92-b1ba5a3e3cab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-unplugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:43:54 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e-userdata-shm.mount: Deactivated successfully.
Nov 29 07:43:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-ea42e98ddcdb0b461ebedabecf5149e4521e0ccbb83fe62914a8bc17d04741ad-merged.mount: Deactivated successfully.
Nov 29 07:43:54 compute-2 podman[239446]: 2025-11-29 07:43:54.617782505 +0000 UTC m=+0.115202061 container cleanup 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:43:54 compute-2 systemd[1]: libpod-conmon-2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e.scope: Deactivated successfully.
Nov 29 07:43:54 compute-2 podman[239505]: 2025-11-29 07:43:54.706247238 +0000 UTC m=+0.058270598 container remove 2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.712 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb104f85-1e89-4c88-9b40-377117e291e0]: (4, ('Sat Nov 29 07:43:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c (2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e)\n2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e\nSat Nov 29 07:43:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c (2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e)\n2802c956839f8c24fffb64df8efcfd99f2e60299a58809a9daeee00dbc48a38e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.714 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b8190f-1ca7-4680-b9ab-3f359fad5368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.715 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fd1b7b4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.717 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.718 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:43:54 compute-2 kernel: tap6fd1b7b4-70: left promiscuous mode
Nov 29 07:43:54 compute-2 nova_compute[232428]: 2025-11-29 07:43:54.730 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.734 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74bb1ea6-b761-4247-8cc0-bdfd53441251]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.750 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2d80b6-7585-451b-83d4-920c8392ff59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.752 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb5c508-20e5-4adc-8f4e-174b4326f417]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.773 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4260fb32-f3d1-4d18-bc9c-dc25637eb5c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513382, 'reachable_time': 28346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239520, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 systemd[1]: run-netns-ovnmeta\x2d6fd1b7b4\x2d729b\x2d4264\x2da69c\x2d0cec37de984c.mount: Deactivated successfully.
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.779 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.779 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[55a94129-06b4-4dc3-9714-57ab5aa7f86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:43:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:54.780 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:43:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:55.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.783 232432 DEBUG nova.compute.manager [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:43:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:43:56.783 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.783 232432 DEBUG oslo_concurrency.lockutils [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58869189-493b-4d57-acc4-10881f62b251-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.784 232432 DEBUG oslo_concurrency.lockutils [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.784 232432 DEBUG oslo_concurrency.lockutils [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.784 232432 DEBUG nova.compute.manager [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] No waiting events found dispatching network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:43:56 compute-2 nova_compute[232428]: 2025-11-29 07:43:56.785 232432 WARNING nova.compute.manager [req-846a0db4-ee13-457f-84eb-2f816a70fcb9 req-63418c10-8268-4a43-b4c8-5779d6f96046 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received unexpected event network-vif-plugged-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb for instance with vm_state active and task_state deleting.
Nov 29 07:43:57 compute-2 ceph-mon[77138]: pgmap v1179: 305 pgs: 305 active+clean; 466 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 1.4 MiB/s wr, 77 op/s
Nov 29 07:43:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.284 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.285 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.316 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.449 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.450 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.462 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.462 232432 INFO nova.compute.claims [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.629 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:57 compute-2 nova_compute[232428]: 2025-11-29 07:43:57.802 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:43:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:43:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:58 compute-2 ceph-mon[77138]: pgmap v1180: 305 pgs: 305 active+clean; 470 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 598 KiB/s rd, 338 KiB/s wr, 100 op/s
Nov 29 07:43:58 compute-2 ceph-mon[77138]: pgmap v1181: 305 pgs: 305 active+clean; 470 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 192 KiB/s wr, 82 op/s
Nov 29 07:43:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2783458430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.256 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.263 232432 DEBUG nova.compute.provider_tree [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.510 232432 DEBUG nova.scheduler.client.report [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.542 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.543 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.590 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.591 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.632 232432 INFO nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.675 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.786 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.788 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.789 232432 INFO nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Creating image(s)
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.825 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.857 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.892 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.897 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.929 232432 DEBUG nova.policy [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '221a3978e81b4d679382df9385da9946', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0953a8181d0404daeae16ad65c53823', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.970 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.972 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.972 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:58 compute-2 nova_compute[232428]: 2025-11-29 07:43:58.973 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:59 compute-2 nova_compute[232428]: 2025-11-29 07:43:59.005 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:43:59 compute-2 nova_compute[232428]: 2025-11-29 07:43:59.009 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 800db21b-571e-4663-b1e4-846ab92adf04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:59 compute-2 nova_compute[232428]: 2025-11-29 07:43:59.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:43:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:43:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:43:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:43:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2783458430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.250 232432 INFO nova.virt.libvirt.driver [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Deleting instance files /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8_del
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.251 232432 INFO nova.virt.libvirt.driver [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Deletion of /var/lib/nova/instances/7a66ef22-a889-44ba-a9b5-cc657a2b00b8_del complete
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.487 232432 DEBUG nova.virt.libvirt.host [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.488 232432 INFO nova.virt.libvirt.host [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] UEFI support detected
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.491 232432 INFO nova.compute.manager [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Took 46.30 seconds to destroy the instance on the hypervisor.
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.492 232432 DEBUG oslo.service.loopingcall [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.493 232432 DEBUG nova.compute.manager [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:44:00 compute-2 nova_compute[232428]: 2025-11-29 07:44:00.493 232432 DEBUG nova.network.neutron [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.010 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 800db21b-571e-4663-b1e4-846ab92adf04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.001s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:01 compute-2 ceph-mon[77138]: pgmap v1182: 305 pgs: 305 active+clean; 474 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 272 KiB/s wr, 141 op/s
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.112 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Successfully created port: 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.121 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] resizing rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.592 232432 DEBUG nova.objects.instance [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'migration_context' on Instance uuid 800db21b-571e-4663-b1e4-846ab92adf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.646 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:01 compute-2 podman[239712]: 2025-11-29 07:44:01.667674317 +0000 UTC m=+0.069197629 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.677 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.681 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.681 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.682 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.712 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.713 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.757 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.758 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.789 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.793 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.815 232432 DEBUG nova.network.neutron [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.841 232432 INFO nova.compute.manager [-] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Took 1.35 seconds to deallocate network for instance.
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.850 232432 DEBUG nova.compute.manager [req-30af0a6d-2166-454b-b7c5-463fc3d951fb req-f8b02741-2c9d-485f-ab41-1e1808cc6659 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Received event network-vif-deleted-07f2a066-f271-4ee3-b719-aa65b4dda724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.850 232432 INFO nova.compute.manager [req-30af0a6d-2166-454b-b7c5-463fc3d951fb req-f8b02741-2c9d-485f-ab41-1e1808cc6659 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Neutron deleted interface 07f2a066-f271-4ee3-b719-aa65b4dda724; detaching it from the instance and deleting it from the info cache
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.851 232432 DEBUG nova.network.neutron [req-30af0a6d-2166-454b-b7c5-463fc3d951fb req-f8b02741-2c9d-485f-ab41-1e1808cc6659 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.871 232432 DEBUG nova.compute.manager [req-30af0a6d-2166-454b-b7c5-463fc3d951fb req-f8b02741-2c9d-485f-ab41-1e1808cc6659 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a66ef22-a889-44ba-a9b5-cc657a2b00b8] Detach interface failed, port_id=07f2a066-f271-4ee3-b719-aa65b4dda724, reason: Instance 7a66ef22-a889-44ba-a9b5-cc657a2b00b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.902 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:01 compute-2 nova_compute[232428]: 2025-11-29 07:44:01.903 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.060 232432 DEBUG oslo_concurrency.processutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:02 compute-2 ceph-mon[77138]: pgmap v1183: 305 pgs: 305 active+clean; 437 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 189 KiB/s wr, 182 op/s
Nov 29 07:44:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3944343326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.506 232432 DEBUG oslo_concurrency.processutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.514 232432 DEBUG nova.compute.provider_tree [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.531 232432 DEBUG nova.scheduler.client.report [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.552 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.580 232432 INFO nova.scheduler.client.report [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Deleted allocations for instance 7a66ef22-a889-44ba-a9b5-cc657a2b00b8
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.700 232432 DEBUG oslo_concurrency.lockutils [None req-988c5422-828f-4a17-a24e-bc7e15e6f61c cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a66ef22-a889-44ba-a9b5-cc657a2b00b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 48.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.728 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Successfully updated port: 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.744 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.745 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquired lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.745 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.861 232432 DEBUG nova.compute.manager [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-changed-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.861 232432 DEBUG nova.compute.manager [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Refreshing instance network info cache due to event network-changed-2a77d11e-13e0-4537-b43a-bb0ebbaf5617. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:44:02 compute-2 nova_compute[232428]: 2025-11-29 07:44:02.862 232432 DEBUG oslo_concurrency.lockutils [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.034 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:44:03 compute-2 sshd-session[239833]: Invalid user sol from 45.148.10.240 port 39902
Nov 29 07:44:03 compute-2 sshd-session[239833]: Connection closed by invalid user sol 45.148.10.240 port 39902 [preauth]
Nov 29 07:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:03.290 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:03.291 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:03.291 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.325 232432 INFO nova.virt.libvirt.driver [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Deleting instance files /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251_del
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.326 232432 INFO nova.virt.libvirt.driver [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Deletion of /var/lib/nova/instances/58869189-493b-4d57-acc4-10881f62b251_del complete
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.444 232432 INFO nova.compute.manager [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Took 10.78 seconds to destroy the instance on the hypervisor.
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.445 232432 DEBUG oslo.service.loopingcall [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.445 232432 DEBUG nova.compute.manager [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.446 232432 DEBUG nova.network.neutron [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:44:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:03.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:03 compute-2 nova_compute[232428]: 2025-11-29 07:44:03.973 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.175 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.176 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Ensure instance console log exists: /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.176 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.177 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.177 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3944343326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:04 compute-2 nova_compute[232428]: 2025-11-29 07:44:04.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:04 compute-2 podman[239890]: 2025-11-29 07:44:04.726883599 +0000 UTC m=+0.098985222 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:44:05 compute-2 ceph-mon[77138]: pgmap v1184: 305 pgs: 305 active+clean; 412 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 238 op/s
Nov 29 07:44:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1917334749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:05.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:05 compute-2 nova_compute[232428]: 2025-11-29 07:44:05.996 232432 DEBUG nova.network.neutron [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updating instance_info_cache with network_info: [{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.191 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Releasing lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.191 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Instance network_info: |[{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.191 232432 DEBUG oslo_concurrency.lockutils [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.192 232432 DEBUG nova.network.neutron [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Refreshing network info cache for port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.194 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Start _get_guest_xml network_info=[{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [{'size': 1, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.197 232432 DEBUG nova.network.neutron [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.203 232432 WARNING nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.211 232432 DEBUG nova.virt.libvirt.host [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.212 232432 DEBUG nova.virt.libvirt.host [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.216 232432 DEBUG nova.virt.libvirt.host [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.217 232432 DEBUG nova.virt.libvirt.host [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.218 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.218 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:41:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='20827970',id=3,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-991363589',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.219 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.219 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.219 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.220 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.220 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.220 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.220 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.220 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.221 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.221 232432 DEBUG nova.virt.hardware [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.224 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.325 232432 INFO nova.compute.manager [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] Took 2.88 seconds to deallocate network for instance.
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.417 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.418 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.484 232432 DEBUG oslo_concurrency.processutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:44:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/174858783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1744210631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.922 232432 DEBUG oslo_concurrency.processutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.928 232432 DEBUG nova.compute.provider_tree [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.961 232432 DEBUG nova.scheduler.client.report [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:44:06 compute-2 nova_compute[232428]: 2025-11-29 07:44:06.993 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.033 232432 INFO nova.scheduler.client.report [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Deleted allocations for instance 58869189-493b-4d57-acc4-10881f62b251
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.134 232432 DEBUG oslo_concurrency.lockutils [None req-0abb560b-8770-4a35-856c-3f5584adf5d4 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "58869189-493b-4d57-acc4-10881f62b251" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.338 232432 DEBUG nova.compute.manager [req-6dac59ba-9bd3-4bef-9e38-a69e8cc57c40 req-6e350535-a76c-4cd5-99c6-cd414196fbf3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58869189-493b-4d57-acc4-10881f62b251] Received event network-vif-deleted-7d8df1ab-c329-4fa1-b9a6-7e1c3b1699bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.712 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.714 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:07 compute-2 nova_compute[232428]: 2025-11-29 07:44:07.805 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:44:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2785161071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:08 compute-2 nova_compute[232428]: 2025-11-29 07:44:08.213 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:08 compute-2 nova_compute[232428]: 2025-11-29 07:44:08.236 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:08 compute-2 nova_compute[232428]: 2025-11-29 07:44:08.241 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:44:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1699947055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:08 compute-2 ceph-mon[77138]: pgmap v1185: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 267 op/s
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.444 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.447 232432 DEBUG nova.virt.libvirt.vif [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:43:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-971120006',id=8,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-rs333g49',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:43:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=800db21b-571e-4663-b1e4-846ab92adf04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.448 232432 DEBUG nova.network.os_vif_util [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.449 232432 DEBUG nova.network.os_vif_util [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.450 232432 DEBUG nova.objects.instance [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'pci_devices' on Instance uuid 800db21b-571e-4663-b1e4-846ab92adf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.512 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402234.5114388, 58869189-493b-4d57-acc4-10881f62b251 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.513 232432 INFO nova.compute.manager [-] [instance: 58869189-493b-4d57-acc4-10881f62b251] VM Stopped (Lifecycle Event)
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.570 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <uuid>800db21b-571e-4663-b1e4-846ab92adf04</uuid>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <name>instance-00000008</name>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-971120006</nova:name>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:44:06</nova:creationTime>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-991363589">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:ephemeral>1</nova:ephemeral>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:user uuid="221a3978e81b4d679382df9385da9946">tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member</nova:user>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:project uuid="e0953a8181d0404daeae16ad65c53823">tempest-ServersWithSpecificFlavorTestJSON-154328222</nova:project>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <nova:port uuid="2a77d11e-13e0-4537-b43a-bb0ebbaf5617">
Nov 29 07:44:09 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <system>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="serial">800db21b-571e-4663-b1e4-846ab92adf04</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="uuid">800db21b-571e-4663-b1e4-846ab92adf04</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </system>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <os>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </os>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <features>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </features>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/800db21b-571e-4663-b1e4-846ab92adf04_disk">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </source>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/800db21b-571e-4663-b1e4-846ab92adf04_disk.eph0">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </source>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/800db21b-571e-4663-b1e4-846ab92adf04_disk.config">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </source>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:44:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:d5:db:44"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <target dev="tap2a77d11e-13"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/console.log" append="off"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <video>
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </video>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:44:09 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:44:09 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:44:09 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:44:09 compute-2 nova_compute[232428]: </domain>
Nov 29 07:44:09 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.572 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Preparing to wait for external event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.573 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.574 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.574 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.576 232432 DEBUG nova.virt.libvirt.vif [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:43:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-971120006',id=8,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-rs333g49',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:43:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=800db21b-571e-4663-b1e4-846ab92adf04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.577 232432 DEBUG nova.network.os_vif_util [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.578 232432 DEBUG nova.network.os_vif_util [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.579 232432 DEBUG os_vif [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.581 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.582 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.587 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.587 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a77d11e-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.588 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a77d11e-13, col_values=(('external_ids', {'iface-id': '2a77d11e-13e0-4537-b43a-bb0ebbaf5617', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:db:44', 'vm-uuid': '800db21b-571e-4663-b1e4-846ab92adf04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:09 compute-2 NetworkManager[48993]: <info>  [1764402249.5910] manager: (tap2a77d11e-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.598 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.600 232432 INFO os_vif [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13')
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.713 232432 DEBUG nova.compute.manager [None req-5cb60049-ba13-4c27-b00d-9c2ae46b4e1d - - - - - -] [instance: 58869189-493b-4d57-acc4-10881f62b251] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:44:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.937 232432 DEBUG nova.network.neutron [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updated VIF entry in instance network info cache for port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:44:09 compute-2 nova_compute[232428]: 2025-11-29 07:44:09.938 232432 DEBUG nova.network.neutron [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updating instance_info_cache with network_info: [{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:09.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.200 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.201 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.201 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.202 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] No VIF found with MAC fa:16:3e:d5:db:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.202 232432 INFO nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Using config drive
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.252 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:10 compute-2 nova_compute[232428]: 2025-11-29 07:44:10.261 232432 DEBUG oslo_concurrency.lockutils [req-f89732a3-3824-4e5b-bb8e-f668a0bb2c3b req-3ffdd592-b2a7-41e4-b807-8e948b9c1b59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:44:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/174858783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1744210631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:10 compute-2 ceph-mon[77138]: pgmap v1186: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 225 op/s
Nov 29 07:44:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2785161071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1699947055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:10 compute-2 ceph-mon[77138]: pgmap v1187: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 255 op/s
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.324 232432 INFO nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Creating config drive at /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.332 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6pu4s7ih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.466 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6pu4s7ih" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.493 232432 DEBUG nova.storage.rbd_utils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] rbd image 800db21b-571e-4663-b1e4-846ab92adf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.496 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config 800db21b-571e-4663-b1e4-846ab92adf04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.630 232432 DEBUG oslo_concurrency.processutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config 800db21b-571e-4663-b1e4-846ab92adf04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.631 232432 INFO nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Deleting local config drive /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04/disk.config because it was imported into RBD.
Nov 29 07:44:11 compute-2 kernel: tap2a77d11e-13: entered promiscuous mode
Nov 29 07:44:11 compute-2 NetworkManager[48993]: <info>  [1764402251.6868] manager: (tap2a77d11e-13): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 07:44:11 compute-2 ovn_controller[134375]: 2025-11-29T07:44:11Z|00045|binding|INFO|Claiming lport 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 for this chassis.
Nov 29 07:44:11 compute-2 ovn_controller[134375]: 2025-11-29T07:44:11Z|00046|binding|INFO|2a77d11e-13e0-4537-b43a-bb0ebbaf5617: Claiming fa:16:3e:d5:db:44 10.100.0.11
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.689 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.698 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:db:44 10.100.0.11'], port_security=['fa:16:3e:d5:db:44 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800db21b-571e-4663-b1e4-846ab92adf04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0953a8181d0404daeae16ad65c53823', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62f7d9fe-d5cc-4772-b80b-035b36a2adf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa2fa06-fbfc-41d7-aa49-493c7e8260ef, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=2a77d11e-13e0-4537-b43a-bb0ebbaf5617) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.699 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 in datapath 6fd1b7b4-729b-4264-a69c-0cec37de984c bound to our chassis
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.701 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:44:11 compute-2 ovn_controller[134375]: 2025-11-29T07:44:11Z|00047|binding|INFO|Setting lport 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 ovn-installed in OVS
Nov 29 07:44:11 compute-2 ovn_controller[134375]: 2025-11-29T07:44:11Z|00048|binding|INFO|Setting lport 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 up in Southbound
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.707 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:11 compute-2 nova_compute[232428]: 2025-11-29 07:44:11.709 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.713 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8d090afe-3494-47c5-ace2-81c265612d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.714 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fd1b7b4-71 in ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.716 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fd1b7b4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.716 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[528af6fd-f261-4d39-89c1-4214210a0ea6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.718 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12a83a13-0a7f-4575-9975-06db1ce91d32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 systemd-udevd[240096]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:44:11 compute-2 NetworkManager[48993]: <info>  [1764402251.7337] device (tap2a77d11e-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:44:11 compute-2 systemd-machined[194747]: New machine qemu-3-instance-00000008.
Nov 29 07:44:11 compute-2 NetworkManager[48993]: <info>  [1764402251.7350] device (tap2a77d11e-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.738 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[266c114a-4be6-4298-b63c-047c2d597bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.755 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd8f6d0-9139-417e-89bf-7586603888f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.786 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[be7e3049-fbea-4251-b74c-9784af41f3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 systemd-udevd[240100]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:44:11 compute-2 NetworkManager[48993]: <info>  [1764402251.7967] manager: (tap6fd1b7b4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.796 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf2fa77-ceaf-47af-8aeb-25c013e4d06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.833 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[44a3492e-9ba7-446a-b648-88a10b625d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.836 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe9b16-711c-4b4b-9c1d-5be709bf4717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:11 compute-2 NetworkManager[48993]: <info>  [1764402251.8606] device (tap6fd1b7b4-70): carrier: link connected
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.869 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[db79a92e-a3ba-4dde-a8c6-5a00d1eb210e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.888 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bda01268-b5f9-465d-8eaa-2956b1be88b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fd1b7b4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:9c:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521812, 'reachable_time': 16447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240129, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.909 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8904b936-aac5-4ceb-8f8b-7426920a86dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:9ce1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521812, 'tstamp': 521812}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240130, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.931 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf59075c-a548-42ae-a339-d59ff1c847fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fd1b7b4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:9c:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521812, 'reachable_time': 16447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240131, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:11.962 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a9557844-ce4d-4e77-a087-de030b1021e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.020 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80f7afbd-3ac5-47c2-855d-2d66ce723f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.022 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fd1b7b4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.022 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.022 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fd1b7b4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.080 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:12 compute-2 kernel: tap6fd1b7b4-70: entered promiscuous mode
Nov 29 07:44:12 compute-2 NetworkManager[48993]: <info>  [1764402252.0820] manager: (tap6fd1b7b4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.083 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.084 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fd1b7b4-70, col_values=(('external_ids', {'iface-id': 'a8a25a5f-1fe9-4fe3-8e65-adbb9f372b77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.085 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:12 compute-2 ovn_controller[134375]: 2025-11-29T07:44:12Z|00049|binding|INFO|Releasing lport a8a25a5f-1fe9-4fe3-8e65-adbb9f372b77 from this chassis (sb_readonly=0)
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.100 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.102 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a90ca0f5-21ae-4207-bf0c-150a6fb92978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.103 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/6fd1b7b4-729b-4264-a69c-0cec37de984c.pid.haproxy
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 6fd1b7b4-729b-4264-a69c-0cec37de984c
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:44:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:12.104 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'env', 'PROCESS_TAG=haproxy-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fd1b7b4-729b-4264-a69c-0cec37de984c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.163 232432 DEBUG nova.compute.manager [req-6ae78e73-f1e5-4249-847a-fcd14a23bb25 req-ac0c4d21-9bba-4a39-a600-cd59aab2defc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.164 232432 DEBUG oslo_concurrency.lockutils [req-6ae78e73-f1e5-4249-847a-fcd14a23bb25 req-ac0c4d21-9bba-4a39-a600-cd59aab2defc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.164 232432 DEBUG oslo_concurrency.lockutils [req-6ae78e73-f1e5-4249-847a-fcd14a23bb25 req-ac0c4d21-9bba-4a39-a600-cd59aab2defc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.164 232432 DEBUG oslo_concurrency.lockutils [req-6ae78e73-f1e5-4249-847a-fcd14a23bb25 req-ac0c4d21-9bba-4a39-a600-cd59aab2defc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.165 232432 DEBUG nova.compute.manager [req-6ae78e73-f1e5-4249-847a-fcd14a23bb25 req-ac0c4d21-9bba-4a39-a600-cd59aab2defc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Processing event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:44:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:12 compute-2 podman[240181]: 2025-11-29 07:44:12.493666755 +0000 UTC m=+0.058224355 container create 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:44:12 compute-2 systemd[1]: Started libpod-conmon-5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833.scope.
Nov 29 07:44:12 compute-2 podman[240181]: 2025-11-29 07:44:12.462595562 +0000 UTC m=+0.027153182 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:44:12 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:44:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14faba3d24cdfc729e9805b16c721d427c15baef1f44c957c96df1db64f02b05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:12 compute-2 podman[240181]: 2025-11-29 07:44:12.585646477 +0000 UTC m=+0.150204097 container init 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:44:12 compute-2 podman[240181]: 2025-11-29 07:44:12.597033074 +0000 UTC m=+0.161590704 container start 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:44:12 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [NOTICE]   (240201) : New worker (240203) forked
Nov 29 07:44:12 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [NOTICE]   (240201) : Loading success.
Nov 29 07:44:12 compute-2 nova_compute[232428]: 2025-11-29 07:44:12.808 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:13.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:13.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:14 compute-2 sudo[240213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:14 compute-2 sudo[240213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:14 compute-2 sudo[240213]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:14 compute-2 sudo[240238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:14 compute-2 sudo[240238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:14 compute-2 sudo[240238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.625 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.663 232432 DEBUG nova.compute.manager [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.663 232432 DEBUG oslo_concurrency.lockutils [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.664 232432 DEBUG oslo_concurrency.lockutils [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.664 232432 DEBUG oslo_concurrency.lockutils [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.665 232432 DEBUG nova.compute.manager [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] No waiting events found dispatching network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.665 232432 WARNING nova.compute.manager [req-72a1d0f1-8de4-4d10-b085-b4f19058129a req-b8e49511-e17f-4db0-a62b-42669fcc4c41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received unexpected event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 for instance with vm_state building and task_state spawning.
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.814 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.815 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402254.815261, 800db21b-571e-4663-b1e4-846ab92adf04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.816 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] VM Started (Lifecycle Event)
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.819 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.823 232432 INFO nova.virt.libvirt.driver [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Instance spawned successfully.
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.823 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.848 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.853 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.854 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.854 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.855 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.855 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.856 232432 DEBUG nova.virt.libvirt.driver [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.861 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.910 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.911 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402254.8159611, 800db21b-571e-4663-b1e4-846ab92adf04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.911 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] VM Paused (Lifecycle Event)
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.966 232432 INFO nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Took 16.18 seconds to spawn the instance on the hypervisor.
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.967 232432 DEBUG nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.969 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.981 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402254.8181021, 800db21b-571e-4663-b1e4-846ab92adf04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:44:14 compute-2 nova_compute[232428]: 2025-11-29 07:44:14.982 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] VM Resumed (Lifecycle Event)
Nov 29 07:44:15 compute-2 nova_compute[232428]: 2025-11-29 07:44:15.019 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:44:15 compute-2 nova_compute[232428]: 2025-11-29 07:44:15.027 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:44:15 compute-2 nova_compute[232428]: 2025-11-29 07:44:15.066 232432 INFO nova.compute.manager [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Took 17.65 seconds to build instance.
Nov 29 07:44:15 compute-2 nova_compute[232428]: 2025-11-29 07:44:15.091 232432 DEBUG oslo_concurrency.lockutils [None req-16ccdc2c-cf80-4eb5-afb5-afe1ccddcce0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:15.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:16 compute-2 ceph-mon[77138]: pgmap v1188: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Nov 29 07:44:16 compute-2 podman[240307]: 2025-11-29 07:44:16.709202151 +0000 UTC m=+0.105026902 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:44:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:17 compute-2 nova_compute[232428]: 2025-11-29 07:44:17.811 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:17.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:18 compute-2 nova_compute[232428]: 2025-11-29 07:44:18.012 232432 DEBUG nova.compute.manager [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-changed-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:18 compute-2 nova_compute[232428]: 2025-11-29 07:44:18.013 232432 DEBUG nova.compute.manager [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Refreshing instance network info cache due to event network-changed-2a77d11e-13e0-4537-b43a-bb0ebbaf5617. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:44:18 compute-2 nova_compute[232428]: 2025-11-29 07:44:18.013 232432 DEBUG oslo_concurrency.lockutils [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:44:18 compute-2 nova_compute[232428]: 2025-11-29 07:44:18.013 232432 DEBUG oslo_concurrency.lockutils [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:44:18 compute-2 nova_compute[232428]: 2025-11-29 07:44:18.013 232432 DEBUG nova.network.neutron [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Refreshing network info cache for port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.240 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.241 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:19 compute-2 ceph-mon[77138]: pgmap v1189: 305 pgs: 305 active+clean; 399 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 29 07:44:19 compute-2 ceph-mon[77138]: pgmap v1190: 305 pgs: 305 active+clean; 357 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 565 KiB/s wr, 103 op/s
Nov 29 07:44:19 compute-2 ceph-mon[77138]: pgmap v1191: 305 pgs: 305 active+clean; 357 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 36 KiB/s wr, 62 op/s
Nov 29 07:44:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1488167004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.732 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:19.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.855 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.857 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:44:19 compute-2 nova_compute[232428]: 2025-11-29 07:44:19.857 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:44:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:19.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.073 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.074 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4771MB free_disk=20.785533905029297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.075 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.075 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.251 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 800db21b-571e-4663-b1e4-846ab92adf04 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 2}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.253 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.253 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.336 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1821450447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.787 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.794 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.816 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:44:20 compute-2 ceph-mon[77138]: pgmap v1192: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 112 op/s
Nov 29 07:44:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1488167004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.857 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:44:20 compute-2 nova_compute[232428]: 2025-11-29 07:44:20.858 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.549 232432 DEBUG nova.network.neutron [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updated VIF entry in instance network info cache for port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.549 232432 DEBUG nova.network.neutron [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updating instance_info_cache with network_info: [{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.677 232432 DEBUG oslo_concurrency.lockutils [req-cfe3bbd2-a8fb-49b0-ae15-be73de7d614d req-685b5d07-81b0-4704-a4a1-a240866dff6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.848 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.848 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:21.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.866 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.866 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:44:21 compute-2 nova_compute[232428]: 2025-11-29 07:44:21.867 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:44:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:22 compute-2 nova_compute[232428]: 2025-11-29 07:44:22.328 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:44:22 compute-2 nova_compute[232428]: 2025-11-29 07:44:22.329 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:44:22 compute-2 nova_compute[232428]: 2025-11-29 07:44:22.329 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:44:22 compute-2 nova_compute[232428]: 2025-11-29 07:44:22.330 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 800db21b-571e-4663-b1e4-846ab92adf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:44:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1821450447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2719169375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:44:22 compute-2 ceph-mon[77138]: pgmap v1193: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Nov 29 07:44:22 compute-2 nova_compute[232428]: 2025-11-29 07:44:22.814 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:24 compute-2 ceph-mon[77138]: pgmap v1194: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 105 op/s
Nov 29 07:44:24 compute-2 nova_compute[232428]: 2025-11-29 07:44:24.678 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.143 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updating instance_info_cache with network_info: [{"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.160 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-800db21b-571e-4663-b1e4-846ab92adf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.161 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.162 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.163 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.163 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.163 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.163 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:44:25 compute-2 nova_compute[232428]: 2025-11-29 07:44:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:25.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1393983249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:27 compute-2 nova_compute[232428]: 2025-11-29 07:44:27.818 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:27.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:28 compute-2 nova_compute[232428]: 2025-11-29 07:44:28.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:29 compute-2 nova_compute[232428]: 2025-11-29 07:44:29.682 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:29.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:29.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:31.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:31.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:32 compute-2 podman[240386]: 2025-11-29 07:44:32.736423802 +0000 UTC m=+0.080417311 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:44:32 compute-2 nova_compute[232428]: 2025-11-29 07:44:32.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:33.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:33.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:34 compute-2 sudo[240407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:34 compute-2 sudo[240407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:34 compute-2 sudo[240407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:34 compute-2 sudo[240432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:34 compute-2 sudo[240432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:34 compute-2 sudo[240432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 07:44:34 compute-2 ceph-mon[77138]: pgmap v1195: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 99 op/s
Nov 29 07:44:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1276293911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:34 compute-2 ceph-mon[77138]: pgmap v1196: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 86 op/s
Nov 29 07:44:34 compute-2 nova_compute[232428]: 2025-11-29 07:44:34.685 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:35 compute-2 podman[240458]: 2025-11-29 07:44:35.68507737 +0000 UTC m=+0.076124817 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 07:44:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:35.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3747782361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:44:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3747782361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:44:36 compute-2 ceph-mon[77138]: pgmap v1197: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 94 op/s
Nov 29 07:44:36 compute-2 ceph-mon[77138]: pgmap v1198: 305 pgs: 305 active+clean; 330 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 608 KiB/s rd, 240 KiB/s wr, 56 op/s
Nov 29 07:44:36 compute-2 ceph-mon[77138]: pgmap v1199: 305 pgs: 305 active+clean; 330 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 241 KiB/s wr, 37 op/s
Nov 29 07:44:36 compute-2 ceph-mon[77138]: pgmap v1200: 305 pgs: 305 active+clean; 335 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 713 KiB/s wr, 27 op/s
Nov 29 07:44:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:37 compute-2 nova_compute[232428]: 2025-11-29 07:44:37.824 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:38 compute-2 nova_compute[232428]: 2025-11-29 07:44:38.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/484426313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:38 compute-2 ceph-mon[77138]: pgmap v1201: 305 pgs: 305 active+clean; 335 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 709 KiB/s wr, 23 op/s
Nov 29 07:44:39 compute-2 nova_compute[232428]: 2025-11-29 07:44:39.688 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:44:40 compute-2 ceph-mon[77138]: pgmap v1202: 305 pgs: 305 active+clean; 344 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 40 op/s
Nov 29 07:44:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:41 compute-2 ovn_controller[134375]: 2025-11-29T07:44:41Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:db:44 10.100.0.11
Nov 29 07:44:41 compute-2 ovn_controller[134375]: 2025-11-29T07:44:41Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:db:44 10.100.0.11
Nov 29 07:44:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1938226935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:42 compute-2 ceph-mon[77138]: pgmap v1203: 305 pgs: 305 active+clean; 349 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Nov 29 07:44:42 compute-2 nova_compute[232428]: 2025-11-29 07:44:42.826 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:43.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:44.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:44 compute-2 nova_compute[232428]: 2025-11-29 07:44:44.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:44 compute-2 ceph-mon[77138]: pgmap v1204: 305 pgs: 305 active+clean; 349 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 07:44:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2077925806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:45.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:45 compute-2 ceph-mon[77138]: pgmap v1205: 305 pgs: 305 active+clean; 353 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 262 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Nov 29 07:44:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:46 compute-2 ovn_controller[134375]: 2025-11-29T07:44:46Z|00050|memory|INFO|peak resident set size grew 50% in last 1294.3 seconds, from 16256 kB to 24408 kB
Nov 29 07:44:46 compute-2 ovn_controller[134375]: 2025-11-29T07:44:46Z|00051|memory|INFO|idl-cells-OVN_Southbound:11542 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:412 lflow-cache-entries-cache-matches:309 lflow-cache-size-KB:1745 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:695 ofctrl_installed_flow_usage-KB:509 ofctrl_sb_flow_ref_usage-KB:260
Nov 29 07:44:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2030426042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:44:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2030426042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:44:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:47 compute-2 podman[240486]: 2025-11-29 07:44:47.69230119 +0000 UTC m=+0.087321658 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:44:47 compute-2 nova_compute[232428]: 2025-11-29 07:44:47.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:49 compute-2 nova_compute[232428]: 2025-11-29 07:44:49.694 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:49.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.432 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.433 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.433 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.433 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.434 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.436 232432 INFO nova.compute.manager [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Terminating instance
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.437 232432 DEBUG nova.compute.manager [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:44:51 compute-2 ceph-mon[77138]: pgmap v1206: 305 pgs: 305 active+clean; 353 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 262 KiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 29 07:44:51 compute-2 nova_compute[232428]: 2025-11-29 07:44:51.844 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:51.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:52.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:52 compute-2 kernel: tap2a77d11e-13 (unregistering): left promiscuous mode
Nov 29 07:44:52 compute-2 NetworkManager[48993]: <info>  [1764402292.1474] device (tap2a77d11e-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 ovn_controller[134375]: 2025-11-29T07:44:52Z|00052|binding|INFO|Releasing lport 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 from this chassis (sb_readonly=0)
Nov 29 07:44:52 compute-2 ovn_controller[134375]: 2025-11-29T07:44:52Z|00053|binding|INFO|Setting lport 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 down in Southbound
Nov 29 07:44:52 compute-2 ovn_controller[134375]: 2025-11-29T07:44:52Z|00054|binding|INFO|Removing iface tap2a77d11e-13 ovn-installed in OVS
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.203 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.210 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:db:44 10.100.0.11'], port_security=['fa:16:3e:d5:db:44 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800db21b-571e-4663-b1e4-846ab92adf04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0953a8181d0404daeae16ad65c53823', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62f7d9fe-d5cc-4772-b80b-035b36a2adf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa2fa06-fbfc-41d7-aa49-493c7e8260ef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=2a77d11e-13e0-4537-b43a-bb0ebbaf5617) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.211 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 2a77d11e-13e0-4537-b43a-bb0ebbaf5617 in datapath 6fd1b7b4-729b-4264-a69c-0cec37de984c unbound from our chassis
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.213 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fd1b7b4-729b-4264-a69c-0cec37de984c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.217 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2d22a710-ccbe-4140-8c80-4a8c42787810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.215 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c namespace which is not needed anymore
Nov 29 07:44:52 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 29 07:44:52 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 16.250s CPU time.
Nov 29 07:44:52 compute-2 systemd-machined[194747]: Machine qemu-3-instance-00000008 terminated.
Nov 29 07:44:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [NOTICE]   (240201) : haproxy version is 2.8.14-c23fe91
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [NOTICE]   (240201) : path to executable is /usr/sbin/haproxy
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [WARNING]  (240201) : Exiting Master process...
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [WARNING]  (240201) : Exiting Master process...
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [ALERT]    (240201) : Current worker (240203) exited with code 143 (Terminated)
Nov 29 07:44:52 compute-2 neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c[240197]: [WARNING]  (240201) : All workers exited. Exiting... (0)
Nov 29 07:44:52 compute-2 systemd[1]: libpod-5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833.scope: Deactivated successfully.
Nov 29 07:44:52 compute-2 conmon[240197]: conmon 5d56e639af998d42eb19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833.scope/container/memory.events
Nov 29 07:44:52 compute-2 podman[240538]: 2025-11-29 07:44:52.380157077 +0000 UTC m=+0.047450218 container died 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:44:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-14faba3d24cdfc729e9805b16c721d427c15baef1f44c957c96df1db64f02b05-merged.mount: Deactivated successfully.
Nov 29 07:44:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833-userdata-shm.mount: Deactivated successfully.
Nov 29 07:44:52 compute-2 podman[240538]: 2025-11-29 07:44:52.417115084 +0000 UTC m=+0.084408215 container cleanup 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:44:52 compute-2 systemd[1]: libpod-conmon-5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833.scope: Deactivated successfully.
Nov 29 07:44:52 compute-2 podman[240567]: 2025-11-29 07:44:52.484040111 +0000 UTC m=+0.046866370 container remove 5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.486 232432 INFO nova.virt.libvirt.driver [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Instance destroyed successfully.
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.487 232432 DEBUG nova.objects.instance [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lazy-loading 'resources' on Instance uuid 800db21b-571e-4663-b1e4-846ab92adf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[40d6437b-4f6e-4651-b842-90bff6ec2a04]: (4, ('Sat Nov 29 07:44:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c (5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833)\n5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833\nSat Nov 29 07:44:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c (5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833)\n5d56e639af998d42eb1983ee3b531c01b0767b0e6d5cb7488c74793d3d6f4833\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.493 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eace8b9d-6595-4b14-96a4-9a4c765ae197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.494 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fd1b7b4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 kernel: tap6fd1b7b4-70: left promiscuous mode
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.517 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9f94b463-c0aa-403b-8e08-e73d0a89f934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.524 232432 DEBUG nova.virt.libvirt.vif [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:43:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-971120006',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-971120006',id=8,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLp0x4N8szDizBZ98biJD1BkZ+fznvZiI4SUYlYOYZFP0LrOPI+yRZ0laJvGK045m/5OWEBOSlYzn9ZiN4Xlu8aeH1QoQBvXp+h49IhlPXx8vxZhKVl5ce9/GNBtqcifgg==',key_name='tempest-keypair-1480640792',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:44:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0953a8181d0404daeae16ad65c53823',ramdisk_id='',reservation_id='r-rs333g49',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-154328222',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-154328222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:44:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='221a3978e81b4d679382df9385da9946',uuid=800db21b-571e-4663-b1e4-846ab92adf04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.524 232432 DEBUG nova.network.os_vif_util [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converting VIF {"id": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "address": "fa:16:3e:d5:db:44", "network": {"id": "6fd1b7b4-729b-4264-a69c-0cec37de984c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2004112973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0953a8181d0404daeae16ad65c53823", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a77d11e-13", "ovs_interfaceid": "2a77d11e-13e0-4537-b43a-bb0ebbaf5617", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.525 232432 DEBUG nova.network.os_vif_util [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.526 232432 DEBUG os_vif [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.528 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.528 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a77d11e-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.533 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.534 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[38d928ff-085c-4d34-b32e-22a14d268998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.536 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b9b3b0-5261-40ff-85e3-7f85f3d0371c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.536 232432 INFO os_vif [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:44,bridge_name='br-int',has_traffic_filtering=True,id=2a77d11e-13e0-4537-b43a-bb0ebbaf5617,network=Network(6fd1b7b4-729b-4264-a69c-0cec37de984c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a77d11e-13')
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.554 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b231417-c99d-4a97-bb24-3db3d333901f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521805, 'reachable_time': 28125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240597, 'error': None, 'target': 'ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.559 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fd1b7b4-729b-4264-a69c-0cec37de984c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:44:52 compute-2 systemd[1]: run-netns-ovnmeta\x2d6fd1b7b4\x2d729b\x2d4264\x2da69c\x2d0cec37de984c.mount: Deactivated successfully.
Nov 29 07:44:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:44:52.559 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[68153d84-7419-4af0-9822-3c6b157b4b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.668 232432 DEBUG nova.compute.manager [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-unplugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.668 232432 DEBUG oslo_concurrency.lockutils [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.668 232432 DEBUG oslo_concurrency.lockutils [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.669 232432 DEBUG oslo_concurrency.lockutils [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.669 232432 DEBUG nova.compute.manager [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] No waiting events found dispatching network-vif-unplugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.669 232432 DEBUG nova.compute.manager [req-cd29d9a3-0453-42e6-98a6-024c6f39b3e7 req-aa301d7c-92bc-494f-8a30-b96bbae4321a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-unplugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:44:52 compute-2 ceph-mon[77138]: pgmap v1207: 305 pgs: 305 active+clean; 334 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 293 KiB/s rd, 1.4 MiB/s wr, 83 op/s
Nov 29 07:44:52 compute-2 ceph-mon[77138]: pgmap v1208: 305 pgs: 305 active+clean; 310 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 324 KiB/s rd, 417 KiB/s wr, 95 op/s
Nov 29 07:44:52 compute-2 nova_compute[232428]: 2025-11-29 07:44:52.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:53.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:54 compute-2 sudo[240617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:54 compute-2 sudo[240617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240617]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:54 compute-2 sudo[240642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:54 compute-2 sudo[240643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:54 compute-2 sudo[240642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240643]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:54 compute-2 sudo[240642]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:54 compute-2 sudo[240693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:54 compute-2 sudo[240692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:54 compute-2 sudo[240693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240693]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:54 compute-2 sudo[240692]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:54 compute-2 sudo[240742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:44:54 compute-2 sudo[240742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:54 compute-2 sudo[240742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:55 compute-2 ceph-mon[77138]: pgmap v1209: 305 pgs: 305 active+clean; 253 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 329 KiB/s rd, 168 KiB/s wr, 100 op/s
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.718 232432 DEBUG nova.compute.manager [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.719 232432 DEBUG oslo_concurrency.lockutils [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "800db21b-571e-4663-b1e4-846ab92adf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.719 232432 DEBUG oslo_concurrency.lockutils [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.720 232432 DEBUG oslo_concurrency.lockutils [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.720 232432 DEBUG nova.compute.manager [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] No waiting events found dispatching network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:44:55 compute-2 nova_compute[232428]: 2025-11-29 07:44:55.720 232432 WARNING nova.compute.manager [req-97777ac3-9ba7-4562-a76b-26b84cb32a68 req-a94edc1b-989c-4be7-8fcc-464956f37840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received unexpected event network-vif-plugged-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 for instance with vm_state active and task_state deleting.
Nov 29 07:44:55 compute-2 sudo[240788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:55 compute-2 sudo[240788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:55 compute-2 sudo[240788]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:55 compute-2 sudo[240813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:55 compute-2 sudo[240813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:55 compute-2 sudo[240813]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:55 compute-2 sudo[240838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:55 compute-2 sudo[240838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:55 compute-2 sudo[240838]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:44:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:56.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:44:56 compute-2 sudo[240863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:44:56 compute-2 sudo[240863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:56 compute-2 sudo[240863]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:44:57 compute-2 nova_compute[232428]: 2025-11-29 07:44:57.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:44:57 compute-2 ceph-mon[77138]: pgmap v1210: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 334 KiB/s rd, 129 KiB/s wr, 116 op/s
Nov 29 07:44:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:44:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:44:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:44:57 compute-2 nova_compute[232428]: 2025-11-29 07:44:57.833 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:44:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:57.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:44:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: pgmap v1211: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 95 KiB/s rd, 76 KiB/s wr, 81 op/s
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4130116835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1116700311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:44:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:44:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:59.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:00 compute-2 ceph-mon[77138]: pgmap v1212: 305 pgs: 305 active+clean; 147 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 101 KiB/s rd, 77 KiB/s wr, 90 op/s
Nov 29 07:45:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:02.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:02 compute-2 nova_compute[232428]: 2025-11-29 07:45:02.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2367818218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:02 compute-2 ceph-mon[77138]: pgmap v1213: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 16 KiB/s wr, 92 op/s
Nov 29 07:45:02 compute-2 nova_compute[232428]: 2025-11-29 07:45:02.836 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:03.291 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:03.292 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:03.292 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:03.548 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:03.550 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:45:03 compute-2 nova_compute[232428]: 2025-11-29 07:45:03.550 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:03 compute-2 podman[240924]: 2025-11-29 07:45:03.682881752 +0000 UTC m=+0.074436393 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:45:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:04.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:45:05.552 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:45:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:06.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:06 compute-2 ceph-mon[77138]: pgmap v1214: 305 pgs: 305 active+clean; 135 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 259 KiB/s wr, 66 op/s
Nov 29 07:45:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/481720125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:06 compute-2 podman[240944]: 2025-11-29 07:45:06.67815102 +0000 UTC m=+0.077322923 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 07:45:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.486 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402292.4845963, 800db21b-571e-4663-b1e4-846ab92adf04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.487 232432 INFO nova.compute.manager [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] VM Stopped (Lifecycle Event)
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.518 232432 DEBUG nova.compute.manager [None req-121f1dbe-11b8-42ec-979a-1c3340961da7 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.523 232432 DEBUG nova.compute.manager [None req-121f1dbe-11b8-42ec-979a-1c3340961da7 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.534 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.542 232432 INFO nova.compute.manager [None req-121f1dbe-11b8-42ec-979a-1c3340961da7 - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 29 07:45:07 compute-2 nova_compute[232428]: 2025-11-29 07:45:07.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:07.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:09.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:10.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:10 compute-2 ceph-mon[77138]: pgmap v1215: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 29 07:45:10 compute-2 nova_compute[232428]: 2025-11-29 07:45:10.382 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:10 compute-2 nova_compute[232428]: 2025-11-29 07:45:10.781 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:12 compute-2 nova_compute[232428]: 2025-11-29 07:45:12.535 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:12 compute-2 nova_compute[232428]: 2025-11-29 07:45:12.840 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:13.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:14.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:14 compute-2 sudo[240969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:14 compute-2 sudo[240969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-2 sudo[240969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-2 sudo[240994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:14 compute-2 sudo[240994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-2 sudo[240994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:16.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:16 compute-2 ceph-mon[77138]: pgmap v1216: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 07:45:16 compute-2 ceph-mon[77138]: pgmap v1217: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 29 07:45:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:17 compute-2 nova_compute[232428]: 2025-11-29 07:45:17.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:17 compute-2 nova_compute[232428]: 2025-11-29 07:45:17.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:17.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:18 compute-2 podman[241021]: 2025-11-29 07:45:18.705593445 +0000 UTC m=+0.105523162 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.236 232432 INFO nova.virt.libvirt.driver [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Deleting instance files /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04_del
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.237 232432 INFO nova.virt.libvirt.driver [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Deletion of /var/lib/nova/instances/800db21b-571e-4663-b1e4-846ab92adf04_del complete
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.329 232432 INFO nova.compute.manager [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Took 27.89 seconds to destroy the instance on the hypervisor.
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.330 232432 DEBUG oslo.service.loopingcall [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.330 232432 DEBUG nova.compute.manager [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:45:19 compute-2 nova_compute[232428]: 2025-11-29 07:45:19.330 232432 DEBUG nova.network.neutron [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:45:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:19.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:20 compute-2 nova_compute[232428]: 2025-11-29 07:45:20.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3817860258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:20 compute-2 ceph-mon[77138]: pgmap v1218: 305 pgs: 305 active+clean; 185 MiB data, 338 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 2.4 MiB/s wr, 70 op/s
Nov 29 07:45:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3497740403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:20 compute-2 ceph-mon[77138]: pgmap v1219: 305 pgs: 305 active+clean; 190 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.9 MiB/s wr, 51 op/s
Nov 29 07:45:20 compute-2 ceph-mon[77138]: pgmap v1220: 305 pgs: 305 active+clean; 190 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.217 232432 DEBUG nova.network.neutron [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.233 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.233 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.234 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.235 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.244 232432 INFO nova.compute.manager [-] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Took 1.91 seconds to deallocate network for instance.
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.298 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.298 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.298 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.299 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.299 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.324 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.325 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.418 232432 DEBUG oslo_concurrency.processutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.478 232432 DEBUG nova.compute.manager [req-1792b025-3a82-45ed-b260-fc1aa99887a6 req-f4d8c67b-7f4f-4580-9bd2-ac53fbaf4aa7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 800db21b-571e-4663-b1e4-846ab92adf04] Received event network-vif-deleted-2a77d11e-13e0-4537-b43a-bb0ebbaf5617 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:45:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2537271376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.737 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/195434183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.884 232432 DEBUG oslo_concurrency.processutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.891 232432 DEBUG nova.compute.provider_tree [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.915 232432 DEBUG nova.scheduler.client.report [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.941 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.989 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.990 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4892MB free_disk=20.90129852294922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.991 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.991 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:21 compute-2 nova_compute[232428]: 2025-11-29 07:45:21.993 232432 INFO nova.scheduler.client.report [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Deleted allocations for instance 800db21b-571e-4663-b1e4-846ab92adf04
Nov 29 07:45:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:22.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.138 232432 DEBUG oslo_concurrency.lockutils [None req-21ad4c4a-f0bd-4d09-8121-dc5f4f3ddfb0 221a3978e81b4d679382df9385da9946 e0953a8181d0404daeae16ad65c53823 - - default default] Lock "800db21b-571e-4663-b1e4-846ab92adf04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 30.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.187 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.188 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.227 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.539 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3844664053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.691 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.698 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.729 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.775 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.776 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:22 compute-2 nova_compute[232428]: 2025-11-29 07:45:22.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:23.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:24.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:24 compute-2 nova_compute[232428]: 2025-11-29 07:45:24.743 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:24 compute-2 nova_compute[232428]: 2025-11-29 07:45:24.744 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:24 compute-2 nova_compute[232428]: 2025-11-29 07:45:24.744 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:45:25 compute-2 nova_compute[232428]: 2025-11-29 07:45:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:25 compute-2 ceph-mon[77138]: pgmap v1221: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 07:45:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1568009851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3695214977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:25 compute-2 ceph-mon[77138]: pgmap v1222: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 07:45:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:27 compute-2 nova_compute[232428]: 2025-11-29 07:45:27.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:27 compute-2 nova_compute[232428]: 2025-11-29 07:45:27.542 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:27 compute-2 nova_compute[232428]: 2025-11-29 07:45:27.871 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:28 compute-2 ceph-mon[77138]: pgmap v1223: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 07:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2537271376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/195434183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3844664053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1099044415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:28 compute-2 ceph-mon[77138]: pgmap v1224: 305 pgs: 305 active+clean; 191 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Nov 29 07:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/833253687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:28 compute-2 ceph-mon[77138]: pgmap v1225: 305 pgs: 305 active+clean; 140 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 714 KiB/s wr, 35 op/s
Nov 29 07:45:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:30 compute-2 ceph-mon[77138]: pgmap v1226: 305 pgs: 305 active+clean; 140 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 714 KiB/s wr, 29 op/s
Nov 29 07:45:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/797657266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:45:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/797657266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:45:30 compute-2 ceph-mon[77138]: pgmap v1227: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 29 07:45:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:32 compute-2 ceph-mon[77138]: pgmap v1228: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 26 KiB/s wr, 93 op/s
Nov 29 07:45:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:32 compute-2 nova_compute[232428]: 2025-11-29 07:45:32.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:32 compute-2 nova_compute[232428]: 2025-11-29 07:45:32.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:34 compute-2 ceph-mon[77138]: pgmap v1229: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 25 KiB/s wr, 114 op/s
Nov 29 07:45:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1714524971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:34 compute-2 podman[241122]: 2025-11-29 07:45:34.65527515 +0000 UTC m=+0.059069265 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:45:34 compute-2 sudo[241143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:35 compute-2 sudo[241143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:35 compute-2 sudo[241143]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:35 compute-2 sudo[241168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:35 compute-2 sudo[241168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:35 compute-2 sudo[241168]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:35 compute-2 sudo[241194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:35 compute-2 sudo[241194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:35 compute-2 sudo[241194]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:35 compute-2 sudo[241219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:45:35 compute-2 sudo[241219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:35 compute-2 sudo[241219]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:45:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:45:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:36 compute-2 ceph-mon[77138]: pgmap v1230: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 163 op/s
Nov 29 07:45:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/305277659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:37 compute-2 nova_compute[232428]: 2025-11-29 07:45:37.546 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:37 compute-2 podman[241245]: 2025-11-29 07:45:37.68102626 +0000 UTC m=+0.085376919 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:45:37 compute-2 ceph-mon[77138]: pgmap v1231: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 153 op/s
Nov 29 07:45:37 compute-2 nova_compute[232428]: 2025-11-29 07:45:37.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:37.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:39 compute-2 ceph-mon[77138]: pgmap v1232: 305 pgs: 305 active+clean; 151 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 692 KiB/s wr, 157 op/s
Nov 29 07:45:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:42 compute-2 ceph-mon[77138]: pgmap v1233: 305 pgs: 305 active+clean; 181 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Nov 29 07:45:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:42 compute-2 nova_compute[232428]: 2025-11-29 07:45:42.549 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:42 compute-2 nova_compute[232428]: 2025-11-29 07:45:42.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:43.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:44 compute-2 ceph-mon[77138]: pgmap v1234: 305 pgs: 305 active+clean; 187 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Nov 29 07:45:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:46.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2175026816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4264991800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1000503346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:47 compute-2 ceph-mon[77138]: pgmap v1235: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.8 MiB/s wr, 117 op/s
Nov 29 07:45:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/899598920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:47 compute-2 nova_compute[232428]: 2025-11-29 07:45:47.551 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:47 compute-2 nova_compute[232428]: 2025-11-29 07:45:47.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:45:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:45:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:49 compute-2 ceph-mon[77138]: pgmap v1236: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 4.8 MiB/s wr, 67 op/s
Nov 29 07:45:49 compute-2 podman[241272]: 2025-11-29 07:45:49.703306194 +0000 UTC m=+0.102306132 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:45:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.060 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.061 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.078 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:45:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:50.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.209 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.209 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.217 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.217 232432 INFO nova.compute.claims [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.329 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1883990742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.764 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.771 232432 DEBUG nova.compute.provider_tree [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:50 compute-2 ceph-mon[77138]: pgmap v1237: 305 pgs: 305 active+clean; 216 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 5.2 MiB/s wr, 76 op/s
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.788 232432 DEBUG nova.scheduler.client.report [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.819 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.820 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.881 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:45:50 compute-2 nova_compute[232428]: 2025-11-29 07:45:50.881 232432 DEBUG nova.network.neutron [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.085 232432 INFO nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.111 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.214 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.216 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.216 232432 INFO nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Creating image(s)
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.245 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.280 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.312 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.317 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.389 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.390 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.391 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.391 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.422 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.426 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.453 232432 DEBUG nova.network.neutron [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.454 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.730 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1883990742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:51 compute-2 ceph-mon[77138]: pgmap v1238: 305 pgs: 305 active+clean; 232 MiB data, 368 MiB used, 21 GiB / 21 GiB avail; 436 KiB/s rd, 5.3 MiB/s wr, 116 op/s
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.796 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] resizing rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.897 232432 DEBUG nova.objects.instance [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lazy-loading 'migration_context' on Instance uuid b5eb4acc-8b3c-42f0-8c0e-bc362446a430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.912 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.913 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Ensure instance console log exists: /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.914 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.914 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.915 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.917 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.924 232432 WARNING nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.928 232432 DEBUG nova.virt.libvirt.host [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.929 232432 DEBUG nova.virt.libvirt.host [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.932 232432 DEBUG nova.virt.libvirt.host [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.932 232432 DEBUG nova.virt.libvirt.host [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.935 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.935 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.936 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.936 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.937 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.937 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.937 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.938 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.938 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.938 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.939 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.939 232432 DEBUG nova.virt.hardware [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:45:51 compute-2 nova_compute[232428]: 2025-11-29 07:45:51.944 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:51.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:52.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:45:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3595398153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.397 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.424 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.429 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:45:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3333490600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.876 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.878 232432 DEBUG nova.objects.instance [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lazy-loading 'pci_devices' on Instance uuid b5eb4acc-8b3c-42f0-8c0e-bc362446a430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.882 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:52 compute-2 nova_compute[232428]: 2025-11-29 07:45:52.971 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <uuid>b5eb4acc-8b3c-42f0-8c0e-bc362446a430</uuid>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <name>instance-0000000c</name>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-394531221</nova:name>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:45:51</nova:creationTime>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:user uuid="37822b5c62cd45aebbcbd953e06c4516">tempest-DeleteServersAdminTestJSON-119371266-project-member</nova:user>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <nova:project uuid="a1de18be9de849f9885ffa928cd531bb">tempest-DeleteServersAdminTestJSON-119371266</nova:project>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <system>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="serial">b5eb4acc-8b3c-42f0-8c0e-bc362446a430</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="uuid">b5eb4acc-8b3c-42f0-8c0e-bc362446a430</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </system>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <os>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </os>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <features>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </features>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk">
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </source>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:45:52 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config">
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </source>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:45:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/console.log" append="off"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <video>
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </video>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:45:52 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:45:52 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:45:52 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:45:52 compute-2 nova_compute[232428]: </domain>
Nov 29 07:45:52 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:45:52 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:45:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3595398153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3333490600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.027 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.028 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.028 232432 INFO nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Using config drive
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.052 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.533 232432 INFO nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Creating config drive at /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.539 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4fqvi5r1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.672 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4fqvi5r1" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.702 232432 DEBUG nova.storage.rbd_utils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:53 compute-2 nova_compute[232428]: 2025-11-29 07:45:53.706 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:54.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:54 compute-2 nova_compute[232428]: 2025-11-29 07:45:54.235 232432 DEBUG oslo_concurrency.processutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config b5eb4acc-8b3c-42f0-8c0e-bc362446a430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:54 compute-2 nova_compute[232428]: 2025-11-29 07:45:54.237 232432 INFO nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Deleting local config drive /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430/disk.config because it was imported into RBD.
Nov 29 07:45:54 compute-2 systemd-machined[194747]: New machine qemu-4-instance-0000000c.
Nov 29 07:45:54 compute-2 systemd[1]: Started Virtual Machine qemu-4-instance-0000000c.
Nov 29 07:45:54 compute-2 ceph-mon[77138]: pgmap v1239: 305 pgs: 305 active+clean; 254 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 145 op/s
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.180 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402355.1785998, b5eb4acc-8b3c-42f0-8c0e-bc362446a430 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.181 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] VM Resumed (Lifecycle Event)
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.184 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.185 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.192 232432 INFO nova.virt.libvirt.driver [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance spawned successfully.
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.192 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:45:55 compute-2 sudo[241668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:55 compute-2 sudo[241668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:55 compute-2 sudo[241668]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.209 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.216 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.220 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.221 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.221 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.222 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.222 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.222 232432 DEBUG nova.virt.libvirt.driver [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.245 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.246 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402355.180798, b5eb4acc-8b3c-42f0-8c0e-bc362446a430 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.246 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] VM Started (Lifecycle Event)
Nov 29 07:45:55 compute-2 sudo[241694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:55 compute-2 sudo[241694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:55 compute-2 sudo[241694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.274 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.280 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.287 232432 INFO nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Took 4.07 seconds to spawn the instance on the hypervisor.
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.287 232432 DEBUG nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.319 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.364 232432 INFO nova.compute.manager [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Took 5.20 seconds to build instance.
Nov 29 07:45:55 compute-2 nova_compute[232428]: 2025-11-29 07:45:55.403 232432 DEBUG oslo_concurrency.lockutils [None req-7c6dbab4-5262-4af8-ad39-873f8eca02c1 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:55 compute-2 ceph-mon[77138]: pgmap v1240: 305 pgs: 305 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 217 op/s
Nov 29 07:45:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:56.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.548 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Acquiring lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.549 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.549 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Acquiring lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.550 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.550 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.551 232432 INFO nova.compute.manager [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Terminating instance
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.552 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Acquiring lock "refresh_cache-b5eb4acc-8b3c-42f0-8c0e-bc362446a430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.552 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Acquired lock "refresh_cache-b5eb4acc-8b3c-42f0-8c0e-bc362446a430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.552 232432 DEBUG nova.network.neutron [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.672 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.673 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.692 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.754 232432 DEBUG nova.network.neutron [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.769 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.769 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.775 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.776 232432 INFO nova.compute.claims [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:45:56 compute-2 nova_compute[232428]: 2025-11-29 07:45:56.886 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.249 232432 DEBUG nova.network.neutron [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.269 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Releasing lock "refresh_cache-b5eb4acc-8b3c-42f0-8c0e-bc362446a430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.270 232432 DEBUG nova.compute.manager [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:45:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/272868164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.323 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.328 232432 DEBUG nova.compute.provider_tree [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:57 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 29 07:45:57 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000c.scope: Consumed 2.903s CPU time.
Nov 29 07:45:57 compute-2 systemd-machined[194747]: Machine qemu-4-instance-0000000c terminated.
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.342 232432 DEBUG nova.scheduler.client.report [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.369 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.369 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:45:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/272868164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.421 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.422 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.444 232432 INFO nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.463 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.490 232432 INFO nova.virt.libvirt.driver [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance destroyed successfully.
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.490 232432 DEBUG nova.objects.instance [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lazy-loading 'resources' on Instance uuid b5eb4acc-8b3c-42f0-8c0e-bc362446a430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.584 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.586 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.588 232432 INFO nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Creating image(s)
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.661 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.696 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.724 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.728 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.761 232432 DEBUG nova.policy [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd94c707cca604d72a8e1d49b636095e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96ea84545e71401fb69d21be6e2472f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.799 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.800 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.801 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.801 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.829 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.833 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:57 compute-2 nova_compute[232428]: 2025-11-29 07:45:57.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:45:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:57.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:45:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:45:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:58.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:45:58 compute-2 ceph-mon[77138]: pgmap v1241: 305 pgs: 305 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.473 232432 INFO nova.virt.libvirt.driver [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Deleting instance files /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430_del
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.473 232432 INFO nova.virt.libvirt.driver [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Deletion of /var/lib/nova/instances/b5eb4acc-8b3c-42f0-8c0e-bc362446a430_del complete
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.568 232432 INFO nova.compute.manager [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Took 1.30 seconds to destroy the instance on the hypervisor.
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.569 232432 DEBUG oslo.service.loopingcall [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.570 232432 DEBUG nova.compute.manager [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.570 232432 DEBUG nova.network.neutron [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.581 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.650 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Successfully created port: 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.658 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.765 232432 DEBUG nova.objects.instance [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.778 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.778 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Ensure instance console log exists: /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.778 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.779 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.779 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.856 232432 DEBUG nova.network.neutron [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.883 232432 DEBUG nova.network.neutron [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.899 232432 INFO nova.compute.manager [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Took 0.33 seconds to deallocate network for instance.
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.948 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:58 compute-2 nova_compute[232428]: 2025-11-29 07:45:58.949 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.010 232432 DEBUG oslo_concurrency.processutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3215431691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.451 232432 DEBUG oslo_concurrency.processutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.458 232432 DEBUG nova.compute.provider_tree [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.488 232432 DEBUG nova.scheduler.client.report [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.518 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.551 232432 INFO nova.scheduler.client.report [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Deleted allocations for instance b5eb4acc-8b3c-42f0-8c0e-bc362446a430
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.563 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Successfully updated port: 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.673 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.674 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquired lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.674 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.736 232432 DEBUG nova.compute.manager [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-changed-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.737 232432 DEBUG nova.compute.manager [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Refreshing instance network info cache due to event network-changed-6ee8db63-b095-48c4-b9d5-fc8ed17f9925. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.737 232432 DEBUG oslo_concurrency.lockutils [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:45:59 compute-2 ceph-mon[77138]: pgmap v1242: 305 pgs: 305 active+clean; 284 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 210 op/s
Nov 29 07:45:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3215431691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.757376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359757503, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 251, "total_data_size": 3594886, "memory_usage": 3627736, "flush_reason": "Manual Compaction"}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 29 07:45:59 compute-2 nova_compute[232428]: 2025-11-29 07:45:59.760 232432 DEBUG oslo_concurrency.lockutils [None req-da29fc40-9ca2-4a46-b1af-4f573a6d8957 24011a14da8a4ba78de5834d57dce27d fe30fdc530cb4e5286ac73e333d4aa4b - - default default] Lock "b5eb4acc-8b3c-42f0-8c0e-bc362446a430" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359773949, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2358792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22089, "largest_seqno": 23621, "table_properties": {"data_size": 2352056, "index_size": 3807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15172, "raw_average_key_size": 20, "raw_value_size": 2338385, "raw_average_value_size": 3194, "num_data_blocks": 169, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402162, "oldest_key_time": 1764402162, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16668 microseconds, and 5826 cpu microseconds.
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.774018) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2358792 bytes OK
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.774081) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.775579) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.775595) EVENT_LOG_v1 {"time_micros": 1764402359775589, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.775618) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3587740, prev total WAL file size 3587740, number of live WAL files 2.
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.776760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2303KB)], [42(8500KB)]
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359776850, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 11063560, "oldest_snapshot_seqno": -1}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5204 keys, 8878486 bytes, temperature: kUnknown
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359935676, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 8878486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8843932, "index_size": 20428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131552, "raw_average_key_size": 25, "raw_value_size": 8750111, "raw_average_value_size": 1681, "num_data_blocks": 838, "num_entries": 5204, "num_filter_entries": 5204, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.936453) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8878486 bytes
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.943268) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.6 rd, 55.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 8.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 5721, records dropped: 517 output_compression: NoCompression
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.943300) EVENT_LOG_v1 {"time_micros": 1764402359943286, "job": 24, "event": "compaction_finished", "compaction_time_micros": 158936, "compaction_time_cpu_micros": 22363, "output_level": 6, "num_output_files": 1, "total_output_size": 8878486, "num_input_records": 5721, "num_output_records": 5204, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359944160, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359946075, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.776635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.946193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.946201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.946203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.946205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:45:59.946208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:45:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:45:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:00 compute-2 nova_compute[232428]: 2025-11-29 07:46:00.048 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:46:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.655 232432 DEBUG nova.network.neutron [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updating instance_info_cache with network_info: [{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.679 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Releasing lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.679 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Instance network_info: |[{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.680 232432 DEBUG oslo_concurrency.lockutils [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.680 232432 DEBUG nova.network.neutron [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Refreshing network info cache for port 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.683 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Start _get_guest_xml network_info=[{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.688 232432 WARNING nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.693 232432 DEBUG nova.virt.libvirt.host [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.694 232432 DEBUG nova.virt.libvirt.host [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.698 232432 DEBUG nova.virt.libvirt.host [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.698 232432 DEBUG nova.virt.libvirt.host [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.700 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.700 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.700 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.701 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.701 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.701 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.701 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.701 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.702 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.702 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.702 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.703 232432 DEBUG nova.virt.hardware [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:46:01 compute-2 nova_compute[232428]: 2025-11-29 07:46:01.706 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:01 compute-2 ceph-mon[77138]: pgmap v1243: 305 pgs: 305 active+clean; 277 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 295 op/s
Nov 29 07:46:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:46:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/236522634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.139 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:02.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.164 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.168 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:46:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/791247964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.591 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.594 232432 DEBUG nova.virt.libvirt.vif [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1309787289',display_name='tempest-ServersAdminTestJSON-server-1309787289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1309787289',id=13,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-yxj9118z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:57Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.594 232432 DEBUG nova.network.os_vif_util [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.596 232432 DEBUG nova.network.os_vif_util [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.597 232432 DEBUG nova.objects.instance [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.624 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <uuid>b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45</uuid>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <name>instance-0000000d</name>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersAdminTestJSON-server-1309787289</nova:name>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:46:01</nova:creationTime>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <nova:port uuid="6ee8db63-b095-48c4-b9d5-fc8ed17f9925">
Nov 29 07:46:02 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <system>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="serial">b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="uuid">b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </system>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <os>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </os>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <features>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </features>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk">
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </source>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config">
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </source>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:46:02 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:c8:27:14"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <target dev="tap6ee8db63-b0"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/console.log" append="off"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <video>
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </video>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:46:02 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:46:02 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:46:02 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:46:02 compute-2 nova_compute[232428]: </domain>
Nov 29 07:46:02 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.625 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Preparing to wait for external event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.626 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.626 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.627 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.628 232432 DEBUG nova.virt.libvirt.vif [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1309787289',display_name='tempest-ServersAdminTestJSON-server-1309787289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1309787289',id=13,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-yxj9118z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:57Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.628 232432 DEBUG nova.network.os_vif_util [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.629 232432 DEBUG nova.network.os_vif_util [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.629 232432 DEBUG os_vif [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.630 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.631 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.635 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.636 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ee8db63-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.636 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ee8db63-b0, col_values=(('external_ids', {'iface-id': '6ee8db63-b095-48c4-b9d5-fc8ed17f9925', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:27:14', 'vm-uuid': 'b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.638 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:02 compute-2 NetworkManager[48993]: <info>  [1764402362.6393] manager: (tap6ee8db63-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.639 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.645 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.647 232432 INFO os_vif [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0')
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.728 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.728 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.729 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:c8:27:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.729 232432 INFO nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Using config drive
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.753 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:02 compute-2 nova_compute[232428]: 2025-11-29 07:46:02.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.175 232432 INFO nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Creating config drive at /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config
Nov 29 07:46:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/236522634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/791247964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.183 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwvndxwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.215 232432 DEBUG nova.network.neutron [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updated VIF entry in instance network info cache for port 6ee8db63-b095-48c4-b9d5-fc8ed17f9925. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.216 232432 DEBUG nova.network.neutron [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updating instance_info_cache with network_info: [{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.235 232432 DEBUG oslo_concurrency.lockutils [req-01f08947-379c-40a7-a793-29a301f6ce93 req-36383016-a1cb-4120-847b-08d02a79027d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.292 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.293 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.294 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.315 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwvndxwn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.346 232432 DEBUG nova.storage.rbd_utils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.350 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.548 232432 DEBUG oslo_concurrency.processutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.549 232432 INFO nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Deleting local config drive /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45/disk.config because it was imported into RBD.
Nov 29 07:46:03 compute-2 kernel: tap6ee8db63-b0: entered promiscuous mode
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.6044] manager: (tap6ee8db63-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 07:46:03 compute-2 ovn_controller[134375]: 2025-11-29T07:46:03Z|00055|binding|INFO|Claiming lport 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 for this chassis.
Nov 29 07:46:03 compute-2 ovn_controller[134375]: 2025-11-29T07:46:03Z|00056|binding|INFO|6ee8db63-b095-48c4-b9d5-fc8ed17f9925: Claiming fa:16:3e:c8:27:14 10.100.0.6
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.607 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.619 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:27:14 10.100.0.6'], port_security=['fa:16:3e:c8:27:14 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6ee8db63-b095-48c4-b9d5-fc8ed17f9925) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.620 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.621 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.635 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbd877f-85d1-43b5-8222-5641467d342f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.636 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap788595a6-81 in ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.637 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap788595a6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.637 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[505326c2-0537-4aef-8791-0fb6fd239b81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.638 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8cd54f-09fd-4dde-98e8-5d101ad3a64a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 systemd-machined[194747]: New machine qemu-5-instance-0000000d.
Nov 29 07:46:03 compute-2 systemd-udevd[242090]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:46:03 compute-2 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.653 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a007ae0f-0691-4818-953d-f71165ede695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.6656] device (tap6ee8db63-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.6679] device (tap6ee8db63-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.682 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[94a036b6-ec0f-4deb-bd5c-b1e40a3c686f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_controller[134375]: 2025-11-29T07:46:03Z|00057|binding|INFO|Setting lport 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 ovn-installed in OVS
Nov 29 07:46:03 compute-2 ovn_controller[134375]: 2025-11-29T07:46:03Z|00058|binding|INFO|Setting lport 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 up in Southbound
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.713 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.717 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c16ccd15-7a1d-4473-b4a2-a30cce039f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 systemd-udevd[242094]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.724 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd32feb-0eac-467d-b9f2-d25750d35451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.7262] manager: (tap788595a6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.752 232432 DEBUG oslo_concurrency.processutils [None req-8029f564-1cc4-444f-8ffc-1068b8803bb0 c68c464a4d3d4d389bd9a7fd5ec8e20e 34118ff1773f4b8d815deece73a5ae07 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.764 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[54fa7e1c-8767-4af1-9ac7-f9f4a72fe9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.770 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d54034f1-8b62-4866-b6ac-aa35c3db47d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.7941] device (tap788595a6-80): carrier: link connected
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.800 232432 DEBUG oslo_concurrency.processutils [None req-8029f564-1cc4-444f-8ffc-1068b8803bb0 c68c464a4d3d4d389bd9a7fd5ec8e20e 34118ff1773f4b8d815deece73a5ae07 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.800 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5b32c739-c9c5-4dd9-a10f-e16f5e80deb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.822 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[53de0fda-722a-49c7-8e8b-210d62aa2d73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533006, 'reachable_time': 42261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242123, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.842 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e37f110b-7f22-499e-ab00-6e166153ee8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:529d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533006, 'tstamp': 533006}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242124, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.864 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba0422-257c-4813-b271-fef290f77348]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533006, 'reachable_time': 42261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242125, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.901 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79529c-fe6b-46d9-9bc6-fe0600d0c245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.974 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c7979c-b148-4267-a431-e07d75692360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.976 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.976 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.977 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:03 compute-2 NetworkManager[48993]: <info>  [1764402363.9798] manager: (tap788595a6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.979 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 kernel: tap788595a6-80: entered promiscuous mode
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:03.985 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:03 compute-2 nova_compute[232428]: 2025-11-29 07:46:03.986 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:03 compute-2 ovn_controller[134375]: 2025-11-29T07:46:03Z|00059|binding|INFO|Releasing lport 4a1365a2-9549-4214-ba8d-c7bb361501a6 from this chassis (sb_readonly=0)
Nov 29 07:46:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:04.002 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:04.004 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b963d0-0ea6-4b38-a84b-addbab50660f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:04.005 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:46:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:04.006 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'env', 'PROCESS_TAG=haproxy-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/788595a6-8f3f-45f7-807d-f88c9bf0e050.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.020 232432 DEBUG nova.compute.manager [req-90255c06-81e6-44b6-b9ac-6ac7f11aa28f req-29b9ea1b-1d55-4664-a260-a691f67cf428 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.020 232432 DEBUG oslo_concurrency.lockutils [req-90255c06-81e6-44b6-b9ac-6ac7f11aa28f req-29b9ea1b-1d55-4664-a260-a691f67cf428 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.021 232432 DEBUG oslo_concurrency.lockutils [req-90255c06-81e6-44b6-b9ac-6ac7f11aa28f req-29b9ea1b-1d55-4664-a260-a691f67cf428 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.021 232432 DEBUG oslo_concurrency.lockutils [req-90255c06-81e6-44b6-b9ac-6ac7f11aa28f req-29b9ea1b-1d55-4664-a260-a691f67cf428 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:04 compute-2 nova_compute[232428]: 2025-11-29 07:46:04.021 232432 DEBUG nova.compute.manager [req-90255c06-81e6-44b6-b9ac-6ac7f11aa28f req-29b9ea1b-1d55-4664-a260-a691f67cf428 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Processing event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:46:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:04.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:04 compute-2 ceph-mon[77138]: pgmap v1244: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 254 op/s
Nov 29 07:46:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3769101896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:04 compute-2 podman[242156]: 2025-11-29 07:46:04.431763978 +0000 UTC m=+0.054181982 container create 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:46:04 compute-2 systemd[1]: Started libpod-conmon-5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac.scope.
Nov 29 07:46:04 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:46:04 compute-2 podman[242156]: 2025-11-29 07:46:04.401342643 +0000 UTC m=+0.023760667 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:46:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9719f4d32ac034548d9dded680b62d8097ec25a5718eddc4384f2b4047c976cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:04 compute-2 podman[242156]: 2025-11-29 07:46:04.518414987 +0000 UTC m=+0.140833001 container init 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:46:04 compute-2 podman[242156]: 2025-11-29 07:46:04.525773737 +0000 UTC m=+0.148191741 container start 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:46:04 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [NOTICE]   (242175) : New worker (242177) forked
Nov 29 07:46:04 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [NOTICE]   (242175) : Loading success.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.075 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402365.0753057, b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.076 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] VM Started (Lifecycle Event)
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.078 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.081 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.085 232432 INFO nova.virt.libvirt.driver [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Instance spawned successfully.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.086 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.100 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.104 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.114 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.115 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.116 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.116 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.117 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.118 232432 DEBUG nova.virt.libvirt.driver [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.123 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.123 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402365.0755436, b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.123 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] VM Paused (Lifecycle Event)
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.151 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.155 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402365.0808105, b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.156 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] VM Resumed (Lifecycle Event)
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.177 232432 INFO nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Took 7.59 seconds to spawn the instance on the hypervisor.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.178 232432 DEBUG nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.188 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.192 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.229 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.268 232432 INFO nova.compute.manager [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Took 8.52 seconds to build instance.
Nov 29 07:46:05 compute-2 nova_compute[232428]: 2025-11-29 07:46:05.288 232432 DEBUG oslo_concurrency.lockutils [None req-a2e7d91a-616f-4561-9b06-6b35d7f1fe98 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/16859782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:05 compute-2 podman[242229]: 2025-11-29 07:46:05.670147266 +0000 UTC m=+0.057346401 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 07:46:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:06.100 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:46:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:06.101 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:46:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.212 232432 DEBUG nova.compute.manager [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.213 232432 DEBUG oslo_concurrency.lockutils [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.213 232432 DEBUG oslo_concurrency.lockutils [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.213 232432 DEBUG oslo_concurrency.lockutils [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.213 232432 DEBUG nova.compute.manager [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] No waiting events found dispatching network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:46:06 compute-2 nova_compute[232428]: 2025-11-29 07:46:06.214 232432 WARNING nova.compute.manager [req-18e50d42-377a-4f6e-85e0-5e6b2bbc35b0 req-a8dcd17a-0470-4077-aa6a-28ac6f8687a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received unexpected event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 for instance with vm_state active and task_state None.
Nov 29 07:46:06 compute-2 ceph-mon[77138]: pgmap v1245: 305 pgs: 305 active+clean; 350 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.8 MiB/s wr, 271 op/s
Nov 29 07:46:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3957404881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:07 compute-2 nova_compute[232428]: 2025-11-29 07:46:07.067 232432 DEBUG oslo_concurrency.lockutils [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] Acquiring lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:46:07 compute-2 nova_compute[232428]: 2025-11-29 07:46:07.067 232432 DEBUG oslo_concurrency.lockutils [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] Acquired lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:46:07 compute-2 nova_compute[232428]: 2025-11-29 07:46:07.068 232432 DEBUG nova.network.neutron [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:46:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:07 compute-2 nova_compute[232428]: 2025-11-29 07:46:07.639 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:07 compute-2 nova_compute[232428]: 2025-11-29 07:46:07.885 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:08.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:08.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:08 compute-2 ceph-mon[77138]: pgmap v1246: 305 pgs: 305 active+clean; 350 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 182 op/s
Nov 29 07:46:08 compute-2 podman[242249]: 2025-11-29 07:46:08.666341858 +0000 UTC m=+0.071214545 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:46:08 compute-2 nova_compute[232428]: 2025-11-29 07:46:08.687 232432 DEBUG nova.network.neutron [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updating instance_info_cache with network_info: [{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:08 compute-2 nova_compute[232428]: 2025-11-29 07:46:08.718 232432 DEBUG oslo_concurrency.lockutils [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] Releasing lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:08 compute-2 nova_compute[232428]: 2025-11-29 07:46:08.718 232432 DEBUG nova.compute.manager [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 29 07:46:08 compute-2 nova_compute[232428]: 2025-11-29 07:46:08.718 232432 DEBUG nova.compute.manager [None req-708ece45-0ef4-4967-91ca-e1cb72c4621d 3821a242c26441f5a13b1ada4483d82b da1d3cc07aab49df911e7ff398f00e54 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] network_info to inject: |[{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 29 07:46:09 compute-2 ceph-mon[77138]: pgmap v1247: 305 pgs: 305 active+clean; 366 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 230 op/s
Nov 29 07:46:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:10.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:10 compute-2 sshd-session[242271]: Invalid user sol from 45.148.10.240 port 38232
Nov 29 07:46:10 compute-2 sshd-session[242271]: Connection closed by invalid user sol 45.148.10.240 port 38232 [preauth]
Nov 29 07:46:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:12.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:12.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:12 compute-2 nova_compute[232428]: 2025-11-29 07:46:12.488 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402357.4871056, b5eb4acc-8b3c-42f0-8c0e-bc362446a430 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:46:12 compute-2 nova_compute[232428]: 2025-11-29 07:46:12.489 232432 INFO nova.compute.manager [-] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] VM Stopped (Lifecycle Event)
Nov 29 07:46:12 compute-2 nova_compute[232428]: 2025-11-29 07:46:12.515 232432 DEBUG nova.compute.manager [None req-8d6d5e63-b841-4b42-9d30-86d8e562daca - - - - - -] [instance: b5eb4acc-8b3c-42f0-8c0e-bc362446a430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:46:12 compute-2 nova_compute[232428]: 2025-11-29 07:46:12.671 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:12 compute-2 nova_compute[232428]: 2025-11-29 07:46:12.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:13 compute-2 ceph-mon[77138]: pgmap v1248: 305 pgs: 305 active+clean; 372 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 318 op/s
Nov 29 07:46:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/761511784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:14.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:14 compute-2 nova_compute[232428]: 2025-11-29 07:46:14.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:14 compute-2 ceph-mon[77138]: pgmap v1249: 305 pgs: 305 active+clean; 361 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 243 op/s
Nov 29 07:46:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:46:15.103 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:15 compute-2 sudo[242276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:15 compute-2 sudo[242276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:15 compute-2 sudo[242276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:15 compute-2 sudo[242301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:15 compute-2 sudo[242301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:15 compute-2 sudo[242301]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:16 compute-2 ceph-mon[77138]: pgmap v1250: 305 pgs: 305 active+clean; 326 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Nov 29 07:46:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:17 compute-2 ceph-mon[77138]: pgmap v1251: 305 pgs: 305 active+clean; 326 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 201 op/s
Nov 29 07:46:17 compute-2 nova_compute[232428]: 2025-11-29 07:46:17.703 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:17 compute-2 nova_compute[232428]: 2025-11-29 07:46:17.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:18.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.164154) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378164206, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 434, "num_deletes": 260, "total_data_size": 466513, "memory_usage": 476256, "flush_reason": "Manual Compaction"}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378169098, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 307523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23626, "largest_seqno": 24055, "table_properties": {"data_size": 305138, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5558, "raw_average_key_size": 17, "raw_value_size": 300316, "raw_average_value_size": 926, "num_data_blocks": 22, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402361, "oldest_key_time": 1764402361, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5025 microseconds, and 2124 cpu microseconds.
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169170) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 307523 bytes OK
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169205) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170608) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170624) EVENT_LOG_v1 {"time_micros": 1764402378170619, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 463767, prev total WAL file size 463767, number of live WAL files 2.
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.171115) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(300KB)], [45(8670KB)]
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378171203, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 9186009, "oldest_snapshot_seqno": -1}
Nov 29 07:46:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:18 compute-2 nova_compute[232428]: 2025-11-29 07:46:18.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:18 compute-2 nova_compute[232428]: 2025-11-29 07:46:18.218 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5000 keys, 9033650 bytes, temperature: kUnknown
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378236959, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 9033650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8999750, "index_size": 20286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 128523, "raw_average_key_size": 25, "raw_value_size": 8908804, "raw_average_value_size": 1781, "num_data_blocks": 827, "num_entries": 5000, "num_filter_entries": 5000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.237390) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9033650 bytes
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.238995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 137.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(59.2) write-amplify(29.4) OK, records in: 5528, records dropped: 528 output_compression: NoCompression
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.239028) EVENT_LOG_v1 {"time_micros": 1764402378239013, "job": 26, "event": "compaction_finished", "compaction_time_micros": 65871, "compaction_time_cpu_micros": 25977, "output_level": 6, "num_output_files": 1, "total_output_size": 9033650, "num_input_records": 5528, "num_output_records": 5000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378239417, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378242109, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.171029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.242153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.242160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.242163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.242165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:46:18.242168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:46:19 compute-2 ovn_controller[134375]: 2025-11-29T07:46:19Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:27:14 10.100.0.6
Nov 29 07:46:19 compute-2 ovn_controller[134375]: 2025-11-29T07:46:19Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:27:14 10.100.0.6
Nov 29 07:46:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:20 compute-2 nova_compute[232428]: 2025-11-29 07:46:20.235 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:20 compute-2 podman[242328]: 2025-11-29 07:46:20.7447093 +0000 UTC m=+0.138471676 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:46:21 compute-2 ceph-mon[77138]: pgmap v1252: 305 pgs: 305 active+clean; 332 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Nov 29 07:46:21 compute-2 nova_compute[232428]: 2025-11-29 07:46:21.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:21 compute-2 nova_compute[232428]: 2025-11-29 07:46:21.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:46:21 compute-2 nova_compute[232428]: 2025-11-29 07:46:21.223 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:46:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:22.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:22 compute-2 nova_compute[232428]: 2025-11-29 07:46:22.213 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:22 compute-2 nova_compute[232428]: 2025-11-29 07:46:22.213 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:22 compute-2 nova_compute[232428]: 2025-11-29 07:46:22.240 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:22 compute-2 ceph-mon[77138]: pgmap v1253: 305 pgs: 305 active+clean; 301 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 230 op/s
Nov 29 07:46:22 compute-2 nova_compute[232428]: 2025-11-29 07:46:22.747 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:22 compute-2 nova_compute[232428]: 2025-11-29 07:46:22.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.591 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.592 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.592 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:46:23 compute-2 nova_compute[232428]: 2025-11-29 07:46:23.592 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:46:23 compute-2 ceph-mon[77138]: pgmap v1254: 305 pgs: 305 active+clean; 279 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 866 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 29 07:46:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:24.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.234 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updating instance_info_cache with network_info: [{"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.331 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.331 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.332 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.332 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.332 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.332 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.361 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.361 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.361 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.361 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.362 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1527428038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2539696403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.832 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.905 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:46:25 compute-2 nova_compute[232428]: 2025-11-29 07:46:25.906 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:46:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.069 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.070 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4702MB free_disk=20.85177993774414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.071 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.071 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:26.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.330 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.331 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.331 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.415 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:26 compute-2 ceph-mon[77138]: pgmap v1255: 305 pgs: 305 active+clean; 279 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Nov 29 07:46:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2539696403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2378973378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1884400528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3392973116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.917 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.924 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:46:26 compute-2 nova_compute[232428]: 2025-11-29 07:46:26.945 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:46:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:27 compute-2 nova_compute[232428]: 2025-11-29 07:46:27.428 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:46:27 compute-2 nova_compute[232428]: 2025-11-29 07:46:27.428 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:27 compute-2 nova_compute[232428]: 2025-11-29 07:46:27.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3392973116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2635811855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/970400620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:27 compute-2 ceph-mon[77138]: pgmap v1256: 305 pgs: 305 active+clean; 279 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 07:46:27 compute-2 nova_compute[232428]: 2025-11-29 07:46:27.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276193479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276193479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:28 compute-2 nova_compute[232428]: 2025-11-29 07:46:28.299 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:29 compute-2 nova_compute[232428]: 2025-11-29 07:46:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/276193479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/276193479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:30.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:30.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/182931077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:30 compute-2 ceph-mon[77138]: pgmap v1257: 305 pgs: 305 active+clean; 295 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 104 op/s
Nov 29 07:46:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:32.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:32.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:32 compute-2 ceph-mon[77138]: pgmap v1258: 305 pgs: 305 active+clean; 325 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 123 op/s
Nov 29 07:46:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:32 compute-2 nova_compute[232428]: 2025-11-29 07:46:32.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:32 compute-2 nova_compute[232428]: 2025-11-29 07:46:32.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:33 compute-2 ceph-mon[77138]: pgmap v1259: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 29 07:46:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:34.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:35 compute-2 sudo[242408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:35 compute-2 sudo[242408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 sudo[242408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:35 compute-2 sudo[242433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:35 compute-2 sudo[242433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 sudo[242433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:35 compute-2 sudo[242437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:35 compute-2 sudo[242437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 sudo[242437]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:35 compute-2 sudo[242482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:35 compute-2 sudo[242482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 sudo[242482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:35 compute-2 sudo[242489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:35 compute-2 sudo[242489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 sudo[242489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:35 compute-2 sudo[242533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:46:35 compute-2 sudo[242533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:35 compute-2 podman[242557]: 2025-11-29 07:46:35.805012517 +0000 UTC m=+0.070939187 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:46:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:36.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:36 compute-2 sudo[242533]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:36 compute-2 ceph-mon[77138]: pgmap v1260: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 29 07:46:36 compute-2 nova_compute[232428]: 2025-11-29 07:46:36.578 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:36 compute-2 nova_compute[232428]: 2025-11-29 07:46:36.605 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 07:46:36 compute-2 nova_compute[232428]: 2025-11-29 07:46:36.606 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:36 compute-2 nova_compute[232428]: 2025-11-29 07:46:36.606 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:36 compute-2 nova_compute[232428]: 2025-11-29 07:46:36.665 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:46:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:37 compute-2 nova_compute[232428]: 2025-11-29 07:46:37.753 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:37 compute-2 nova_compute[232428]: 2025-11-29 07:46:37.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:38.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:39 compute-2 ceph-mon[77138]: pgmap v1261: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 07:46:39 compute-2 podman[242611]: 2025-11-29 07:46:39.674529213 +0000 UTC m=+0.068555971 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 07:46:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:40.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:40 compute-2 ceph-mon[77138]: pgmap v1262: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 07:46:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:42.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:42 compute-2 nova_compute[232428]: 2025-11-29 07:46:42.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:42 compute-2 ceph-mon[77138]: pgmap v1263: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 99 op/s
Nov 29 07:46:42 compute-2 nova_compute[232428]: 2025-11-29 07:46:42.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:44.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:45 compute-2 ceph-mon[77138]: pgmap v1264: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 69 op/s
Nov 29 07:46:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:46.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:46 compute-2 ceph-mon[77138]: pgmap v1265: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.3 KiB/s wr, 51 op/s
Nov 29 07:46:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:47 compute-2 nova_compute[232428]: 2025-11-29 07:46:47.759 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:47 compute-2 nova_compute[232428]: 2025-11-29 07:46:47.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:48.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:50.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:50.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:50 compute-2 ceph-mon[77138]: pgmap v1266: 305 pgs: 305 active+clean; 326 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 725 KiB/s rd, 75 KiB/s wr, 27 op/s
Nov 29 07:46:51 compute-2 podman[242635]: 2025-11-29 07:46:51.723014959 +0000 UTC m=+0.121193504 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:46:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:52.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:52 compute-2 ceph-mon[77138]: pgmap v1267: 305 pgs: 305 active+clean; 337 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 1.0 MiB/s wr, 16 op/s
Nov 29 07:46:52 compute-2 ceph-mon[77138]: pgmap v1268: 305 pgs: 305 active+clean; 347 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 150 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Nov 29 07:46:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:46:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:46:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:52 compute-2 nova_compute[232428]: 2025-11-29 07:46:52.763 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:52 compute-2 nova_compute[232428]: 2025-11-29 07:46:52.908 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:54.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2689649829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:54.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:55 compute-2 sudo[242667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:55 compute-2 sudo[242667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:55 compute-2 sudo[242667]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:55 compute-2 sudo[242692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:55 compute-2 sudo[242692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:55 compute-2 sudo[242692]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:46:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:46:56 compute-2 ceph-mon[77138]: pgmap v1269: 305 pgs: 305 active+clean; 347 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 245 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Nov 29 07:46:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:46:57 compute-2 nova_compute[232428]: 2025-11-29 07:46:57.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:57 compute-2 nova_compute[232428]: 2025-11-29 07:46:57.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:58.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:46:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:46:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:46:58 compute-2 sudo[242718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:58 compute-2 sudo[242718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:58 compute-2 sudo[242718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-2 ceph-mon[77138]: pgmap v1270: 305 pgs: 305 active+clean; 351 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 07:46:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:46:58 compute-2 sudo[242743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:46:58 compute-2 sudo[242743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:58 compute-2 sudo[242743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:00 compute-2 ceph-mon[77138]: pgmap v1271: 305 pgs: 305 active+clean; 356 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 07:47:00 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:47:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:00.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:01 compute-2 ceph-mon[77138]: pgmap v1272: 305 pgs: 305 active+clean; 359 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 07:47:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:02 compute-2 ceph-mon[77138]: pgmap v1273: 305 pgs: 305 active+clean; 389 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 314 KiB/s rd, 2.2 MiB/s wr, 79 op/s
Nov 29 07:47:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:02 compute-2 nova_compute[232428]: 2025-11-29 07:47:02.803 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:02 compute-2 nova_compute[232428]: 2025-11-29 07:47:02.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3814134550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3570569359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:03.293 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:03.294 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:03.295 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:04.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:04 compute-2 ceph-mon[77138]: pgmap v1274: 305 pgs: 305 active+clean; 405 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 193 KiB/s rd, 1.9 MiB/s wr, 63 op/s
Nov 29 07:47:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:04 compute-2 nova_compute[232428]: 2025-11-29 07:47:04.974 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "dc42f6b3-eda5-409e-aac8-68275e50922e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:04 compute-2 nova_compute[232428]: 2025-11-29 07:47:04.974 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.003 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.075 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.075 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.084 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.085 232432 INFO nova.compute.claims [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.223 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3124604790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.861 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:05 compute-2 nova_compute[232428]: 2025-11-29 07:47:05.872 232432 DEBUG nova.compute.provider_tree [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:47:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2973788246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.023 232432 DEBUG nova.scheduler.client.report [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.057 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.058 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:47:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.115 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.116 232432 DEBUG nova.network.neutron [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.139 232432 INFO nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.158 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:47:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.363 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.365 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.365 232432 INFO nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Creating image(s)
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.397 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.430 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.524 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.529 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.563 232432 DEBUG nova.network.neutron [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.563 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.631 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.632 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.633 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.633 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:06 compute-2 podman[242849]: 2025-11-29 07:47:06.658154919 +0000 UTC m=+0.063971438 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.752 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:06 compute-2 nova_compute[232428]: 2025-11-29 07:47:06.755 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc42f6b3-eda5-409e-aac8-68275e50922e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:07 compute-2 ceph-mon[77138]: pgmap v1275: 305 pgs: 305 active+clean; 386 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 115 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Nov 29 07:47:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4087089054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3124604790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:07 compute-2 nova_compute[232428]: 2025-11-29 07:47:07.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:07 compute-2 nova_compute[232428]: 2025-11-29 07:47:07.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:08.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.223 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc42f6b3-eda5-409e-aac8-68275e50922e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:47:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:08.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:47:08 compute-2 ceph-mon[77138]: pgmap v1276: 305 pgs: 305 active+clean; 381 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 2.9 MiB/s wr, 67 op/s
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.392 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] resizing rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.667 232432 DEBUG nova.objects.instance [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.693 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.693 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Ensure instance console log exists: /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.694 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.694 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.695 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.696 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.702 232432 WARNING nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.707 232432 DEBUG nova.virt.libvirt.host [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.707 232432 DEBUG nova.virt.libvirt.host [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.709 232432 DEBUG nova.virt.libvirt.host [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.710 232432 DEBUG nova.virt.libvirt.host [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.711 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.712 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.712 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.712 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.713 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.713 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.713 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.713 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.713 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.714 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.714 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.714 232432 DEBUG nova.virt.hardware [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:47:08 compute-2 nova_compute[232428]: 2025-11-29 07:47:08.718 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:47:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3391860639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:09 compute-2 nova_compute[232428]: 2025-11-29 07:47:09.229 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:09 compute-2 nova_compute[232428]: 2025-11-29 07:47:09.264 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:09 compute-2 nova_compute[232428]: 2025-11-29 07:47:09.270 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:47:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2431746694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:10 compute-2 nova_compute[232428]: 2025-11-29 07:47:10.045 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:10 compute-2 nova_compute[232428]: 2025-11-29 07:47:10.049 232432 DEBUG nova.objects.instance [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:10.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3391860639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:10 compute-2 ceph-mon[77138]: pgmap v1277: 305 pgs: 305 active+clean; 392 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 724 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Nov 29 07:47:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:10.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:10 compute-2 nova_compute[232428]: 2025-11-29 07:47:10.447 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <uuid>dc42f6b3-eda5-409e-aac8-68275e50922e</uuid>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <name>instance-00000010</name>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-734359268</nova:name>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:47:08</nova:creationTime>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <system>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="serial">dc42f6b3-eda5-409e-aac8-68275e50922e</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="uuid">dc42f6b3-eda5-409e-aac8-68275e50922e</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </system>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <os>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </os>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <features>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </features>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dc42f6b3-eda5-409e-aac8-68275e50922e_disk">
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config">
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:47:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/console.log" append="off"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <video>
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </video>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:47:10 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:47:10 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:47:10 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:47:10 compute-2 nova_compute[232428]: </domain>
Nov 29 07:47:10 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:47:10 compute-2 podman[243043]: 2025-11-29 07:47:10.699126934 +0000 UTC m=+0.090261812 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.267 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.268 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.269 232432 INFO nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Using config drive
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.304 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.792 232432 INFO nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Creating config drive at /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.797 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0zv5s85 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.929 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0zv5s85" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.965 232432 DEBUG nova.storage.rbd_utils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:47:11 compute-2 nova_compute[232428]: 2025-11-29 07:47:11.968 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:12.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:12.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:12.525 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:12.527 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:47:12 compute-2 nova_compute[232428]: 2025-11-29 07:47:12.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:12 compute-2 nova_compute[232428]: 2025-11-29 07:47:12.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:12 compute-2 nova_compute[232428]: 2025-11-29 07:47:12.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2431746694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.645 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Creating tmpfile /var/lib/nova/instances/tmpwgtxewll to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.752 232432 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.779 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.780 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.792 232432 INFO nova.compute.rpcapi [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Nov 29 07:47:13 compute-2 nova_compute[232428]: 2025-11-29 07:47:13.793 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:14.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:14.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:14 compute-2 ovn_controller[134375]: 2025-11-29T07:47:14Z|00060|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 07:47:14 compute-2 ceph-mon[77138]: pgmap v1278: 305 pgs: 305 active+clean; 418 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.4 MiB/s wr, 182 op/s
Nov 29 07:47:14 compute-2 ceph-mon[77138]: pgmap v1279: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Nov 29 07:47:15 compute-2 sudo[243124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:15 compute-2 sudo[243124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:15 compute-2 sudo[243124]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:15 compute-2 sudo[243149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:15 compute-2 sudo[243149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:15 compute-2 sudo[243149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:16.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.242 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.244 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.245 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.245 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.245 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.249 232432 INFO nova.compute.manager [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Terminating instance
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.251 232432 DEBUG nova.compute.manager [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:47:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:16.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:16 compute-2 kernel: tap6ee8db63-b0 (unregistering): left promiscuous mode
Nov 29 07:47:16 compute-2 NetworkManager[48993]: <info>  [1764402436.6989] device (tap6ee8db63-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:47:16 compute-2 ovn_controller[134375]: 2025-11-29T07:47:16Z|00061|binding|INFO|Releasing lport 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 from this chassis (sb_readonly=0)
Nov 29 07:47:16 compute-2 ovn_controller[134375]: 2025-11-29T07:47:16Z|00062|binding|INFO|Setting lport 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 down in Southbound
Nov 29 07:47:16 compute-2 ovn_controller[134375]: 2025-11-29T07:47:16Z|00063|binding|INFO|Removing iface tap6ee8db63-b0 ovn-installed in OVS
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.759 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:16.775 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:27:14 10.100.0.6'], port_security=['fa:16:3e:c8:27:14 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6ee8db63-b095-48c4-b9d5-fc8ed17f9925) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:16.777 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6ee8db63-b095-48c4-b9d5-fc8ed17f9925 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis
Nov 29 07:47:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:16.780 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 788595a6-8f3f-45f7-807d-f88c9bf0e050, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:47:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:16.782 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc98c2af-0897-4dcc-a1ff-873dc464e606]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:16.784 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace which is not needed anymore
Nov 29 07:47:16 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 07:47:16 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 17.697s CPU time.
Nov 29 07:47:16 compute-2 systemd-machined[194747]: Machine qemu-5-instance-0000000d terminated.
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.892 232432 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.899 232432 INFO nova.virt.libvirt.driver [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Instance destroyed successfully.
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.900 232432 DEBUG nova.objects.instance [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.918 232432 DEBUG nova.virt.libvirt.vif [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1309787289',display_name='tempest-ServersAdminTestJSON-server-1309787289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1309787289',id=13,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:46:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-yxj9118z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:46:05Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.918 232432 DEBUG nova.network.os_vif_util [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "address": "fa:16:3e:c8:27:14", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee8db63-b0", "ovs_interfaceid": "6ee8db63-b095-48c4-b9d5-fc8ed17f9925", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.919 232432 DEBUG nova.network.os_vif_util [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.920 232432 DEBUG os_vif [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.922 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.922 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.922 232432 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.923 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.923 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ee8db63-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.927 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-2 nova_compute[232428]: 2025-11-29 07:47:16.930 232432 INFO os_vif [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:27:14,bridge_name='br-int',has_traffic_filtering=True,id=6ee8db63-b095-48c4-b9d5-fc8ed17f9925,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee8db63-b0')
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [NOTICE]   (242175) : haproxy version is 2.8.14-c23fe91
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [NOTICE]   (242175) : path to executable is /usr/sbin/haproxy
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [WARNING]  (242175) : Exiting Master process...
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [WARNING]  (242175) : Exiting Master process...
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [ALERT]    (242175) : Current worker (242177) exited with code 143 (Terminated)
Nov 29 07:47:16 compute-2 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[242171]: [WARNING]  (242175) : All workers exited. Exiting... (0)
Nov 29 07:47:16 compute-2 systemd[1]: libpod-5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac.scope: Deactivated successfully.
Nov 29 07:47:16 compute-2 podman[243198]: 2025-11-29 07:47:16.950495879 +0000 UTC m=+0.054749089 container died 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:47:16 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac-userdata-shm.mount: Deactivated successfully.
Nov 29 07:47:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-9719f4d32ac034548d9dded680b62d8097ec25a5718eddc4384f2b4047c976cf-merged.mount: Deactivated successfully.
Nov 29 07:47:16 compute-2 podman[243198]: 2025-11-29 07:47:16.991761784 +0000 UTC m=+0.096014984 container cleanup 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:47:17 compute-2 systemd[1]: libpod-conmon-5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac.scope: Deactivated successfully.
Nov 29 07:47:17 compute-2 podman[243253]: 2025-11-29 07:47:17.061222824 +0000 UTC m=+0.046511011 container remove 5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.067 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e8587092-ed4e-4b2e-903e-b96f2b742e1b]: (4, ('Sat Nov 29 07:47:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac)\n5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac\nSat Nov 29 07:47:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac)\n5888206553e66f3160ee37d277aad711e85945f20f24d1ebaeffea028f6bf8ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.069 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[39c33403-4a0d-4e43-a6fb-c5ce6a398b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.070 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.072 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:17 compute-2 kernel: tap788595a6-80: left promiscuous mode
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.089 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[35978369-9c6f-474a-a281-9267940e2ecf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.104 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9565b1-263e-466f-8c2a-11324ab78645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.106 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c295f2c1-d0bd-4d32-bae0-6cb6a3302d05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.126 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5588e1e5-6ab4-404a-8ed9-00aa907abb09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532997, 'reachable_time': 28351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243270, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.130 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:47:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:17.130 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[98871520-5357-4dca-b1bd-63bc5837260f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:17 compute-2 systemd[1]: run-netns-ovnmeta\x2d788595a6\x2d8f3f\x2d45f7\x2d807d\x2df88c9bf0e050.mount: Deactivated successfully.
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.194 232432 DEBUG nova.compute.manager [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-unplugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.195 232432 DEBUG oslo_concurrency.lockutils [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.195 232432 DEBUG oslo_concurrency.lockutils [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.195 232432 DEBUG oslo_concurrency.lockutils [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.196 232432 DEBUG nova.compute.manager [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] No waiting events found dispatching network-vif-unplugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.196 232432 DEBUG nova.compute.manager [req-0c5e62f9-905f-4ab1-a0a0-f0cc146e5948 req-33246b44-8476-48e2-9bb3-2794aadcbf91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-unplugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:47:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:17 compute-2 nova_compute[232428]: 2025-11-29 07:47:17.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:18 compute-2 nova_compute[232428]: 2025-11-29 07:47:18.565 232432 DEBUG oslo_concurrency.processutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:18 compute-2 nova_compute[232428]: 2025-11-29 07:47:18.566 232432 INFO nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Deleting local config drive /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/disk.config because it was imported into RBD.
Nov 29 07:47:18 compute-2 systemd-machined[194747]: New machine qemu-6-instance-00000010.
Nov 29 07:47:18 compute-2 systemd[1]: Started Virtual Machine qemu-6-instance-00000010.
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.307 232432 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.341 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.342 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.343 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Creating instance directory: /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.343 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Ensure instance console log exists: /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.344 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.345 232432 DEBUG nova.virt.libvirt.vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:07Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.345 232432 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.346 232432 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.346 232432 DEBUG os_vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.347 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.347 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.349 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.349 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda69d7f6-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.350 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda69d7f6-de, col_values=(('external_ids', {'iface-id': 'da69d7f6-de64-485f-96a1-c51ad9274372', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:b0:27', 'vm-uuid': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:19 compute-2 NetworkManager[48993]: <info>  [1764402439.3534] manager: (tapda69d7f6-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.362 232432 INFO os_vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de')
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.363 232432 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.363 232432 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.412 232432 DEBUG nova.compute.manager [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.413 232432 DEBUG oslo_concurrency.lockutils [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.413 232432 DEBUG oslo_concurrency.lockutils [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.413 232432 DEBUG oslo_concurrency.lockutils [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.414 232432 DEBUG nova.compute.manager [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] No waiting events found dispatching network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:19 compute-2 nova_compute[232428]: 2025-11-29 07:47:19.414 232432 WARNING nova.compute.manager [req-99c63d79-b9b5-45bc-8bb8-250da5cb39f5 req-327e3183-734c-4abb-b211-a3b993c2884b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received unexpected event network-vif-plugged-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 for instance with vm_state active and task_state deleting.
Nov 29 07:47:19 compute-2 ceph-mon[77138]: pgmap v1280: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Nov 29 07:47:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:20.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:20.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.661 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402440.6612608, dc42f6b3-eda5-409e-aac8-68275e50922e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.662 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Resumed (Lifecycle Event)
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.665 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.666 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.670 232432 INFO nova.virt.libvirt.driver [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance spawned successfully.
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.671 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.898 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:20 compute-2 nova_compute[232428]: 2025-11-29 07:47:20.903 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:47:21 compute-2 ceph-mon[77138]: pgmap v1281: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 207 op/s
Nov 29 07:47:21 compute-2 ceph-mon[77138]: pgmap v1282: 305 pgs: 305 active+clean; 419 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.176 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.176 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.177 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.178 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.178 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.179 232432 DEBUG nova.virt.libvirt.driver [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.345 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.346 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402440.6644592, dc42f6b3-eda5-409e-aac8-68275e50922e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.346 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Started (Lifecycle Event)
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.474 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.477 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.501 232432 INFO nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Took 15.14 seconds to spawn the instance on the hypervisor.
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.502 232432 DEBUG nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.511 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:47:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:21.529 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.570 232432 INFO nova.compute.manager [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Took 16.53 seconds to build instance.
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.594 232432 DEBUG oslo_concurrency.lockutils [None req-205535dd-2cd0-4332-b5e3-56d6badc0911 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.600 232432 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Port da69d7f6-de64-485f-96a1-c51ad9274372 updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.601 232432 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Nov 29 07:47:21 compute-2 systemd[1]: Starting libvirt proxy daemon...
Nov 29 07:47:21 compute-2 systemd[1]: Started libvirt proxy daemon.
Nov 29 07:47:21 compute-2 podman[243353]: 2025-11-29 07:47:21.907532719 +0000 UTC m=+0.096881571 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:47:21 compute-2 kernel: tapda69d7f6-de: entered promiscuous mode
Nov 29 07:47:21 compute-2 NetworkManager[48993]: <info>  [1764402441.9132] manager: (tapda69d7f6-de): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 07:47:21 compute-2 systemd-udevd[243330]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:21 compute-2 nova_compute[232428]: 2025-11-29 07:47:21.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:21 compute-2 ovn_controller[134375]: 2025-11-29T07:47:21Z|00064|binding|INFO|Claiming lport da69d7f6-de64-485f-96a1-c51ad9274372 for this additional chassis.
Nov 29 07:47:21 compute-2 ovn_controller[134375]: 2025-11-29T07:47:21Z|00065|binding|INFO|da69d7f6-de64-485f-96a1-c51ad9274372: Claiming fa:16:3e:27:b0:27 10.100.0.8
Nov 29 07:47:21 compute-2 ovn_controller[134375]: 2025-11-29T07:47:21Z|00066|binding|INFO|Claiming lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b for this additional chassis.
Nov 29 07:47:21 compute-2 ovn_controller[134375]: 2025-11-29T07:47:21Z|00067|binding|INFO|d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b: Claiming fa:16:3e:e7:8a:05 19.80.0.53
Nov 29 07:47:21 compute-2 NetworkManager[48993]: <info>  [1764402441.9313] device (tapda69d7f6-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:47:21 compute-2 NetworkManager[48993]: <info>  [1764402441.9324] device (tapda69d7f6-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:47:21 compute-2 systemd-machined[194747]: New machine qemu-7-instance-0000000f.
Nov 29 07:47:21 compute-2 systemd[1]: Started Virtual Machine qemu-7-instance-0000000f.
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:22 compute-2 ovn_controller[134375]: 2025-11-29T07:47:22Z|00068|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 ovn-installed in OVS
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:22.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.219 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.219 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:22.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:22 compute-2 ceph-mon[77138]: pgmap v1283: 305 pgs: 305 active+clean; 427 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.0 MiB/s wr, 168 op/s
Nov 29 07:47:22 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 07:47:22 compute-2 nova_compute[232428]: 2025-11-29 07:47:22.939 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:23 compute-2 nova_compute[232428]: 2025-11-29 07:47:23.490 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402443.4903579, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:23 compute-2 nova_compute[232428]: 2025-11-29 07:47:23.491 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Started (Lifecycle Event)
Nov 29 07:47:23 compute-2 nova_compute[232428]: 2025-11-29 07:47:23.512 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:24.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:24 compute-2 ceph-mon[77138]: pgmap v1284: 305 pgs: 305 active+clean; 427 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 861 KiB/s wr, 103 op/s
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.232 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.232 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.233 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:24.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.619 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402444.6190202, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.620 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Resumed (Lifecycle Event)
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.649 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.656 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.676 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 07:47:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1056257870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.727 232432 INFO nova.virt.libvirt.driver [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Deleting instance files /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_del
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.729 232432 INFO nova.virt.libvirt.driver [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Deletion of /var/lib/nova/instances/b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45_del complete
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.734 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.805 232432 INFO nova.compute.manager [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Took 8.55 seconds to destroy the instance on the hypervisor.
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.806 232432 DEBUG oslo.service.loopingcall [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.806 232432 DEBUG nova.compute.manager [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.807 232432 DEBUG nova.network.neutron [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.827 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.827 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.830 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.830 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:24 compute-2 nova_compute[232428]: 2025-11-29 07:47:24.832 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Error from libvirt while getting description of instance-0000000d: [Error Code 42] Domain not found: no domain with matching uuid 'b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45' (instance-0000000d): libvirt.libvirtError: Domain not found: no domain with matching uuid 'b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45' (instance-0000000d)
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.060 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.062 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4558MB free_disk=20.779457092285156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.062 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.062 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1056257870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.240 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration for instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.293 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating resource usage from migration 61e2127b-055f-4b3a-8c41-a0fe32b26029
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.294 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Starting to track incoming migration 61e2127b-055f-4b3a-8c41-a0fe32b26029 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.543 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.544 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance dc42f6b3-eda5-409e-aac8-68275e50922e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.566 232432 WARNING nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.567 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.567 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.584 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.613 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.614 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.629 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.654 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:47:25 compute-2 nova_compute[232428]: 2025-11-29 07:47:25.749 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3324425674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:26 compute-2 ceph-mon[77138]: pgmap v1285: 305 pgs: 305 active+clean; 388 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 177 op/s
Nov 29 07:47:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/154495875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.191 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.199 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.218 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.264 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.265 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:26.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.332 232432 DEBUG nova.network.neutron [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.362 232432 INFO nova.compute.manager [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Took 1.56 seconds to deallocate network for instance.
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.421 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.421 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.458 232432 DEBUG nova.compute.manager [req-6d769f98-cbd6-49fb-a2ac-f5ca8c1f235d req-f10a481d-38db-4e04-9d66-55238cc11916 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Received event network-vif-deleted-6ee8db63-b095-48c4-b9d5-fc8ed17f9925 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.496 232432 DEBUG oslo_concurrency.processutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00069|binding|INFO|Claiming lport da69d7f6-de64-485f-96a1-c51ad9274372 for this chassis.
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00070|binding|INFO|da69d7f6-de64-485f-96a1-c51ad9274372: Claiming fa:16:3e:27:b0:27 10.100.0.8
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00071|binding|INFO|Claiming lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b for this chassis.
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00072|binding|INFO|d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b: Claiming fa:16:3e:e7:8a:05 19.80.0.53
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00073|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 up in Southbound
Nov 29 07:47:26 compute-2 ovn_controller[134375]: 2025-11-29T07:47:26Z|00074|binding|INFO|Setting lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b up in Southbound
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.698 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.700 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.702 143801 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 bound to our chassis
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.704 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.721 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f180ac13-d6a7-42f1-ba32-e6db98b862de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.722 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64f65ccd-71 in ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.724 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64f65ccd-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.724 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[074962f2-ab5c-40df-95b3-009c0161c5d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.726 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[be43f850-67ea-48dc-bcf7-a37d7595a633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.745 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f06a19-eb75-4860-aace-cddb5801e39e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.767 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f71bd95-4b09-4ee4-8568-8211860c635f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.808 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a86e140e-5196-4811-aa7d-486bda9b72b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 NetworkManager[48993]: <info>  [1764402446.8258] manager: (tap64f65ccd-70): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.825 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eae68cff-1f48-4624-a0bb-0cbd7e8708e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 systemd-udevd[243516]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.867 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[24706161-1821-417b-b554-11feec43c683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.873 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[69ac864d-7e3f-48f6-b632-4f92d57a51d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.892 232432 INFO nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Post operation of migration started
Nov 29 07:47:26 compute-2 NetworkManager[48993]: <info>  [1764402446.9043] device (tap64f65ccd-70): carrier: link connected
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.912 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[291c7adc-5a74-4f9d-a6b5-e8922d9e878f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.934 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fed59984-a829-4d8f-84e7-fc62a91c1d5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541317, 'reachable_time': 38357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243535, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3208943668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.957 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[feb53703-2a84-432e-be56-ab61ac5eccd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:be36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541317, 'tstamp': 541317}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243536, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.972 232432 DEBUG oslo_concurrency.processutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.980 232432 DEBUG nova.compute.provider_tree [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:47:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:26.985 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca1c785-57d3-4c75-8710-f956e942f89a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541317, 'reachable_time': 38357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243539, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:26 compute-2 nova_compute[232428]: 2025-11-29 07:47:26.997 232432 DEBUG nova.scheduler.client.report [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.031 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[68a5a846-a340-4ea3-b8f8-54272cdd0c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.036 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.070 232432 INFO nova.scheduler.client.report [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Deleted allocations for instance b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.101 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8a69e0-3e61-48e2-9a6f-becf7f40c57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.103 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.104 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.104 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.106 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:27 compute-2 NetworkManager[48993]: <info>  [1764402447.1073] manager: (tap64f65ccd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 07:47:27 compute-2 kernel: tap64f65ccd-70: entered promiscuous mode
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.109 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.113 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:27 compute-2 ovn_controller[134375]: 2025-11-29T07:47:27Z|00075|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.142 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.143 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e09e1b63-9095-45ab-9128-7b7455111a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.148 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.150 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'env', 'PROCESS_TAG=haproxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.153 232432 DEBUG oslo_concurrency.lockutils [None req-0494b0de-2763-4924-b120-573399bb7c91 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.171 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.171 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.171 232432 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:47:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3324425674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2991815555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3208943668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4181092242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.265 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.266 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.266 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.304 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.321 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.321 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.322 232432 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.383 232432 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:47:27 compute-2 podman[243573]: 2025-11-29 07:47:27.648065113 +0000 UTC m=+0.076070278 container create 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 07:47:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:27 compute-2 systemd[1]: Started libpod-conmon-1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d.scope.
Nov 29 07:47:27 compute-2 podman[243573]: 2025-11-29 07:47:27.615737969 +0000 UTC m=+0.043743154 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:47:27 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:47:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1856b34fadcb9566a00b308fc6d98b72f9def963387e83c19cf55b896ef25e92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:27 compute-2 podman[243573]: 2025-11-29 07:47:27.795889302 +0000 UTC m=+0.223894467 container init 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:47:27 compute-2 podman[243573]: 2025-11-29 07:47:27.802030664 +0000 UTC m=+0.230035819 container start 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:47:27 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [NOTICE]   (243593) : New worker (243595) forked
Nov 29 07:47:27 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [NOTICE]   (243593) : Loading success.
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.864 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.867 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.884 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c9681d-b996-46a9-9b0a-5d06aba1ac85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.889 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce6bdb9b-81 in ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.891 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce6bdb9b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.892 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5ca3d7-ff65-4391-a23a-ebe4d4a1af5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.893 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc86ab32-f3ad-4e5c-a5a3-0368e17d7cd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.909 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[1688ab1c-dd35-4ab2-a41e-b46a28a267d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292696451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292696451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:27 compute-2 nova_compute[232428]: 2025-11-29 07:47:27.941 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.943 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a93386-99ee-4e11-9cba-188ef1382300]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.988 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2c105a-e7b5-418e-8207-5130c09156f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 NetworkManager[48993]: <info>  [1764402448.0001] manager: (tapce6bdb9b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:27.999 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7da5a68a-9356-4ca0-bd8c-d0ef4d9fd737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 systemd-udevd[243521]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.043 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8c9e37-c214-4a4a-9104-9eb597341b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.047 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a80ba085-37ed-4657-a35d-7677b9685fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 NetworkManager[48993]: <info>  [1764402448.0761] device (tapce6bdb9b-80): carrier: link connected
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.083 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[baff99a6-4227-449e-a426-c8d909059b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.097 232432 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.108 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae434533-a3a1-4409-a3ca-86a6266b4af5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce6bdb9b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:fc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541434, 'reachable_time': 19206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243617, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:28.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.116 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.117 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.118 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.118 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.128 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a33f08-d622-4943-aa78-565c2bbf8431]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:fc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541434, 'tstamp': 541434}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243618, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.158 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4db27dce-0b0c-48c1-8b11-a2fa7bcb4636]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce6bdb9b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:fc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541434, 'reachable_time': 19206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243619, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.207 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3ceab2-e81b-4a34-bb08-bc59d9c1a618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ceph-mon[77138]: pgmap v1286: 305 pgs: 305 active+clean; 378 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 202 op/s
Nov 29 07:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1055571765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3267796039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2570882985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4292696451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4292696451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.220 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.221 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Creating file /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/7ed4b132f68f455aa3bc7ec0e343ca7f.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.221 232432 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/7ed4b132f68f455aa3bc7ec0e343ca7f.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.308 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9520ae-37d3-4d5e-8dfa-eb626659e914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.310 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce6bdb9b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.310 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.311 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce6bdb9b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:28 compute-2 kernel: tapce6bdb9b-80: entered promiscuous mode
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:28 compute-2 NetworkManager[48993]: <info>  [1764402448.3140] manager: (tapce6bdb9b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.317 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce6bdb9b-80, col_values=(('external_ids', {'iface-id': 'ef275590-b3a5-476c-87e4-00a73179899a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.318 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:28 compute-2 ovn_controller[134375]: 2025-11-29T07:47:28Z|00076|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.326 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.327 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[38b46b95-67c2-4aac-8229-e96e843535fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.328 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-ce6bdb9b-87f6-4011-9a56-230cbc6f4771
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID ce6bdb9b-87f6-4011-9a56-230cbc6f4771
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:28.328 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'env', 'PROCESS_TAG=haproxy-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:47:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.336 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.340 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:47:28 compute-2 podman[243653]: 2025-11-29 07:47:28.710393787 +0000 UTC m=+0.058755834 container create a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.725 232432 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/7ed4b132f68f455aa3bc7ec0e343ca7f.tmp" returned: 1 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.726 232432 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/7ed4b132f68f455aa3bc7ec0e343ca7f.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.726 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Creating directory /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.726 232432 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:28 compute-2 systemd[1]: Started libpod-conmon-a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4.scope.
Nov 29 07:47:28 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:47:28 compute-2 podman[243653]: 2025-11-29 07:47:28.679175627 +0000 UTC m=+0.027537694 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:47:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95757e79af54b925c34daec65bc5cd08a9303e6d0c7eba27ccec2b38ff0bcdab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:28 compute-2 podman[243653]: 2025-11-29 07:47:28.802536638 +0000 UTC m=+0.150898705 container init a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:47:28 compute-2 podman[243653]: 2025-11-29 07:47:28.809786415 +0000 UTC m=+0.158148462 container start a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:47:28 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [NOTICE]   (243674) : New worker (243676) forked
Nov 29 07:47:28 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [NOTICE]   (243674) : Loading success.
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.968 232432 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:28 compute-2 nova_compute[232428]: 2025-11-29 07:47:28.976 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.428 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.448 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.449 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.450 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.450 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.451 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:29 compute-2 nova_compute[232428]: 2025-11-29 07:47:29.451 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:47:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2776053813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2339003117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:30.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:30.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.510 232432 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.647 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.677 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.678 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.678 232432 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:30 compute-2 nova_compute[232428]: 2025-11-29 07:47:30.683 232432 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Nov 29 07:47:30 compute-2 virtqemud[231977]: Domain id=7 name='instance-0000000f' uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 is tainted: custom-monitor
Nov 29 07:47:30 compute-2 ceph-mon[77138]: pgmap v1287: 305 pgs: 305 active+clean; 394 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.1 MiB/s wr, 224 op/s
Nov 29 07:47:31 compute-2 nova_compute[232428]: 2025-11-29 07:47:31.694 232432 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 07:47:31 compute-2 nova_compute[232428]: 2025-11-29 07:47:31.898 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402436.8952591, b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:31 compute-2 nova_compute[232428]: 2025-11-29 07:47:31.899 232432 INFO nova.compute.manager [-] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] VM Stopped (Lifecycle Event)
Nov 29 07:47:32 compute-2 nova_compute[232428]: 2025-11-29 07:47:32.050 232432 DEBUG nova.compute.manager [None req-0733b10b-cc03-4ba9-afc6-9df51112fab4 - - - - - -] [instance: b7bb1f43-fe7f-46ff-a8eb-b5e422f17f45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:32.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:32 compute-2 ceph-mon[77138]: pgmap v1288: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 285 op/s
Nov 29 07:47:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:32 compute-2 nova_compute[232428]: 2025-11-29 07:47:32.705 232432 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 29 07:47:32 compute-2 nova_compute[232428]: 2025-11-29 07:47:32.713 232432 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:32 compute-2 nova_compute[232428]: 2025-11-29 07:47:32.734 232432 DEBUG nova.objects.instance [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 07:47:32 compute-2 nova_compute[232428]: 2025-11-29 07:47:32.943 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4147426160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:34.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:34 compute-2 nova_compute[232428]: 2025-11-29 07:47:34.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:36 compute-2 sudo[243689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:36 compute-2 sudo[243689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:36 compute-2 sudo[243689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:36 compute-2 sudo[243714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:36 compute-2 sudo[243714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:36 compute-2 sudo[243714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:36 compute-2 ceph-mon[77138]: pgmap v1289: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 263 op/s
Nov 29 07:47:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4229400981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:37 compute-2 podman[243740]: 2025-11-29 07:47:37.733772481 +0000 UTC m=+0.125479859 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:47:38 compute-2 nova_compute[232428]: 2025-11-29 07:47:38.008 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:38 compute-2 ceph-mon[77138]: pgmap v1290: 305 pgs: 305 active+clean; 375 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 340 op/s
Nov 29 07:47:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2469695789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:38.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:39 compute-2 nova_compute[232428]: 2025-11-29 07:47:39.036 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 07:47:39 compute-2 nova_compute[232428]: 2025-11-29 07:47:39.359 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:39 compute-2 ceph-mon[77138]: pgmap v1291: 305 pgs: 305 active+clean; 381 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 242 op/s
Nov 29 07:47:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:40 compute-2 ceph-mon[77138]: pgmap v1292: 305 pgs: 305 active+clean; 388 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 206 op/s
Nov 29 07:47:41 compute-2 podman[243762]: 2025-11-29 07:47:41.709110028 +0000 UTC m=+0.093388802 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:47:41 compute-2 ceph-mon[77138]: pgmap v1293: 305 pgs: 305 active+clean; 329 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 182 op/s
Nov 29 07:47:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:42.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:43 compute-2 nova_compute[232428]: 2025-11-29 07:47:43.011 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:44.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:44 compute-2 nova_compute[232428]: 2025-11-29 07:47:44.366 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:44 compute-2 ceph-mon[77138]: pgmap v1294: 305 pgs: 305 active+clean; 329 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 118 op/s
Nov 29 07:47:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2181592901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:45 compute-2 ceph-mon[77138]: pgmap v1295: 305 pgs: 305 active+clean; 336 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Nov 29 07:47:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/346466577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:48 compute-2 nova_compute[232428]: 2025-11-29 07:47:48.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:48 compute-2 ceph-mon[77138]: pgmap v1296: 305 pgs: 305 active+clean; 345 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 131 op/s
Nov 29 07:47:49 compute-2 nova_compute[232428]: 2025-11-29 07:47:49.369 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3164445492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:49 compute-2 ceph-mon[77138]: pgmap v1297: 305 pgs: 305 active+clean; 288 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 163 op/s
Nov 29 07:47:50 compute-2 nova_compute[232428]: 2025-11-29 07:47:50.095 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 07:47:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:47:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:47:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:50.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:51.342 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:51 compute-2 nova_compute[232428]: 2025-11-29 07:47:51.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:51.343 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:47:52 compute-2 ceph-mon[77138]: pgmap v1298: 305 pgs: 305 active+clean; 223 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 216 op/s
Nov 29 07:47:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:52.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:52.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:52 compute-2 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 07:47:52 compute-2 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000010.scope: Consumed 15.786s CPU time.
Nov 29 07:47:52 compute-2 systemd-machined[194747]: Machine qemu-6-instance-00000010 terminated.
Nov 29 07:47:52 compute-2 podman[243787]: 2025-11-29 07:47:52.569579585 +0000 UTC m=+0.115938888 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.111 232432 INFO nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance shutdown successfully after 24 seconds.
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.116 232432 INFO nova.virt.libvirt.driver [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance destroyed successfully.
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.121 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.121 232432 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.283 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.283 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:53 compute-2 nova_compute[232428]: 2025-11-29 07:47:53.284 232432 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:54.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:47:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:54.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:47:54 compute-2 nova_compute[232428]: 2025-11-29 07:47:54.416 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:54 compute-2 ceph-mon[77138]: pgmap v1299: 305 pgs: 305 active+clean; 223 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Nov 29 07:47:55 compute-2 ovn_controller[134375]: 2025-11-29T07:47:55Z|00077|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 07:47:55 compute-2 ovn_controller[134375]: 2025-11-29T07:47:55Z|00078|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 07:47:55 compute-2 nova_compute[232428]: 2025-11-29 07:47:55.144 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:47:55.346 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:55 compute-2 ceph-mon[77138]: pgmap v1300: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 200 op/s
Nov 29 07:47:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 07:47:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:56 compute-2 sudo[243819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:56 compute-2 sudo[243819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:56 compute-2 sudo[243819]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:56 compute-2 sudo[243844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:56 compute-2 sudo[243844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:56 compute-2 sudo[243844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:56 compute-2 ceph-mon[77138]: osdmap e147: 3 total, 3 up, 3 in
Nov 29 07:47:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1628963730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3147788635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2723236095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:57 compute-2 ceph-mon[77138]: pgmap v1302: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 182 KiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 29 07:47:58 compute-2 nova_compute[232428]: 2025-11-29 07:47:58.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:47:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:47:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:47:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:47:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:47:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:58 compute-2 sudo[243870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:58 compute-2 sudo[243870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:58 compute-2 sudo[243870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:58 compute-2 sudo[243895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:58 compute-2 sudo[243895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:58 compute-2 sudo[243895]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:58 compute-2 sudo[243920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:58 compute-2 sudo[243920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:58 compute-2 sudo[243920]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:58 compute-2 sudo[243945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:47:58 compute-2 sudo[243945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:59 compute-2 sudo[243945]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:59 compute-2 nova_compute[232428]: 2025-11-29 07:47:59.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:59 compute-2 ceph-mon[77138]: pgmap v1303: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 264 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 07:47:59 compute-2 nova_compute[232428]: 2025-11-29 07:47:59.874 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "dc42f6b3-eda5-409e-aac8-68275e50922e" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-2 nova_compute[232428]: 2025-11-29 07:47:59.875 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-2 nova_compute[232428]: 2025-11-29 07:47:59.875 232432 DEBUG nova.compute.manager [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Going to confirm migration 2 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 07:48:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779743987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.210 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.210 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.211 232432 DEBUG nova.network.neutron [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.211 232432 DEBUG nova.objects.instance [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'info_cache' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:00.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.602 232432 DEBUG nova.network.neutron [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3779743987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.921 232432 DEBUG nova.network.neutron [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.943 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:00 compute-2 nova_compute[232428]: 2025-11-29 07:48:00.944 232432 DEBUG nova.objects.instance [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:01 compute-2 nova_compute[232428]: 2025-11-29 07:48:01.095 232432 DEBUG nova.storage.rbd_utils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] removing snapshot(nova-resize) on rbd image(dc42f6b3-eda5-409e-aac8-68275e50922e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:48:01 compute-2 ceph-mon[77138]: pgmap v1304: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 96 op/s
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:48:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 07:48:01 compute-2 nova_compute[232428]: 2025-11-29 07:48:01.849 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:01 compute-2 nova_compute[232428]: 2025-11-29 07:48:01.850 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:01 compute-2 nova_compute[232428]: 2025-11-29 07:48:01.942 232432 DEBUG oslo_concurrency.processutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3703988908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.387 232432 DEBUG oslo_concurrency.processutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.396 232432 DEBUG nova.compute.provider_tree [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.416 232432 DEBUG nova.scheduler.client.report [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.465 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.558 232432 INFO nova.scheduler.client.report [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocation for migration e3caf3fb-c703-4009-b623-b79ea4aabd7b
Nov 29 07:48:02 compute-2 nova_compute[232428]: 2025-11-29 07:48:02.616 232432 DEBUG oslo_concurrency.lockutils [None req-3c723f58-04ad-495c-b3be-488c115520ad e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:03 compute-2 nova_compute[232428]: 2025-11-29 07:48:03.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:03.294 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:03.295 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:03.296 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:04.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:04 compute-2 nova_compute[232428]: 2025-11-29 07:48:04.467 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:04 compute-2 ceph-mon[77138]: osdmap e148: 3 total, 3 up, 3 in
Nov 29 07:48:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2397694028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3703988908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3648557177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.490 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.491 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.576 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.651 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.651 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.659 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.659 232432 INFO nova.compute.claims [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:48:05 compute-2 nova_compute[232428]: 2025-11-29 07:48:05.852 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:06 compute-2 ceph-mon[77138]: pgmap v1306: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.4 KiB/s wr, 112 op/s
Nov 29 07:48:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:06.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2099552561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.317 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.324 232432 DEBUG nova.compute.provider_tree [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.369 232432 DEBUG nova.scheduler.client.report [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.409 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.410 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.471 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.472 232432 DEBUG nova.network.neutron [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.494 232432 INFO nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.513 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.605 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.607 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.607 232432 INFO nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Creating image(s)
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.639 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.672 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.701 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.707 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.774 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.776 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.777 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.777 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.811 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.817 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.935 232432 DEBUG nova.network.neutron [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:48:06 compute-2 nova_compute[232428]: 2025-11-29 07:48:06.937 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:48:07 compute-2 ceph-mon[77138]: pgmap v1307: 305 pgs: 305 active+clean; 249 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 123 KiB/s wr, 108 op/s
Nov 29 07:48:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2099552561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/205804548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:07 compute-2 nova_compute[232428]: 2025-11-29 07:48:07.161 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:07 compute-2 nova_compute[232428]: 2025-11-29 07:48:07.250 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] resizing rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:48:07 compute-2 nova_compute[232428]: 2025-11-29 07:48:07.614 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402472.6128085, dc42f6b3-eda5-409e-aac8-68275e50922e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:07 compute-2 nova_compute[232428]: 2025-11-29 07:48:07.614 232432 INFO nova.compute.manager [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Stopped (Lifecycle Event)
Nov 29 07:48:07 compute-2 nova_compute[232428]: 2025-11-29 07:48:07.859 232432 DEBUG nova.compute.manager [None req-e86291f1-dbf7-48a4-ba83-72d3bfa15905 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:08 compute-2 nova_compute[232428]: 2025-11-29 07:48:08.100 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:08.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1184317448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:08 compute-2 ceph-mon[77138]: pgmap v1308: 305 pgs: 305 active+clean; 253 MiB data, 452 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 347 KiB/s wr, 131 op/s
Nov 29 07:48:08 compute-2 podman[244233]: 2025-11-29 07:48:08.675567564 +0000 UTC m=+0.069282325 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 07:48:08 compute-2 nova_compute[232428]: 2025-11-29 07:48:08.938 232432 DEBUG nova.objects.instance [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.001 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.002 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Ensure instance console log exists: /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.002 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.003 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.003 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.004 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.009 232432 WARNING nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.013 232432 DEBUG nova.virt.libvirt.host [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.013 232432 DEBUG nova.virt.libvirt.host [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.016 232432 DEBUG nova.virt.libvirt.host [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.016 232432 DEBUG nova.virt.libvirt.host [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.018 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.018 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.019 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.019 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.019 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.019 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.020 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.020 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.020 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.020 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.021 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.021 232432 DEBUG nova.virt.hardware [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.024 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 07:48:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3228809288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.470 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.473 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.642 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:09 compute-2 nova_compute[232428]: 2025-11-29 07:48:09.647 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1505377741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:10.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.179 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.182 232432 DEBUG nova.objects.instance [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.208 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <uuid>3efe6bb4-36be-4a30-832d-8da05e5baa50</uuid>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <name>instance-00000014</name>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-65742869</nova:name>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:48:09</nova:creationTime>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <system>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="serial">3efe6bb4-36be-4a30-832d-8da05e5baa50</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="uuid">3efe6bb4-36be-4a30-832d-8da05e5baa50</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </system>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <os>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </os>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <features>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </features>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/3efe6bb4-36be-4a30-832d-8da05e5baa50_disk">
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config">
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:48:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/console.log" append="off"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <video>
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </video>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:48:10 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:48:10 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:48:10 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:48:10 compute-2 nova_compute[232428]: </domain>
Nov 29 07:48:10 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:48:10 compute-2 ceph-mon[77138]: osdmap e149: 3 total, 3 up, 3 in
Nov 29 07:48:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3228809288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:10 compute-2 ceph-mon[77138]: pgmap v1310: 305 pgs: 305 active+clean; 291 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Nov 29 07:48:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1505377741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.660 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.660 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.661 232432 INFO nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Using config drive
Nov 29 07:48:10 compute-2 nova_compute[232428]: 2025-11-29 07:48:10.720 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:11 compute-2 nova_compute[232428]: 2025-11-29 07:48:11.654 232432 INFO nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Creating config drive at /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config
Nov 29 07:48:11 compute-2 nova_compute[232428]: 2025-11-29 07:48:11.663 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsq6xvd73 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:11 compute-2 ceph-mon[77138]: pgmap v1311: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.4 MiB/s wr, 210 op/s
Nov 29 07:48:11 compute-2 nova_compute[232428]: 2025-11-29 07:48:11.810 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsq6xvd73" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:11 compute-2 nova_compute[232428]: 2025-11-29 07:48:11.847 232432 DEBUG nova.storage.rbd_utils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:11 compute-2 nova_compute[232428]: 2025-11-29 07:48:11.853 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:12.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.302 232432 DEBUG oslo_concurrency.processutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config 3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.303 232432 INFO nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Deleting local config drive /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/disk.config because it was imported into RBD.
Nov 29 07:48:12 compute-2 systemd-machined[194747]: New machine qemu-8-instance-00000014.
Nov 29 07:48:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:12.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:12 compute-2 systemd[1]: Started Virtual Machine qemu-8-instance-00000014.
Nov 29 07:48:12 compute-2 podman[244399]: 2025-11-29 07:48:12.501667599 +0000 UTC m=+0.104933014 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:48:12 compute-2 sudo[244424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:12 compute-2 sudo[244424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:12 compute-2 sudo[244424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:12 compute-2 sudo[244453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:48:12 compute-2 sudo[244453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:12 compute-2 sudo[244453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.876 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402492.8754597, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.877 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Resumed (Lifecycle Event)
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.880 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.880 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.882 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating tmpfile /var/lib/nova/instances/tmp28xflw8r to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.883 232432 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.893 232432 INFO nova.virt.libvirt.driver [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance spawned successfully.
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.894 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.898 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.901 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.943 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.943 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.944 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.944 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.944 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:12 compute-2 nova_compute[232428]: 2025-11-29 07:48:12.945 232432 DEBUG nova.virt.libvirt.driver [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.003 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.003 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402492.8758261, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.003 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Started (Lifecycle Event)
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.102 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.241 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.245 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:48:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:48:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.352 232432 INFO nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Took 6.75 seconds to spawn the instance on the hypervisor.
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.353 232432 DEBUG nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.363 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:48:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.444 232432 INFO nova.compute.manager [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Took 7.81 seconds to build instance.
Nov 29 07:48:13 compute-2 nova_compute[232428]: 2025-11-29 07:48:13.590 232432 DEBUG oslo_concurrency.lockutils [None req-50cf7789-1702-4e1a-9204-0246ad556f96 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:14.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:14.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:14 compute-2 sshd-session[244521]: Invalid user sol from 45.148.10.240 port 53012
Nov 29 07:48:14 compute-2 nova_compute[232428]: 2025-11-29 07:48:14.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:14 compute-2 sshd-session[244521]: Connection closed by invalid user sol 45.148.10.240 port 53012 [preauth]
Nov 29 07:48:14 compute-2 ceph-mon[77138]: pgmap v1312: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Nov 29 07:48:15 compute-2 nova_compute[232428]: 2025-11-29 07:48:15.806 232432 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 07:48:15 compute-2 nova_compute[232428]: 2025-11-29 07:48:15.833 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:15 compute-2 nova_compute[232428]: 2025-11-29 07:48:15.834 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:15 compute-2 nova_compute[232428]: 2025-11-29 07:48:15.834 232432 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:15 compute-2 ceph-mon[77138]: pgmap v1313: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.2 MiB/s wr, 373 op/s
Nov 29 07:48:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:16.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:16 compute-2 sudo[244524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:16 compute-2 sudo[244524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:16 compute-2 sudo[244524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:48:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:16.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:48:16 compute-2 sudo[244549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:16 compute-2 sudo[244549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:16 compute-2 sudo[244549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.826 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.827 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.827 232432 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.829 232432 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.880 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.883 232432 DEBUG os_brick.utils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:48:17 compute-2 nova_compute[232428]: 2025-11-29 07:48:17.884 232432 INFO oslo.privsep.daemon [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpi08uqwsz/privsep.sock']
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.157 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:18.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.371 232432 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:18.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:18 compute-2 ceph-mon[77138]: pgmap v1314: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 4.0 MiB/s wr, 358 op/s
Nov 29 07:48:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1083834170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.618 232432 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.645 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.659 232432 INFO oslo.privsep.daemon [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.477 244579 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.482 244579 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.485 244579 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.485 244579 INFO oslo.privsep.daemon [-] privsep daemon running as pid 244579
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.662 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[8c66eb55-c421-4133-965e-059115e7545f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.753 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.769 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.769 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[84558672-a529-4549-a5ff-670c37473a96]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.770 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.782 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.782 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[05c656ff-4c41-4a36-939a-da8839a917c6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.785 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.798 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.798 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[25fd69db-4f82-4598-9380-1a9f2ba0f4a6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.801 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c64f1c-8022-470a-8e70-f2ffb29864dd]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.801 232432 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.834 232432 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.842 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.845 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.845 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.846 232432 DEBUG os_brick.utils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] <== get_connector_properties: return (962ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.858 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.859 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Creating file /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/ba30e2396e3644cc8fee22f6003db0ee.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 07:48:18 compute-2 nova_compute[232428]: 2025-11-29 07:48:18.860 232432 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/ba30e2396e3644cc8fee22f6003db0ee.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.292 232432 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/ba30e2396e3644cc8fee22f6003db0ee.tmp" returned: 1 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.293 232432 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/ba30e2396e3644cc8fee22f6003db0ee.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.293 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Creating directory /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.293 232432 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.506 232432 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.510 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 07:48:19 compute-2 nova_compute[232428]: 2025-11-29 07:48:19.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1635298860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2507573465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.123 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='e8b3488f-f1e2-453e-93e6-a153b3c09a17'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.125 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating instance directory: /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.125 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Ensure instance console log exists: /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.125 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.126 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.127 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.129 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.137 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.138 232432 DEBUG nova.virt.libvirt.vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:06Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.139 232432 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.142 232432 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.144 232432 DEBUG os_vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.144 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.145 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.145 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.150 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.151 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83ec9820-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.152 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83ec9820-37, col_values=(('external_ids', {'iface-id': '83ec9820-3713-4570-ab8a-a88fba3f29c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:30:0e', 'vm-uuid': '738ca4a4-91f6-4476-a500-4d85c8eb00ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:48:20 compute-2 NetworkManager[48993]: <info>  [1764402500.1559] manager: (tap83ec9820-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.165 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.167 232432 INFO os_vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.172 232432 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Nov 29 07:48:20 compute-2 nova_compute[232428]: 2025-11-29 07:48:20.173 232432 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='e8b3488f-f1e2-453e-93e6-a153b3c09a17'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Nov 29 07:48:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:20.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:20.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:20 compute-2 ceph-mon[77138]: pgmap v1315: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 292 op/s
Nov 29 07:48:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1051929412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2507573465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:22 compute-2 nova_compute[232428]: 2025-11-29 07:48:22.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:22 compute-2 nova_compute[232428]: 2025-11-29 07:48:22.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:22 compute-2 ceph-mon[77138]: pgmap v1316: 305 pgs: 305 active+clean; 389 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 302 op/s
Nov 29 07:48:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:22.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3341268384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/634316336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:23 compute-2 nova_compute[232428]: 2025-11-29 07:48:23.157 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:23 compute-2 nova_compute[232428]: 2025-11-29 07:48:23.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:23 compute-2 podman[244596]: 2025-11-29 07:48:23.753173656 +0000 UTC m=+0.136785114 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:48:24 compute-2 ceph-mon[77138]: pgmap v1317: 305 pgs: 305 active+clean; 389 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 29 07:48:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/249590342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:48:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:24.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.476 232432 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.662 232432 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='e8b3488f-f1e2-453e-93e6-a153b3c09a17'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Nov 29 07:48:24 compute-2 kernel: tap83ec9820-37: entered promiscuous mode
Nov 29 07:48:24 compute-2 NetworkManager[48993]: <info>  [1764402504.9109] manager: (tap83ec9820-37): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 07:48:24 compute-2 ovn_controller[134375]: 2025-11-29T07:48:24Z|00079|binding|INFO|Claiming lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 for this additional chassis.
Nov 29 07:48:24 compute-2 ovn_controller[134375]: 2025-11-29T07:48:24Z|00080|binding|INFO|83ec9820-3713-4570-ab8a-a88fba3f29c9: Claiming fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.913 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:24 compute-2 ovn_controller[134375]: 2025-11-29T07:48:24Z|00081|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 ovn-installed in OVS
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.930 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:24 compute-2 nova_compute[232428]: 2025-11-29 07:48:24.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:24 compute-2 systemd-machined[194747]: New machine qemu-9-instance-00000012.
Nov 29 07:48:24 compute-2 systemd-udevd[244635]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:48:24 compute-2 NetworkManager[48993]: <info>  [1764402504.9697] device (tap83ec9820-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:48:24 compute-2 systemd[1]: Started Virtual Machine qemu-9-instance-00000012.
Nov 29 07:48:24 compute-2 NetworkManager[48993]: <info>  [1764402504.9716] device (tap83ec9820-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.326 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.326 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.327 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:48:25 compute-2 nova_compute[232428]: 2025-11-29 07:48:25.327 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:48:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:48:26 compute-2 nova_compute[232428]: 2025-11-29 07:48:26.388 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.280 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.522 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.523 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.523 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.523 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.523 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.523 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.721 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.721 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.722 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.722 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:48:27 compute-2 nova_compute[232428]: 2025-11-29 07:48:27.722 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2135189243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1011885570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1011885570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.159 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3981386382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.230 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.295 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402508.2950647, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.295 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Started (Lifecycle Event)
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.414 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.428 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.429 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.434 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.434 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.438 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.438 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:28.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.674 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.675 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4440MB free_disk=20.844467163085938GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.675 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.675 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.761 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration for instance 738ca4a4-91f6-4476-a500-4d85c8eb00ef refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.784 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating resource usage from migration ae907aa6-ecdf-428a-a07c-0e5499b61879
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.785 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Starting to track incoming migration ae907aa6-ecdf-428a-a07c-0e5499b61879 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.799 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating resource usage from migration b40c1dc1-dccd-49f3-9f64-b6116e439e87
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.828 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.848 232432 WARNING nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 738ca4a4-91f6-4476-a500-4d85c8eb00ef has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.849 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration b40c1dc1-dccd-49f3-9f64-b6116e439e87 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.849 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.849 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:48:28 compute-2 nova_compute[232428]: 2025-11-29 07:48:28.922 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.236 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402509.2360444, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.237 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Resumed (Lifecycle Event)
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.347 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.355 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:48:29 compute-2 ceph-mon[77138]: pgmap v1318: 305 pgs: 305 active+clean; 467 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 339 op/s
Nov 29 07:48:29 compute-2 ceph-mon[77138]: pgmap v1319: 305 pgs: 305 active+clean; 492 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 7.7 MiB/s wr, 192 op/s
Nov 29 07:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3671205973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1011885570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1011885570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3981386382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/608384582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3034862695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.464 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.481 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.488 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.559 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.603 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.706 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:48:29 compute-2 nova_compute[232428]: 2025-11-29 07:48:29.706 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:30 compute-2 nova_compute[232428]: 2025-11-29 07:48:30.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:30 compute-2 nova_compute[232428]: 2025-11-29 07:48:30.385 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1539478808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3938264456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3034862695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:30 compute-2 ceph-mon[77138]: pgmap v1320: 305 pgs: 305 active+clean; 493 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.8 MiB/s wr, 231 op/s
Nov 29 07:48:30 compute-2 ovn_controller[134375]: 2025-11-29T07:48:30Z|00082|binding|INFO|Claiming lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 for this chassis.
Nov 29 07:48:30 compute-2 ovn_controller[134375]: 2025-11-29T07:48:30Z|00083|binding|INFO|83ec9820-3713-4570-ab8a-a88fba3f29c9: Claiming fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 07:48:30 compute-2 ovn_controller[134375]: 2025-11-29T07:48:30Z|00084|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 up in Southbound
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.785 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.786 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 bound to our chassis
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.787 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.803 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1db0d260-de93-4233-8602-1bc397b25eaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.842 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a082cc55-e83b-4544-b8d5-a9aedde8741c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.846 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3ab2a8-b335-4055-ac9b-39a4dcc105a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.879 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[333c73e9-be30-40c4-b9b2-c1a30e2e4ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.906 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[30c5724e-e7de-4609-9ccf-edbd2bd49588]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541317, 'reachable_time': 38357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244739, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.926 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df53ba18-e022-4c7a-a4e1-5e473ddfd877]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64f65ccd-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541332, 'tstamp': 541332}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244740, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64f65ccd-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541336, 'tstamp': 541336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244740, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.928 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:30 compute-2 nova_compute[232428]: 2025-11-29 07:48:30.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:30 compute-2 nova_compute[232428]: 2025-11-29 07:48:30.930 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.931 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.931 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.931 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:30.932 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:48:31 compute-2 nova_compute[232428]: 2025-11-29 07:48:31.041 232432 INFO nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Post operation of migration started
Nov 29 07:48:31 compute-2 nova_compute[232428]: 2025-11-29 07:48:31.582 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:31 compute-2 nova_compute[232428]: 2025-11-29 07:48:31.583 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:31 compute-2 nova_compute[232428]: 2025-11-29 07:48:31.583 232432 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:31 compute-2 ceph-mon[77138]: pgmap v1321: 305 pgs: 305 active+clean; 504 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 8.9 MiB/s wr, 280 op/s
Nov 29 07:48:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:32 compute-2 nova_compute[232428]: 2025-11-29 07:48:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.516 232432 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.544 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.567 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.568 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.569 232432 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:33 compute-2 nova_compute[232428]: 2025-11-29 07:48:33.576 232432 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Nov 29 07:48:33 compute-2 virtqemud[231977]: Domain id=9 name='instance-00000012' uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef is tainted: custom-monitor
Nov 29 07:48:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:34.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:34 compute-2 ceph-mon[77138]: pgmap v1322: 305 pgs: 305 active+clean; 504 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.5 MiB/s wr, 226 op/s
Nov 29 07:48:34 compute-2 nova_compute[232428]: 2025-11-29 07:48:34.585 232432 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.201 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.537 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "5ad16860-47e5-45db-91e0-9e1c943dba38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.538 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.589 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.593 232432 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.599 232432 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.630 232432 DEBUG nova.objects.instance [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.720 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.721 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.727 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.727 232432 INFO nova.compute.claims [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:48:35 compute-2 nova_compute[232428]: 2025-11-29 07:48:35.870 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:36.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2900025033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.326 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.334 232432 DEBUG nova.compute.provider_tree [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.359 232432 DEBUG nova.scheduler.client.report [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.392 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.424 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.424 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:36.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.501 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] No node specified, defaulting to compute-2.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.534 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.535 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:48:36 compute-2 ceph-mon[77138]: pgmap v1323: 305 pgs: 305 active+clean; 529 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.5 MiB/s wr, 316 op/s
Nov 29 07:48:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3710328202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2900025033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:36 compute-2 sudo[244766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:36 compute-2 sudo[244766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:36 compute-2 sudo[244766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.644 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.646 232432 DEBUG nova.network.neutron [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.682 232432 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:48:36 compute-2 sudo[244791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:36 compute-2 sudo[244791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.710 232432 DEBUG nova.compute.manager [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.710 232432 DEBUG oslo_concurrency.lockutils [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.711 232432 DEBUG oslo_concurrency.lockutils [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.711 232432 DEBUG oslo_concurrency.lockutils [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.711 232432 DEBUG nova.compute.manager [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.711 232432 WARNING nova.compute.manager [req-1ddec554-7a81-4ece-83eb-f0bbe300c706 req-b9a2ab8e-6e57-44d1-989b-824222a1d299 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state None.
Nov 29 07:48:36 compute-2 sudo[244791]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.715 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.845 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.847 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.848 232432 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Creating image(s)
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.888 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.931 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.970 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:36 compute-2 nova_compute[232428]: 2025-11-29 07:48:36.977 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.072 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.074 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.075 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.075 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.117 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.122 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ad16860-47e5-45db-91e0-9e1c943dba38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.214 232432 DEBUG nova.network.neutron [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:48:37 compute-2 nova_compute[232428]: 2025-11-29 07:48:37.215 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.646 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Check if temp file /var/lib/nova/instances/tmp6kfl16am exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.647 232432 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.809 232432 DEBUG nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.809 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.809 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 DEBUG nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 WARNING nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 DEBUG nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.810 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.811 232432 DEBUG oslo_concurrency.lockutils [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.811 232432 DEBUG nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:38 compute-2 nova_compute[232428]: 2025-11-29 07:48:38.811 232432 WARNING nova.compute.manager [req-e39ae268-cc0d-4ed0-a53e-9516c47d72ce req-8ce8061c-58b4-4702-9486-6bd6417392d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:39 compute-2 podman[244912]: 2025-11-29 07:48:39.685560599 +0000 UTC m=+0.064342460 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:48:40 compute-2 nova_compute[232428]: 2025-11-29 07:48:40.204 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:40.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4154217637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:40 compute-2 nova_compute[232428]: 2025-11-29 07:48:40.633 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 07:48:41 compute-2 nova_compute[232428]: 2025-11-29 07:48:41.770 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ad16860-47e5-45db-91e0-9e1c943dba38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:41 compute-2 nova_compute[232428]: 2025-11-29 07:48:41.839 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] resizing rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:48:41 compute-2 nova_compute[232428]: 2025-11-29 07:48:41.942 232432 DEBUG nova.objects.instance [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ad16860-47e5-45db-91e0-9e1c943dba38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.035 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.036 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Ensure instance console log exists: /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.036 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.037 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.037 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.041 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.046 232432 WARNING nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.051 232432 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.052 232432 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.055 232432 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.055 232432 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.056 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.057 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.057 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.057 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.057 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.058 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.058 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.058 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.058 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.058 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.059 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.059 232432 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.061 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:42.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:42 compute-2 ceph-mon[77138]: pgmap v1324: 305 pgs: 305 active+clean; 532 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 232 op/s
Nov 29 07:48:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1437457782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:42 compute-2 ceph-mon[77138]: pgmap v1325: 305 pgs: 305 active+clean; 532 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Nov 29 07:48:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:42.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2068126667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:42 compute-2 podman[245026]: 2025-11-29 07:48:42.666914056 +0000 UTC m=+0.060532851 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.862 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.902 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:42 compute-2 nova_compute[232428]: 2025-11-29 07:48:42.912 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050765512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.169 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1059299206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.871 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.959s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.873 232432 DEBUG nova.objects.instance [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ad16860-47e5-45db-91e0-9e1c943dba38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.886 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <uuid>5ad16860-47e5-45db-91e0-9e1c943dba38</uuid>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <name>instance-00000018</name>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersOnMultiNodesTest-server-966081188-2</nova:name>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:48:42</nova:creationTime>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:user uuid="386584ea971049e3b0c06b8237710848">tempest-ServersOnMultiNodesTest-893669333-project-member</nova:user>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <nova:project uuid="c80f8d4661784e8faaf78d28df3fb677">tempest-ServersOnMultiNodesTest-893669333</nova:project>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <system>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="serial">5ad16860-47e5-45db-91e0-9e1c943dba38</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="uuid">5ad16860-47e5-45db-91e0-9e1c943dba38</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </system>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <os>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </os>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <features>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </features>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5ad16860-47e5-45db-91e0-9e1c943dba38_disk">
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </source>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config">
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </source>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:48:43 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/console.log" append="off"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <video>
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </video>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:48:43 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:48:43 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:48:43 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:48:43 compute-2 nova_compute[232428]: </domain>
Nov 29 07:48:43 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.967 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.967 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:48:43 compute-2 nova_compute[232428]: 2025-11-29 07:48:43.968 232432 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Using config drive
Nov 29 07:48:44 compute-2 nova_compute[232428]: 2025-11-29 07:48:44.001 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:44.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:44 compute-2 ceph-mon[77138]: pgmap v1326: 305 pgs: 305 active+clean; 646 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.8 MiB/s wr, 225 op/s
Nov 29 07:48:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2068126667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2050765512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.207 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.638 232432 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Creating config drive at /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.644 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8dlmpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.781 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8dlmpq" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.822 232432 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.827 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config 5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.860 232432 DEBUG nova.compute.manager [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.861 232432 DEBUG oslo_concurrency.lockutils [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.861 232432 DEBUG oslo_concurrency.lockutils [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.862 232432 DEBUG oslo_concurrency.lockutils [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.862 232432 DEBUG nova.compute.manager [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:45 compute-2 nova_compute[232428]: 2025-11-29 07:48:45.862 232432 DEBUG nova.compute.manager [req-a6d5db94-c61e-47c7-84eb-46cf8f1014cd req-7328fe21-26ec-4215-9dff-bd50db23c11a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:48:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1059299206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:45 compute-2 ceph-mon[77138]: pgmap v1327: 305 pgs: 305 active+clean; 646 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 171 op/s
Nov 29 07:48:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/379740943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.006 232432 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config 5ad16860-47e5-45db-91e0-9e1c943dba38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.007 232432 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Deleting local config drive /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38/disk.config because it was imported into RBD.
Nov 29 07:48:46 compute-2 systemd-machined[194747]: New machine qemu-10-instance-00000018.
Nov 29 07:48:46 compute-2 systemd[1]: Started Virtual Machine qemu-10-instance-00000018.
Nov 29 07:48:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:46.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:46.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.622 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402526.6212633, 5ad16860-47e5-45db-91e0-9e1c943dba38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.624 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] VM Resumed (Lifecycle Event)
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.626 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.626 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.630 232432 INFO nova.virt.libvirt.driver [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance spawned successfully.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.630 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.676 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.677 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.679 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.679 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.680 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.680 232432 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.685 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.690 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.736 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.739 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402526.6233635, 5ad16860-47e5-45db-91e0-9e1c943dba38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.739 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] VM Started (Lifecycle Event)
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.775 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.779 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.786 232432 INFO nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Took 9.94 seconds to spawn the instance on the hypervisor.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.786 232432 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.814 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.825 232432 INFO nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 7.01 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.826 232432 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.856 232432 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(9286f335-ba58-4309-8f17-66a3cac541bd),old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='0293ab39-88ee-4fc9-98a5-4614a458bb2c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.859 232432 INFO nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Took 11.18 seconds to build instance.
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.861 232432 DEBUG nova.objects.instance [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lazy-loading 'migration_context' on Instance uuid 738ca4a4-91f6-4476-a500-4d85c8eb00ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.862 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.863 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.864 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.878 232432 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.885 232432 DEBUG nova.virt.libvirt.migration [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Find same serial number: pos=1, serial=1c382493-5718-4c6c-93b8-8f2562c0a68a _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.887 232432 DEBUG nova.virt.libvirt.vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:35Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.888 232432 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.888 232432 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.889 232432 DEBUG nova.virt.libvirt.migration [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 07:48:46 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:eb:30:0e"/>
Nov 29 07:48:46 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 07:48:46 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:48:46 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 07:48:46 compute-2 nova_compute[232428]:   <target dev="tap83ec9820-37"/>
Nov 29 07:48:46 compute-2 nova_compute[232428]: </interface>
Nov 29 07:48:46 compute-2 nova_compute[232428]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 07:48:46 compute-2 nova_compute[232428]: 2025-11-29 07:48:46.890 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 07:48:47 compute-2 nova_compute[232428]: 2025-11-29 07:48:47.368 232432 DEBUG nova.virt.libvirt.migration [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 07:48:47 compute-2 nova_compute[232428]: 2025-11-29 07:48:47.369 232432 INFO nova.virt.libvirt.migration [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 07:48:47 compute-2 ceph-mon[77138]: pgmap v1328: 305 pgs: 305 active+clean; 653 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.0 MiB/s wr, 214 op/s
Nov 29 07:48:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2523292697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.170 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:48.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:48 compute-2 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 07:48:48 compute-2 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Consumed 14.347s CPU time.
Nov 29 07:48:48 compute-2 systemd-machined[194747]: Machine qemu-8-instance-00000014 terminated.
Nov 29 07:48:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.699 232432 INFO nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance shutdown successfully after 29 seconds.
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.705 232432 INFO nova.virt.libvirt.driver [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance destroyed successfully.
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.708 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.708 232432 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.843 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402528.8432596, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:48 compute-2 nova_compute[232428]: 2025-11-29 07:48:48.844 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Paused (Lifecycle Event)
Nov 29 07:48:48 compute-2 ceph-mon[77138]: pgmap v1329: 305 pgs: 305 active+clean; 659 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 709 KiB/s rd, 6.8 MiB/s wr, 149 op/s
Nov 29 07:48:49 compute-2 kernel: tap83ec9820-37 (unregistering): left promiscuous mode
Nov 29 07:48:49 compute-2 NetworkManager[48993]: <info>  [1764402529.5482] device (tap83ec9820-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.558 232432 DEBUG nova.compute.manager [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.558 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.559 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.559 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.559 232432 DEBUG nova.compute.manager [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.559 232432 WARNING nova.compute.manager [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.560 232432 DEBUG nova.compute.manager [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.560 232432 DEBUG nova.compute.manager [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing instance network info cache due to event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.560 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.560 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.560 232432 DEBUG nova.network.neutron [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:48:49 compute-2 ovn_controller[134375]: 2025-11-29T07:48:49Z|00085|binding|INFO|Releasing lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 from this chassis (sb_readonly=0)
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.563 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:49 compute-2 ovn_controller[134375]: 2025-11-29T07:48:49Z|00086|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 down in Southbound
Nov 29 07:48:49 compute-2 ovn_controller[134375]: 2025-11-29T07:48:49Z|00087|binding|INFO|Removing iface tap83ec9820-37 ovn-installed in OVS
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.573 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cb98fb5a-8fde-4aab-9a19-a76cfc927075'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '16', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.575 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.576 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.607 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[52788722-f28e-444f-813c-521524db6f75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.615 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:49 compute-2 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 07:48:49 compute-2 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 2.975s CPU time.
Nov 29 07:48:49 compute-2 systemd-machined[194747]: Machine qemu-9-instance-00000012 terminated.
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.644 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7479e45d-32bc-4e72-b91c-89868e933be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.648 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1281fe56-3bdc-4cea-8eae-5dc7f362a151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.688 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[32c1f626-df97-4f0d-bd86-328af40e32cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.708 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80d58ada-1ebb-4a3c-9eb4-aa7c268556e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541317, 'reachable_time': 38357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245222, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.714 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.714 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.714 232432 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.724 232432 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.727 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[97cff500-8c14-4f48-a95b-0b0a189da2cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64f65ccd-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541332, 'tstamp': 541332}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245223, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64f65ccd-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541336, 'tstamp': 541336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245223, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.729 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.731 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.735 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.736 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.736 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.737 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:48:49.737 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:48:49 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a: No such file or directory
Nov 29 07:48:49 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a: No such file or directory
Nov 29 07:48:49 compute-2 virtqemud[231977]: Cannot recv data: Input/output error
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.834 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.834 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Nov 29 07:48:49 compute-2 nova_compute[232428]: 2025-11-29 07:48:49.835 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Nov 29 07:48:49 compute-2 ceph-mon[77138]: pgmap v1330: 305 pgs: 305 active+clean; 659 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 6.8 MiB/s wr, 125 op/s
Nov 29 07:48:50 compute-2 nova_compute[232428]: 2025-11-29 07:48:50.208 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:50 compute-2 nova_compute[232428]: 2025-11-29 07:48:50.227 232432 DEBUG nova.virt.libvirt.guest [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '738ca4a4-91f6-4476-a500-4d85c8eb00ef' (instance-00000012) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Nov 29 07:48:50 compute-2 nova_compute[232428]: 2025-11-29 07:48:50.228 232432 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation has completed
Nov 29 07:48:50 compute-2 nova_compute[232428]: 2025-11-29 07:48:50.228 232432 INFO nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] _post_live_migration() is started..
Nov 29 07:48:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.659 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.659 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.660 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.661 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.661 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.661 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.661 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.662 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.662 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.662 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.662 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.663 232432 WARNING nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.663 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.663 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.664 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.664 232432 DEBUG oslo_concurrency.lockutils [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.664 232432 DEBUG nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.664 232432 WARNING nova.compute.manager [req-711c678a-949e-4b38-ada4-66a36f3bde95 req-d49f3880-314d-4144-bee5-ab909cb1d132 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.789 232432 DEBUG nova.compute.manager [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.789 232432 DEBUG oslo_concurrency.lockutils [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.790 232432 DEBUG oslo_concurrency.lockutils [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.790 232432 DEBUG oslo_concurrency.lockutils [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.790 232432 DEBUG nova.compute.manager [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:51 compute-2 nova_compute[232428]: 2025-11-29 07:48:51.791 232432 DEBUG nova.compute.manager [req-e62cbd5d-e3a9-4f4c-bc42-4128370184c2 req-dfcff532-f2c8-4e24-9ffe-ac46c3b06eb3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:48:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:52 compute-2 ceph-mon[77138]: pgmap v1331: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.9 MiB/s wr, 299 op/s
Nov 29 07:48:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 07:48:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:52 compute-2 nova_compute[232428]: 2025-11-29 07:48:52.621 232432 DEBUG nova.network.neutron [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updated VIF entry in instance network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:48:52 compute-2 nova_compute[232428]: 2025-11-29 07:48:52.622 232432 DEBUG nova.network.neutron [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:52 compute-2 nova_compute[232428]: 2025-11-29 07:48:52.643 232432 DEBUG oslo_concurrency.lockutils [req-85d55b27-2e93-450f-a3ef-e50354656775 req-1aded465-2536-448a-b05b-d16acd251742 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.135 232432 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Activated binding for port 83ec9820-3713-4570-ab8a-a88fba3f29c9 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.135 232432 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.137 232432 DEBUG nova.virt.libvirt.vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:37Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.137 232432 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.139 232432 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.139 232432 DEBUG os_vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.142 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.143 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83ec9820-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.146 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.152 232432 INFO os_vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.152 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.153 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.153 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.153 232432 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.154 232432 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deleting instance files /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.154 232432 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deletion of /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del complete
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.173 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:53 compute-2 ceph-mon[77138]: osdmap e150: 3 total, 3 up, 3 in
Nov 29 07:48:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2309254414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.559 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "5ad16860-47e5-45db-91e0-9e1c943dba38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.560 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.561 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "5ad16860-47e5-45db-91e0-9e1c943dba38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.562 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.562 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.565 232432 INFO nova.compute.manager [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Terminating instance
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.566 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "refresh_cache-5ad16860-47e5-45db-91e0-9e1c943dba38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.567 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquired lock "refresh_cache-5ad16860-47e5-45db-91e0-9e1c943dba38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.567 232432 DEBUG nova.network.neutron [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.994 232432 DEBUG nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.995 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.995 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.995 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.995 232432 DEBUG nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.996 232432 WARNING nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.996 232432 DEBUG nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.996 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.996 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.997 232432 DEBUG oslo_concurrency.lockutils [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.997 232432 DEBUG nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:48:53 compute-2 nova_compute[232428]: 2025-11-29 07:48:53.997 232432 WARNING nova.compute.manager [req-b005f22a-1d03-4455-a95c-e4c194c93e83 req-b04b067c-928c-43aa-b039-9b418e5977c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.
Nov 29 07:48:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:48:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:54.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:48:54 compute-2 ceph-mon[77138]: pgmap v1333: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 290 op/s
Nov 29 07:48:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/702586891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:54 compute-2 nova_compute[232428]: 2025-11-29 07:48:54.554 232432 DEBUG nova.network.neutron [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:54 compute-2 podman[245239]: 2025-11-29 07:48:54.724995011 +0000 UTC m=+0.130521167 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:48:55 compute-2 nova_compute[232428]: 2025-11-29 07:48:55.556 232432 DEBUG nova.network.neutron [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:55 compute-2 nova_compute[232428]: 2025-11-29 07:48:55.574 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Releasing lock "refresh_cache-5ad16860-47e5-45db-91e0-9e1c943dba38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:55 compute-2 nova_compute[232428]: 2025-11-29 07:48:55.575 232432 DEBUG nova.compute.manager [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:48:55 compute-2 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 29 07:48:55 compute-2 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000018.scope: Consumed 9.731s CPU time.
Nov 29 07:48:55 compute-2 systemd-machined[194747]: Machine qemu-10-instance-00000018 terminated.
Nov 29 07:48:55 compute-2 nova_compute[232428]: 2025-11-29 07:48:55.806 232432 INFO nova.virt.libvirt.driver [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance destroyed successfully.
Nov 29 07:48:55 compute-2 nova_compute[232428]: 2025-11-29 07:48:55.808 232432 DEBUG nova.objects.instance [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'resources' on Instance uuid 5ad16860-47e5-45db-91e0-9e1c943dba38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:56.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.257 232432 INFO nova.virt.libvirt.driver [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Deleting instance files /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38_del
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.258 232432 INFO nova.virt.libvirt.driver [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Deletion of /var/lib/nova/instances/5ad16860-47e5-45db-91e0-9e1c943dba38_del complete
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.329 232432 INFO nova.compute.manager [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.329 232432 DEBUG oslo.service.loopingcall [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.330 232432 DEBUG nova.compute.manager [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.330 232432 DEBUG nova.network.neutron [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:48:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:56.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.539 232432 DEBUG nova.network.neutron [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:56 compute-2 ceph-mon[77138]: pgmap v1334: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 290 op/s
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.550 232432 DEBUG nova.network.neutron [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.567 232432 INFO nova.compute.manager [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Took 0.24 seconds to deallocate network for instance.
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.633 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.634 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.793 232432 DEBUG oslo_concurrency.processutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:56 compute-2 sudo[245289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:56 compute-2 sudo[245289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:56 compute-2 sudo[245289]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.835 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.837 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:56 compute-2 nova_compute[232428]: 2025-11-29 07:48:56.838 232432 DEBUG nova.compute.manager [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Going to confirm migration 4 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 07:48:56 compute-2 sudo[245315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:56 compute-2 sudo[245315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:56 compute-2 sudo[245315]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.106 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.106 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.106 232432 DEBUG nova.network.neutron [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.107 232432 DEBUG nova.objects.instance [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'info_cache' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1325290595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.269 232432 DEBUG oslo_concurrency.processutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.277 232432 DEBUG nova.compute.provider_tree [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.295 232432 DEBUG nova.scheduler.client.report [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.329 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.344 232432 DEBUG nova.network.neutron [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.361 232432 INFO nova.scheduler.client.report [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Deleted allocations for instance 5ad16860-47e5-45db-91e0-9e1c943dba38
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.420 232432 DEBUG oslo_concurrency.lockutils [None req-27d00f37-6907-4a28-992e-705c6bc6cd89 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "5ad16860-47e5-45db-91e0-9e1c943dba38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3564154323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1325290595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.808 232432 DEBUG nova.network.neutron [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.823 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.824 232432 DEBUG nova.objects.instance [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:48:57 compute-2 nova_compute[232428]: 2025-11-29 07:48:57.932 232432 DEBUG nova.storage.rbd_utils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] removing snapshot(nova-resize) on rbd image(3efe6bb4-36be-4a30-832d-8da05e5baa50_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:48:58 compute-2 nova_compute[232428]: 2025-11-29 07:48:58.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:58 compute-2 nova_compute[232428]: 2025-11-29 07:48:58.176 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:48:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:48:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:48:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:48:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:48:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 07:48:58 compute-2 ceph-mon[77138]: pgmap v1335: 305 pgs: 305 active+clean; 667 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.3 MiB/s wr, 328 op/s
Nov 29 07:48:58 compute-2 nova_compute[232428]: 2025-11-29 07:48:58.636 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:58 compute-2 nova_compute[232428]: 2025-11-29 07:48:58.637 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:58 compute-2 nova_compute[232428]: 2025-11-29 07:48:58.777 232432 DEBUG oslo_concurrency.processutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/827341742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:59 compute-2 nova_compute[232428]: 2025-11-29 07:48:59.225 232432 DEBUG oslo_concurrency.processutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:59 compute-2 nova_compute[232428]: 2025-11-29 07:48:59.232 232432 DEBUG nova.compute.provider_tree [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:59 compute-2 nova_compute[232428]: 2025-11-29 07:48:59.248 232432 DEBUG nova.scheduler.client.report [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:59 compute-2 nova_compute[232428]: 2025-11-29 07:48:59.307 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:59 compute-2 ceph-mon[77138]: osdmap e151: 3 total, 3 up, 3 in
Nov 29 07:48:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/827341742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:59 compute-2 ceph-mon[77138]: pgmap v1337: 305 pgs: 305 active+clean; 667 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 39 KiB/s wr, 150 op/s
Nov 29 07:48:59 compute-2 nova_compute[232428]: 2025-11-29 07:48:59.908 232432 INFO nova.scheduler.client.report [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocation for migration b40c1dc1-dccd-49f3-9f64-b6116e439e87
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.004 232432 DEBUG oslo_concurrency.lockutils [None req-abe66130-b20e-44fb-af27-e56b0017af6f e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:00.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:49:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.905 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.905 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.906 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.944 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.945 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.945 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.946 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:49:00 compute-2 nova_compute[232428]: 2025-11-29 07:49:00.946 232432 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:01.254 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:01.256 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:49:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2180674508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.391 232432 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.468 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.469 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.651 232432 WARNING nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.653 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4591MB free_disk=20.686481475830078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.654 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.655 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.736 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration for instance 738ca4a4-91f6-4476-a500-4d85c8eb00ef refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.774 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.795 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.796 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration 9286f335-ba58-4309-8f17-66a3cac541bd is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.796 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.796 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:49:01 compute-2 nova_compute[232428]: 2025-11-29 07:49:01.853 232432 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:02.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2180674508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3662945304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.290 232432 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.298 232432 DEBUG nova.compute.provider_tree [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.346 232432 DEBUG nova.scheduler.client.report [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.385 232432 DEBUG nova.compute.resource_tracker [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.385 232432 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.390 232432 INFO nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.477 232432 INFO nova.scheduler.client.report [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Deleted allocation for migration 9286f335-ba58-4309-8f17-66a3cac541bd
Nov 29 07:49:02 compute-2 nova_compute[232428]: 2025-11-29 07:49:02.477 232432 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 07:49:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:02.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:03 compute-2 nova_compute[232428]: 2025-11-29 07:49:03.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:03 compute-2 nova_compute[232428]: 2025-11-29 07:49:03.178 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:03.295 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:03.296 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:03.296 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:03 compute-2 ceph-mon[77138]: pgmap v1338: 305 pgs: 305 active+clean; 598 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 38 KiB/s wr, 280 op/s
Nov 29 07:49:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3662945304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1874588273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:03 compute-2 nova_compute[232428]: 2025-11-29 07:49:03.636 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402528.6343586, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:03 compute-2 nova_compute[232428]: 2025-11-29 07:49:03.636 232432 INFO nova.compute.manager [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Stopped (Lifecycle Event)
Nov 29 07:49:03 compute-2 nova_compute[232428]: 2025-11-29 07:49:03.700 232432 DEBUG nova.compute.manager [None req-0ef1bb23-0345-4196-bf85-347a788fb0b4 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:04 compute-2 ceph-mon[77138]: pgmap v1339: 305 pgs: 305 active+clean; 598 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 35 KiB/s wr, 255 op/s
Nov 29 07:49:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:49:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:49:04 compute-2 nova_compute[232428]: 2025-11-29 07:49:04.832 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402529.8304913, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:04 compute-2 nova_compute[232428]: 2025-11-29 07:49:04.832 232432 INFO nova.compute.manager [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Stopped (Lifecycle Event)
Nov 29 07:49:04 compute-2 nova_compute[232428]: 2025-11-29 07:49:04.860 232432 DEBUG nova.compute.manager [None req-a3c71855-b3f6-4ffa-b39c-db322ff78bc1 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:05.258 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 07:49:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.424 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.425 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.425 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.425 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.425 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.426 232432 INFO nova.compute.manager [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Terminating instance
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.427 232432 DEBUG nova.compute.manager [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:49:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:06.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.730 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.731 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.764 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.867 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.867 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.878 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:49:06 compute-2 nova_compute[232428]: 2025-11-29 07:49:06.878 232432 INFO nova.compute.claims [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.019 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1858478865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.467 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.479 232432 DEBUG nova.compute.provider_tree [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.495 232432 DEBUG nova.scheduler.client.report [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.523 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.525 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.597 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.598 232432 DEBUG nova.network.neutron [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.620 232432 INFO nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.638 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.727 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.728 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.729 232432 INFO nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Creating image(s)
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.761 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.794 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.826 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.831 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2816915665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.900 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.901 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.902 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.903 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.934 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:07 compute-2 nova_compute[232428]: 2025-11-29 07:49:07.938 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:08 compute-2 nova_compute[232428]: 2025-11-29 07:49:08.042 232432 DEBUG nova.network.neutron [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:49:08 compute-2 nova_compute[232428]: 2025-11-29 07:49:08.044 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:49:08 compute-2 nova_compute[232428]: 2025-11-29 07:49:08.154 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:08 compute-2 nova_compute[232428]: 2025-11-29 07:49:08.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:09 compute-2 ceph-mon[77138]: pgmap v1340: 305 pgs: 305 active+clean; 554 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 729 KiB/s wr, 239 op/s
Nov 29 07:49:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4216977296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:09 compute-2 ceph-mon[77138]: osdmap e152: 3 total, 3 up, 3 in
Nov 29 07:49:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1858478865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:09 compute-2 ceph-mon[77138]: pgmap v1342: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 259 op/s
Nov 29 07:49:09 compute-2 kernel: tapda69d7f6-de (unregistering): left promiscuous mode
Nov 29 07:49:09 compute-2 NetworkManager[48993]: <info>  [1764402549.6302] device (tapda69d7f6-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.652 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00088|binding|INFO|Releasing lport da69d7f6-de64-485f-96a1-c51ad9274372 from this chassis (sb_readonly=0)
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00089|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 down in Southbound
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00090|binding|INFO|Releasing lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b from this chassis (sb_readonly=0)
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00091|binding|INFO|Setting lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b down in Southbound
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00092|binding|INFO|Removing iface tapda69d7f6-de ovn-installed in OVS
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00093|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 07:49:09 compute-2 ovn_controller[134375]: 2025-11-29T07:49:09Z|00094|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.664 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '12', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.668 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.670 143801 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.673 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.675 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5503e17a-c80a-46e4-86b8-870ce55c8965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:09.676 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace which is not needed anymore
Nov 29 07:49:09 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 29 07:49:09 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000f.scope: Consumed 6.836s CPU time.
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.703 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:09 compute-2 systemd-machined[194747]: Machine qemu-7-instance-0000000f terminated.
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.784 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:09 compute-2 podman[245599]: 2025-11-29 07:49:09.814462293 +0000 UTC m=+0.073028172 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [NOTICE]   (243593) : haproxy version is 2.8.14-c23fe91
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [NOTICE]   (243593) : path to executable is /usr/sbin/haproxy
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [WARNING]  (243593) : Exiting Master process...
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [WARNING]  (243593) : Exiting Master process...
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [ALERT]    (243593) : Current worker (243595) exited with code 143 (Terminated)
Nov 29 07:49:09 compute-2 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[243589]: [WARNING]  (243593) : All workers exited. Exiting... (0)
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.876 232432 INFO nova.virt.libvirt.driver [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance destroyed successfully.
Nov 29 07:49:09 compute-2 systemd[1]: libpod-1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d.scope: Deactivated successfully.
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.878 232432 DEBUG nova.objects.instance [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lazy-loading 'resources' on Instance uuid bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:09 compute-2 podman[245630]: 2025-11-29 07:49:09.88321905 +0000 UTC m=+0.065197957 container died 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.897 232432 DEBUG nova.virt.libvirt.vif [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:32Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.899 232432 DEBUG nova.network.os_vif_util [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.900 232432 DEBUG nova.network.os_vif_util [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.901 232432 DEBUG os_vif [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.904 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.905 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda69d7f6-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:49:09 compute-2 nova_compute[232428]: 2025-11-29 07:49:09.915 232432 INFO os_vif [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de')
Nov 29 07:49:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d-userdata-shm.mount: Deactivated successfully.
Nov 29 07:49:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-1856b34fadcb9566a00b308fc6d98b72f9def963387e83c19cf55b896ef25e92-merged.mount: Deactivated successfully.
Nov 29 07:49:09 compute-2 podman[245630]: 2025-11-29 07:49:09.94504262 +0000 UTC m=+0.127021527 container cleanup 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 07:49:09 compute-2 systemd[1]: libpod-conmon-1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d.scope: Deactivated successfully.
Nov 29 07:49:10 compute-2 podman[245685]: 2025-11-29 07:49:10.030245004 +0000 UTC m=+0.051583390 container remove 1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.037 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2168d2f9-06c4-423e-82b8-ef5c65bb6aff]: (4, ('Sat Nov 29 07:49:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d)\n1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d\nSat Nov 29 07:49:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d)\n1bf97e16c8d2fa8a40e07998f2a79d78c3753bea9279856f3068799a9f04c85d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.041 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9554eca4-2d70-4153-a9a4-09b5a9345454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.043 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 kernel: tap64f65ccd-70: left promiscuous mode
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.062 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.066 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1da7a15b-c500-4571-b0f3-6c08087d663f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.086 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c6da83-b4aa-4367-8966-51a80cc4c1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.088 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[55c22286-b172-4b44-be2c-eed8b2e2d0c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.108 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2dd812e-86bc-4573-befc-39212e6aa8e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541306, 'reachable_time': 44647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245703, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 systemd[1]: run-netns-ovnmeta\x2d64f65ccd\x2d7749\x2d48ca\x2dba36\x2d8eb6d9ce3610.mount: Deactivated successfully.
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.114 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.115 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[64c31394-e5f5-4f37-9060-392e335ae773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.117 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.118 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.119 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c747b224-36fe-47a0-bae7-d31ba480dd5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.120 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 namespace which is not needed anymore
Nov 29 07:49:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:10.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [NOTICE]   (243674) : haproxy version is 2.8.14-c23fe91
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [NOTICE]   (243674) : path to executable is /usr/sbin/haproxy
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [WARNING]  (243674) : Exiting Master process...
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [WARNING]  (243674) : Exiting Master process...
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [ALERT]    (243674) : Current worker (243676) exited with code 143 (Terminated)
Nov 29 07:49:10 compute-2 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[243670]: [WARNING]  (243674) : All workers exited. Exiting... (0)
Nov 29 07:49:10 compute-2 systemd[1]: libpod-a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4.scope: Deactivated successfully.
Nov 29 07:49:10 compute-2 podman[245721]: 2025-11-29 07:49:10.318443517 +0000 UTC m=+0.071544217 container died a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:10 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4-userdata-shm.mount: Deactivated successfully.
Nov 29 07:49:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-95757e79af54b925c34daec65bc5cd08a9303e6d0c7eba27ccec2b38ff0bcdab-merged.mount: Deactivated successfully.
Nov 29 07:49:10 compute-2 podman[245721]: 2025-11-29 07:49:10.366912987 +0000 UTC m=+0.120013657 container cleanup a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:10 compute-2 systemd[1]: libpod-conmon-a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4.scope: Deactivated successfully.
Nov 29 07:49:10 compute-2 podman[245751]: 2025-11-29 07:49:10.470135376 +0000 UTC m=+0.066190688 container remove a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.477 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9d814d-14ed-448b-8241-240214f107e2]: (4, ('Sat Nov 29 07:49:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 (a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4)\na479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4\nSat Nov 29 07:49:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 (a479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4)\na479e606729c17894e193af07b5cac696194d7dda06172536019d2f270bc22b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.479 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e38df610-0193-44f3-a398-237ba3e66094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.481 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce6bdb9b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.484 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 kernel: tapce6bdb9b-80: left promiscuous mode
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.498 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.500 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.502 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5fb0cd-1809-4184-84df-6afea35b3471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:10.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.528 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f82375de-a581-42ed-8812-d3e718ec2cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.530 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[11dfb8db-e8bb-44ee-b1b2-643198fc28e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.555 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fffcfb32-ebee-4dfc-b249-d35ea6822b38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541424, 'reachable_time': 39546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245766, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.559 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:49:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:49:10.559 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d6023dd7-153b-4035-963f-6d6fb2bf914b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.803 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402535.801987, 5ad16860-47e5-45db-91e0-9e1c943dba38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.804 232432 INFO nova.compute.manager [-] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] VM Stopped (Lifecycle Event)
Nov 29 07:49:10 compute-2 nova_compute[232428]: 2025-11-29 07:49:10.823 232432 DEBUG nova.compute.manager [None req-556300b7-ea4a-4c15-936e-f188090e8e19 - - - - - -] [instance: 5ad16860-47e5-45db-91e0-9e1c943dba38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:10 compute-2 systemd[1]: run-netns-ovnmeta\x2dce6bdb9b\x2d87f6\x2d4011\x2d9a56\x2d230cbc6f4771.mount: Deactivated successfully.
Nov 29 07:49:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:12.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:12 compute-2 ceph-mon[77138]: pgmap v1343: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Nov 29 07:49:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 07:49:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 07:49:12 compute-2 sudo[245768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:12 compute-2 sudo[245768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:12 compute-2 sudo[245768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:12 compute-2 sudo[245794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:12 compute-2 sudo[245794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:12 compute-2 sudo[245794]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:12 compute-2 podman[245792]: 2025-11-29 07:49:12.900171815 +0000 UTC m=+0.115326208 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:49:12 compute-2 sudo[245837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:12 compute-2 sudo[245837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:12 compute-2 sudo[245837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:13 compute-2 sudo[245864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:49:13 compute-2 sudo[245864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.182 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:13 compute-2 sudo[245864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.637 232432 DEBUG nova.compute.manager [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.637 232432 DEBUG oslo_concurrency.lockutils [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.638 232432 DEBUG oslo_concurrency.lockutils [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.638 232432 DEBUG oslo_concurrency.lockutils [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.638 232432 DEBUG nova.compute.manager [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.639 232432 DEBUG nova.compute.manager [req-7d159881-9a84-4e08-9251-a26517a4defb req-120d37c2-8df8-4b77-8774-5c29e6a8657a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:49:13 compute-2 ceph-mon[77138]: pgmap v1344: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 07:49:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/552190905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1753198554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:13 compute-2 nova_compute[232428]: 2025-11-29 07:49:13.958 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:14 compute-2 nova_compute[232428]: 2025-11-29 07:49:14.068 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] resizing rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:49:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:14.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:14 compute-2 nova_compute[232428]: 2025-11-29 07:49:14.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.062 232432 DEBUG nova.objects.instance [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.080 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.080 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Ensure instance console log exists: /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.081 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.081 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.081 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.083 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.088 232432 WARNING nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.093 232432 DEBUG nova.virt.libvirt.host [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.094 232432 DEBUG nova.virt.libvirt.host [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.097 232432 DEBUG nova.virt.libvirt.host [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.098 232432 DEBUG nova.virt.libvirt.host [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.099 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.099 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:49:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c0070fa-87a7-4b22-ba75-a8074bc210ec',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1639744064',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.100 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.100 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.100 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.101 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.101 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.101 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.102 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.102 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.102 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.103 232432 DEBUG nova.virt.hardware [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.106 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:15 compute-2 ceph-mon[77138]: pgmap v1345: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 07:49:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3851344992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.703 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.732 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:15 compute-2 nova_compute[232428]: 2025-11-29 07:49:15.736 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2453999259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.659 232432 DEBUG nova.compute.manager [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.660 232432 DEBUG oslo_concurrency.lockutils [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.660 232432 DEBUG oslo_concurrency.lockutils [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.661 232432 DEBUG oslo_concurrency.lockutils [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.661 232432 DEBUG nova.compute.manager [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.661 232432 WARNING nova.compute.manager [req-33ea9f8f-a096-4fad-894e-97f767fcbd51 req-154024be-7ed6-4182-a8e8-3c097476942e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state active and task_state deleting.
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.710 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.713 232432 DEBUG nova.objects.instance [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:16 compute-2 ceph-mon[77138]: pgmap v1346: 305 pgs: 305 active+clean; 494 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Nov 29 07:49:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3851344992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:49:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.730 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <uuid>b00071fa-b5cc-4219-97e7-f88445b8c5d7</uuid>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <name>instance-0000001a</name>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-571949222</nova:name>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:49:15</nova:creationTime>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:flavor name="tempest-test_resize_flavor_-1639744064">
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <system>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="serial">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="uuid">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </system>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <os>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </os>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <features>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </features>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk">
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </source>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config">
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </source>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:49:16 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/console.log" append="off"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <video>
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </video>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:49:16 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:49:16 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:49:16 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:49:16 compute-2 nova_compute[232428]: </domain>
Nov 29 07:49:16 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.784 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.785 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.785 232432 INFO nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Using config drive
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.813 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.982 232432 INFO nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Creating config drive at /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config
Nov 29 07:49:16 compute-2 nova_compute[232428]: 2025-11-29 07:49:16.989 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8vkck_2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:16 compute-2 sudo[246077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:16 compute-2 sudo[246077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:17 compute-2 sudo[246077]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:17 compute-2 sudo[246103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:17 compute-2 sudo[246103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:17 compute-2 sudo[246103]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:17 compute-2 nova_compute[232428]: 2025-11-29 07:49:17.135 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8vkck_2b" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:17 compute-2 nova_compute[232428]: 2025-11-29 07:49:17.174 232432 DEBUG nova.storage.rbd_utils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:17 compute-2 nova_compute[232428]: 2025-11-29 07:49:17.178 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:18 compute-2 nova_compute[232428]: 2025-11-29 07:49:18.184 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:18.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:18.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2453999259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:19 compute-2 ceph-mon[77138]: pgmap v1347: 305 pgs: 305 active+clean; 510 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 1.9 MiB/s wr, 104 op/s
Nov 29 07:49:19 compute-2 nova_compute[232428]: 2025-11-29 07:49:19.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:49:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:20.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:49:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:20.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:22.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:22.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:23 compute-2 nova_compute[232428]: 2025-11-29 07:49:23.185 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:23 compute-2 nova_compute[232428]: 2025-11-29 07:49:23.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:23 compute-2 nova_compute[232428]: 2025-11-29 07:49:23.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:23 compute-2 ceph-mon[77138]: pgmap v1348: 305 pgs: 305 active+clean; 510 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 560 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 07:49:23 compute-2 ceph-mon[77138]: pgmap v1349: 305 pgs: 305 active+clean; 298 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:24.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.494 232432 DEBUG oslo_concurrency.processutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.495 232432 INFO nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Deleting local config drive /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/disk.config because it was imported into RBD.
Nov 29 07:49:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:24.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:24 compute-2 systemd-machined[194747]: New machine qemu-11-instance-0000001a.
Nov 29 07:49:24 compute-2 systemd[1]: Started Virtual Machine qemu-11-instance-0000001a.
Nov 29 07:49:24 compute-2 ceph-mon[77138]: pgmap v1350: 305 pgs: 305 active+clean; 298 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Nov 29 07:49:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/670377070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/670377070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.875 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402549.8696375, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.877 232432 INFO nova.compute.manager [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Stopped (Lifecycle Event)
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.915 232432 DEBUG nova.compute.manager [None req-fca20d93-d193-4a8a-9015-65b1df0e7502 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.921 232432 DEBUG nova.compute.manager [None req-fca20d93-d193-4a8a-9015-65b1df0e7502 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:24 compute-2 nova_compute[232428]: 2025-11-29 07:49:24.940 232432 INFO nova.compute.manager [None req-fca20d93-d193-4a8a-9015-65b1df0e7502 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.541 232432 INFO nova.virt.libvirt.driver [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deleting instance files /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_del
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.543 232432 INFO nova.virt.libvirt.driver [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deletion of /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_del complete
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.597 232432 INFO nova.compute.manager [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Took 19.17 seconds to destroy the instance on the hypervisor.
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.598 232432 DEBUG oslo.service.loopingcall [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.598 232432 DEBUG nova.compute.manager [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:49:25 compute-2 nova_compute[232428]: 2025-11-29 07:49:25.598 232432 DEBUG nova.network.neutron [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:49:25 compute-2 podman[246203]: 2025-11-29 07:49:25.741540399 +0000 UTC m=+0.132183479 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.232 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402566.231704, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.232 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Resumed (Lifecycle Event)
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.234 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.234 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.237 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance spawned successfully.
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.237 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:49:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:26.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.270 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.271 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.298 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.303 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.323 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.324 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.324 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.325 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.325 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.326 232432 DEBUG nova.virt.libvirt.driver [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.356 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.357 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402566.233195, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.357 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Started (Lifecycle Event)
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.402 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.405 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.410 232432 INFO nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Took 18.68 seconds to spawn the instance on the hypervisor.
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.410 232432 DEBUG nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.439 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.474 232432 INFO nova.compute.manager [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Took 19.65 seconds to build instance.
Nov 29 07:49:26 compute-2 nova_compute[232428]: 2025-11-29 07:49:26.488 232432 DEBUG oslo_concurrency.lockutils [None req-a5fcca38-8474-4657-ac68-19a646faf4da e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:49:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:26.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.229 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2612304135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.699 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.761 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.762 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4116964501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4116964501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.926 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.928 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4695MB free_disk=20.85523223876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.929 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:27 compute-2 nova_compute[232428]: 2025-11-29 07:49:27.929 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.012 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.012 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance b00071fa-b5cc-4219-97e7-f88445b8c5d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.013 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.013 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.051 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:28.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4267146816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.485 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.497 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.516 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.541 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:49:28 compute-2 nova_compute[232428]: 2025-11-29 07:49:28.541 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:29 compute-2 ceph-mon[77138]: pgmap v1351: 305 pgs: 305 active+clean; 298 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.542 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.786 232432 DEBUG nova.network.neutron [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.810 232432 INFO nova.compute.manager [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Took 4.21 seconds to deallocate network for instance.
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.857 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.858 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:29 compute-2 nova_compute[232428]: 2025-11-29 07:49:29.970 232432 DEBUG oslo_concurrency.processutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:30.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1888033956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:30 compute-2 nova_compute[232428]: 2025-11-29 07:49:30.423 232432 DEBUG oslo_concurrency.processutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:30 compute-2 nova_compute[232428]: 2025-11-29 07:49:30.430 232432 DEBUG nova.compute.provider_tree [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:30 compute-2 nova_compute[232428]: 2025-11-29 07:49:30.734 232432 DEBUG nova.scheduler.client.report [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:30 compute-2 nova_compute[232428]: 2025-11-29 07:49:30.863 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:30 compute-2 nova_compute[232428]: 2025-11-29 07:49:30.897 232432 INFO nova.scheduler.client.report [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Deleted allocations for instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.008 232432 DEBUG oslo_concurrency.lockutils [None req-0400c21e-55a0-4221-8040-7e8ce02d5d87 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 24.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.769 232432 DEBUG nova.compute.manager [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.834 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.836 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.865 232432 DEBUG nova.objects.instance [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_requests' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.884 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.884 232432 INFO nova.compute.claims [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.885 232432 DEBUG nova.objects.instance [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.900 232432 DEBUG nova.objects.instance [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:31 compute-2 nova_compute[232428]: 2025-11-29 07:49:31.953 232432 INFO nova.compute.resource_tracker [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating resource usage from migration 04b4e207-0340-47fe-a457-f526a5462a18
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.087 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1158889372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:32.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.581 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.587 232432 DEBUG nova.compute.provider_tree [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.603 232432 DEBUG nova.scheduler.client.report [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.634 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.635 232432 INFO nova.compute.manager [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Migrating
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.669 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.669 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:32 compute-2 nova_compute[232428]: 2025-11-29 07:49:32.670 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.001 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.191 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.546 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.561 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.662 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 07:49:33 compute-2 nova_compute[232428]: 2025-11-29 07:49:33.667 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 07:49:34 compute-2 nova_compute[232428]: 2025-11-29 07:49:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:34.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:34.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:34 compute-2 ceph-mon[77138]: pgmap v1352: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 165 op/s
Nov 29 07:49:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2612304135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4116964501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4116964501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4267146816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:34 compute-2 ceph-mon[77138]: pgmap v1353: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 121 op/s
Nov 29 07:49:34 compute-2 nova_compute[232428]: 2025-11-29 07:49:34.920 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:36.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:36.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:37 compute-2 sudo[246350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:37 compute-2 sudo[246350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:37 compute-2 sudo[246350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:37 compute-2 sudo[246375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:37 compute-2 sudo[246375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:37 compute-2 sudo[246375]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:38 compute-2 nova_compute[232428]: 2025-11-29 07:49:38.193 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:38.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3239411733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2466301257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1888033956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1141279700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1578617882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: pgmap v1354: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 43 KiB/s wr, 193 op/s
Nov 29 07:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/407377134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:38 compute-2 ceph-mon[77138]: pgmap v1355: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 25 KiB/s wr, 120 op/s
Nov 29 07:49:39 compute-2 nova_compute[232428]: 2025-11-29 07:49:39.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1158889372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:39 compute-2 ceph-mon[77138]: pgmap v1356: 305 pgs: 305 active+clean; 297 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 383 KiB/s wr, 129 op/s
Nov 29 07:49:39 compute-2 ceph-mon[77138]: pgmap v1357: 305 pgs: 305 active+clean; 297 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 371 KiB/s wr, 108 op/s
Nov 29 07:49:39 compute-2 ceph-mon[77138]: pgmap v1358: 305 pgs: 305 active+clean; 302 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 87 op/s
Nov 29 07:49:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:40.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:40 compute-2 podman[246401]: 2025-11-29 07:49:40.685715573 +0000 UTC m=+0.083624722 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:49:41 compute-2 ceph-mon[77138]: pgmap v1359: 305 pgs: 305 active+clean; 336 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Nov 29 07:49:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2258598670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:42.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:42.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:43 compute-2 nova_compute[232428]: 2025-11-29 07:49:43.196 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:43 compute-2 sudo[246422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:43 compute-2 sudo[246422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:43 compute-2 sudo[246422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:43 compute-2 sudo[246453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:49:43 compute-2 sudo[246453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:43 compute-2 sudo[246453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:43 compute-2 podman[246423]: 2025-11-29 07:49:43.708182584 +0000 UTC m=+0.092592892 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 07:49:43 compute-2 nova_compute[232428]: 2025-11-29 07:49:43.723 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 07:49:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:49:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:49:44 compute-2 ceph-mon[77138]: pgmap v1360: 305 pgs: 305 active+clean; 336 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Nov 29 07:49:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/239262292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1630575762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:44 compute-2 nova_compute[232428]: 2025-11-29 07:49:44.927 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:46 compute-2 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 07:49:46 compute-2 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001a.scope: Consumed 15.201s CPU time.
Nov 29 07:49:46 compute-2 systemd-machined[194747]: Machine qemu-11-instance-0000001a terminated.
Nov 29 07:49:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.739 232432 INFO nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance shutdown successfully after 13 seconds.
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.746 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance destroyed successfully.
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.751 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.751 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.839 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.840 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:46 compute-2 nova_compute[232428]: 2025-11-29 07:49:46.840 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:47 compute-2 nova_compute[232428]: 2025-11-29 07:49:47.570 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:47 compute-2 nova_compute[232428]: 2025-11-29 07:49:47.570 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:47 compute-2 nova_compute[232428]: 2025-11-29 07:49:47.571 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:49:47 compute-2 nova_compute[232428]: 2025-11-29 07:49:47.718 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.059 232432 DEBUG nova.network.neutron [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:48 compute-2 ceph-mon[77138]: pgmap v1361: 305 pgs: 305 active+clean; 380 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 575 KiB/s rd, 4.9 MiB/s wr, 146 op/s
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.087 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.196 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.198 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.199 232432 INFO nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Creating image(s)
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.237 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:48 compute-2 nova_compute[232428]: 2025-11-29 07:49:48.244 232432 DEBUG nova.storage.rbd_utils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] creating snapshot(nova-resize) on rbd image(b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:49:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:49 compute-2 ceph-mon[77138]: pgmap v1362: 305 pgs: 305 active+clean; 408 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 29 07:49:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.101838) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589101952, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2306, "num_deletes": 253, "total_data_size": 5377370, "memory_usage": 5450768, "flush_reason": "Manual Compaction"}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589137242, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3511905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24060, "largest_seqno": 26361, "table_properties": {"data_size": 3502473, "index_size": 5862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20700, "raw_average_key_size": 20, "raw_value_size": 3483220, "raw_average_value_size": 3507, "num_data_blocks": 259, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402379, "oldest_key_time": 1764402379, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 35625 microseconds, and 16225 cpu microseconds.
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.137444) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3511905 bytes OK
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.137502) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.139177) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.139203) EVENT_LOG_v1 {"time_micros": 1764402589139194, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.139230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5367095, prev total WAL file size 5367095, number of live WAL files 2.
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.141545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3429KB)], [48(8821KB)]
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589141645, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 12545555, "oldest_snapshot_seqno": -1}
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.194 232432 DEBUG nova.objects.instance [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'trusted_certs' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5469 keys, 10467278 bytes, temperature: kUnknown
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589245123, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 10467278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10429212, "index_size": 23300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 139404, "raw_average_key_size": 25, "raw_value_size": 10329033, "raw_average_value_size": 1888, "num_data_blocks": 953, "num_entries": 5469, "num_filter_entries": 5469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.245509) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10467278 bytes
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.247163) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.1 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5993, records dropped: 524 output_compression: NoCompression
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.247186) EVENT_LOG_v1 {"time_micros": 1764402589247174, "job": 28, "event": "compaction_finished", "compaction_time_micros": 103586, "compaction_time_cpu_micros": 50780, "output_level": 6, "num_output_files": 1, "total_output_size": 10467278, "num_input_records": 5993, "num_output_records": 5469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589248231, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589251830, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.141451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.251899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.251909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.251913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.251917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:49:49.251921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.312 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.312 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Ensure instance console log exists: /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.313 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.313 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.313 232432 DEBUG oslo_concurrency.lockutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.315 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.319 232432 WARNING nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.325 232432 DEBUG nova.virt.libvirt.host [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.325 232432 DEBUG nova.virt.libvirt.host [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.331 232432 DEBUG nova.virt.libvirt.host [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.332 232432 DEBUG nova.virt.libvirt.host [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.333 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.333 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.334 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.334 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.334 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.334 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.334 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.335 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.335 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.335 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.335 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.335 232432 DEBUG nova.virt.hardware [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.336 232432 DEBUG nova.objects.instance [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'vcpu_model' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.352 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/32897915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.837 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.881 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:49 compute-2 nova_compute[232428]: 2025-11-29 07:49:49.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:50 compute-2 ceph-mon[77138]: osdmap e153: 3 total, 3 up, 3 in
Nov 29 07:49:50 compute-2 ceph-mon[77138]: pgmap v1364: 305 pgs: 305 active+clean; 409 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 204 op/s
Nov 29 07:49:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/32897915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/855104170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:50 compute-2 nova_compute[232428]: 2025-11-29 07:49:50.481 232432 DEBUG oslo_concurrency.processutils [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:50 compute-2 nova_compute[232428]: 2025-11-29 07:49:50.486 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <uuid>b00071fa-b5cc-4219-97e7-f88445b8c5d7</uuid>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <name>instance-0000001a</name>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-571949222</nova:name>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:49:49</nova:creationTime>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <system>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="serial">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="uuid">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </system>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <os>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </os>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <features>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </features>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk">
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </source>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config">
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </source>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:49:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/console.log" append="off"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <video>
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </video>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:49:50 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:49:50 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:49:50 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:49:50 compute-2 nova_compute[232428]: </domain>
Nov 29 07:49:50 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:49:50 compute-2 nova_compute[232428]: 2025-11-29 07:49:50.550 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:50 compute-2 nova_compute[232428]: 2025-11-29 07:49:50.551 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:50 compute-2 nova_compute[232428]: 2025-11-29 07:49:50.551 232432 INFO nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Using config drive
Nov 29 07:49:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:50.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:50 compute-2 systemd-machined[194747]: New machine qemu-12-instance-0000001a.
Nov 29 07:49:50 compute-2 systemd[1]: Started Virtual Machine qemu-12-instance-0000001a.
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.269 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for b00071fa-b5cc-4219-97e7-f88445b8c5d7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.270 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402591.268819, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.270 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Resumed (Lifecycle Event)
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.272 232432 DEBUG nova.compute.manager [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.278 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance running successfully.
Nov 29 07:49:51 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.282 232432 DEBUG nova.virt.libvirt.guest [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.282 232432 DEBUG nova.virt.libvirt.driver [None req-b5f8ffb3-c28f-43d0-8317-453dd8b59344 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.309 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.314 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.370 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.371 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402591.2718382, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.372 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Started (Lifecycle Event)
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.420 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:51 compute-2 nova_compute[232428]: 2025-11-29 07:49:51.424 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/855104170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:52 compute-2 ceph-mon[77138]: pgmap v1365: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Nov 29 07:49:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:52.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:53 compute-2 nova_compute[232428]: 2025-11-29 07:49:53.201 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:53 compute-2 nova_compute[232428]: 2025-11-29 07:49:53.575 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:53 compute-2 nova_compute[232428]: 2025-11-29 07:49:53.576 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:53 compute-2 nova_compute[232428]: 2025-11-29 07:49:53.576 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:49:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:54 compute-2 ceph-mon[77138]: pgmap v1366: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Nov 29 07:49:54 compute-2 nova_compute[232428]: 2025-11-29 07:49:54.579 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:49:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:54.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:54 compute-2 nova_compute[232428]: 2025-11-29 07:49:54.930 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.574 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.603 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:55 compute-2 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 07:49:55 compute-2 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001a.scope: Consumed 5.062s CPU time.
Nov 29 07:49:55 compute-2 systemd-machined[194747]: Machine qemu-12-instance-0000001a terminated.
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.860 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance destroyed successfully.
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.861 232432 DEBUG nova.objects.instance [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.881 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.881 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:55 compute-2 nova_compute[232428]: 2025-11-29 07:49:55.902 232432 DEBUG nova.objects.instance [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.008 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:49:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:49:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/47207765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.497 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.508 232432 DEBUG nova.compute.provider_tree [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.532 232432 DEBUG nova.scheduler.client.report [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:56 compute-2 ceph-mon[77138]: pgmap v1367: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.5 MiB/s wr, 189 op/s
Nov 29 07:49:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/47207765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.600 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:56.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:56 compute-2 podman[246739]: 2025-11-29 07:49:56.737872525 +0000 UTC m=+0.133373041 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.780 232432 INFO nova.compute.manager [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Swapping old allocation on dict_keys(['77f31ad1-818f-4610-8dd1-3fbcd25133f2']) held by migration 04b4e207-0340-47fe-a457-f526a5462a18 for instance
Nov 29 07:49:56 compute-2 nova_compute[232428]: 2025-11-29 07:49:56.816 232432 DEBUG nova.scheduler.client.report [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Overwriting current allocation {'allocations': {'77f31ad1-818f-4610-8dd1-3fbcd25133f2': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 26}}, 'project_id': 'f7e8ae9fdefb4049959228954fb4250e', 'user_id': 'e1c26cd8138e4114b4801d377b39933a', 'consumer_generation': 1} on consumer b00071fa-b5cc-4219-97e7-f88445b8c5d7 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.032 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.033 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.033 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.228 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:49:57 compute-2 sudo[246767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:57 compute-2 sudo[246767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:57 compute-2 sudo[246767]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:57 compute-2 sudo[246792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:57 compute-2 sudo[246792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.484 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:57 compute-2 sudo[246792]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.485 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.503 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.522 232432 DEBUG nova.network.neutron [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.562 232432 DEBUG oslo_concurrency.lockutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.563 232432 DEBUG nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.668 232432 DEBUG nova.storage.rbd_utils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rolling back rbd image(b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.678 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.679 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.688 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.688 232432 INFO nova.compute.claims [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:49:57 compute-2 nova_compute[232428]: 2025-11-29 07:49:57.850 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:58 compute-2 ceph-mon[77138]: pgmap v1368: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 190 op/s
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.203 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:58.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4167272805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.375 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.383 232432 DEBUG nova.compute.provider_tree [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.410 232432 DEBUG nova.scheduler.client.report [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.438 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.439 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:49:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.492 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.493 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.525 232432 INFO nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:49:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:49:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:49:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.712 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:49:58 compute-2 nova_compute[232428]: 2025-11-29 07:49:58.881 232432 DEBUG nova.policy [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93506ec26b16451c91dc820b139e8707', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b2c58ae2e706424fa3147694fc571db0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:49:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4167272805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.207 232432 DEBUG nova.storage.rbd_utils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] removing snapshot(nova-resize) on rbd image(b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.362 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.364 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.364 232432 INFO nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Creating image(s)
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.392 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.422 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.450 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.454 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.523 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.524 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.525 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.525 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.556 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.561 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8dccacdf-63b1-4789-b72a-763e95713f24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:59 compute-2 nova_compute[232428]: 2025-11-29 07:49:59.933 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:00 compute-2 ceph-mon[77138]: pgmap v1369: 305 pgs: 305 active+clean; 411 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 448 KiB/s wr, 159 op/s
Nov 29 07:50:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.277 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Successfully created port: 0c33657b-e644-48aa-83dd-c0311b9ffd6e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:50:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 07:50:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:00.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.328 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8dccacdf-63b1-4789-b72a-763e95713f24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.767s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.422 232432 DEBUG nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.427 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] resizing rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.479 232432 WARNING nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.484 232432 DEBUG nova.virt.libvirt.host [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.485 232432 DEBUG nova.virt.libvirt.host [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.487 232432 DEBUG nova.virt.libvirt.host [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.488 232432 DEBUG nova.virt.libvirt.host [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.489 232432 DEBUG nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.489 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:49:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c0070fa-87a7-4b22-ba75-a8074bc210ec',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1639744064',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.490 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.490 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.491 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.491 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.491 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.491 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.492 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.492 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.492 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.492 232432 DEBUG nova.virt.hardware [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.493 232432 DEBUG nova.objects.instance [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'vcpu_model' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.518 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.621 232432 DEBUG nova.objects.instance [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'migration_context' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.640 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.641 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Ensure instance console log exists: /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.641 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.642 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:00 compute-2 nova_compute[232428]: 2025-11-29 07:50:00.642 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/836498526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.000 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.056 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:01 compute-2 ceph-mon[77138]: osdmap e154: 3 total, 3 up, 3 in
Nov 29 07:50:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/836498526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/746483395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.552 232432 DEBUG oslo_concurrency.processutils [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.557 232432 DEBUG nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <uuid>b00071fa-b5cc-4219-97e7-f88445b8c5d7</uuid>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <name>instance-0000001a</name>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-571949222</nova:name>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:50:00</nova:creationTime>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:flavor name="tempest-test_resize_flavor_-1639744064">
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <system>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="serial">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="uuid">b00071fa-b5cc-4219-97e7-f88445b8c5d7</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </system>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <os>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </os>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <features>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </features>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk">
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b00071fa-b5cc-4219-97e7-f88445b8c5d7_disk.config">
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:01 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7/console.log" append="off"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <video>
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </video>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:50:01 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:50:01 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:50:01 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:50:01 compute-2 nova_compute[232428]: </domain>
Nov 29 07:50:01 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.585 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Successfully updated port: 0c33657b-e644-48aa-83dd-c0311b9ffd6e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.604 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.604 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquired lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.604 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:01 compute-2 systemd-machined[194747]: New machine qemu-13-instance-0000001a.
Nov 29 07:50:01 compute-2 systemd[1]: Started Virtual Machine qemu-13-instance-0000001a.
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.713 232432 DEBUG nova.compute.manager [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-changed-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.715 232432 DEBUG nova.compute.manager [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Refreshing instance network info cache due to event network-changed-0c33657b-e644-48aa-83dd-c0311b9ffd6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.715 232432 DEBUG oslo_concurrency.lockutils [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:01 compute-2 nova_compute[232428]: 2025-11-29 07:50:01.788 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:02 compute-2 ceph-mon[77138]: pgmap v1371: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 449 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Nov 29 07:50:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/746483395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:02.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.529 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for b00071fa-b5cc-4219-97e7-f88445b8c5d7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.530 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402602.5282805, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.531 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Resumed (Lifecycle Event)
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.534 232432 DEBUG nova.compute.manager [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.541 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance running successfully.
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.541 232432 DEBUG nova.virt.libvirt.driver [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.554 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.560 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.609 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.609 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402602.5297058, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.610 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Started (Lifecycle Event)
Nov 29 07:50:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:02.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.646 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.651 232432 INFO nova.compute.manager [None req-4e9f2ece-62c5-445e-b07a-39a84fb5bc67 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance to original state: 'active'
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.656 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:02 compute-2 nova_compute[232428]: 2025-11-29 07:50:02.695 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.141 232432 DEBUG nova.network.neutron [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updating instance_info_cache with network_info: [{"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.164 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Releasing lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.165 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Instance network_info: |[{"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.166 232432 DEBUG oslo_concurrency.lockutils [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.166 232432 DEBUG nova.network.neutron [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Refreshing network info cache for port 0c33657b-e644-48aa-83dd-c0311b9ffd6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.169 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Start _get_guest_xml network_info=[{"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.173 232432 WARNING nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.183 232432 DEBUG nova.virt.libvirt.host [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.184 232432 DEBUG nova.virt.libvirt.host [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.192 232432 DEBUG nova.virt.libvirt.host [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.192 232432 DEBUG nova.virt.libvirt.host [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.194 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.194 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.194 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.195 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.195 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.195 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.196 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.196 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.196 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.196 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.197 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.197 232432 DEBUG nova.virt.hardware [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.200 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.231 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:03.296 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:03.297 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:03.297 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1667132660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.721 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.758 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:03 compute-2 nova_compute[232428]: 2025-11-29 07:50:03.762 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/587344794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.213 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.215 232432 DEBUG nova.virt.libvirt.vif [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1426395669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1426395669',id=28,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-qh58bk29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=8dccacdf-63b1-4789-b72a-763e95713f24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.215 232432 DEBUG nova.network.os_vif_util [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.216 232432 DEBUG nova.network.os_vif_util [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.218 232432 DEBUG nova.objects.instance [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.242 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <uuid>8dccacdf-63b1-4789-b72a-763e95713f24</uuid>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <name>instance-0000001c</name>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1426395669</nova:name>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:50:03</nova:creationTime>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:user uuid="93506ec26b16451c91dc820b139e8707">tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member</nova:user>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:project uuid="b2c58ae2e706424fa3147694fc571db0">tempest-UpdateMultiattachVolumeNegativeTest-1774120772</nova:project>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <nova:port uuid="0c33657b-e644-48aa-83dd-c0311b9ffd6e">
Nov 29 07:50:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <system>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="serial">8dccacdf-63b1-4789-b72a-763e95713f24</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="uuid">8dccacdf-63b1-4789-b72a-763e95713f24</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </system>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <os>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </os>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <features>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </features>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8dccacdf-63b1-4789-b72a-763e95713f24_disk">
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8dccacdf-63b1-4789-b72a-763e95713f24_disk.config">
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:2c:0e:ed"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <target dev="tap0c33657b-e6"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/console.log" append="off"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <video>
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </video>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:50:04 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:50:04 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:50:04 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:50:04 compute-2 nova_compute[232428]: </domain>
Nov 29 07:50:04 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.243 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Preparing to wait for external event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.243 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.243 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.243 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.244 232432 DEBUG nova.virt.libvirt.vif [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1426395669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1426395669',id=28,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-qh58bk29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=8dccacdf-63b1-4789-b72a-763e95713f24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.245 232432 DEBUG nova.network.os_vif_util [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.245 232432 DEBUG nova.network.os_vif_util [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.246 232432 DEBUG os_vif [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.246 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.247 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.247 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.250 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.251 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c33657b-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.251 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c33657b-e6, col_values=(('external_ids', {'iface-id': '0c33657b-e644-48aa-83dd-c0311b9ffd6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:0e:ed', 'vm-uuid': '8dccacdf-63b1-4789-b72a-763e95713f24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:04 compute-2 NetworkManager[48993]: <info>  [1764402604.2540] manager: (tap0c33657b-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.256 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.260 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.261 232432 INFO os_vif [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6')
Nov 29 07:50:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.382 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.382 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.382 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No VIF found with MAC fa:16:3e:2c:0e:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.383 232432 INFO nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Using config drive
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.412 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:04 compute-2 ceph-mon[77138]: pgmap v1372: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 449 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Nov 29 07:50:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1667132660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/587344794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.974 232432 INFO nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Creating config drive at /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config
Nov 29 07:50:04 compute-2 nova_compute[232428]: 2025-11-29 07:50:04.982 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlm0ed4u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.120 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlm0ed4u" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.155 232432 DEBUG nova.storage.rbd_utils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 8dccacdf-63b1-4789-b72a-763e95713f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.159 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config 8dccacdf-63b1-4789-b72a-763e95713f24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.353 232432 DEBUG oslo_concurrency.processutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config 8dccacdf-63b1-4789-b72a-763e95713f24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.354 232432 INFO nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Deleting local config drive /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24/disk.config because it was imported into RBD.
Nov 29 07:50:05 compute-2 kernel: tap0c33657b-e6: entered promiscuous mode
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.4068] manager: (tap0c33657b-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 systemd-udevd[247179]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:05 compute-2 ovn_controller[134375]: 2025-11-29T07:50:05Z|00095|binding|INFO|Claiming lport 0c33657b-e644-48aa-83dd-c0311b9ffd6e for this chassis.
Nov 29 07:50:05 compute-2 ovn_controller[134375]: 2025-11-29T07:50:05Z|00096|binding|INFO|0c33657b-e644-48aa-83dd-c0311b9ffd6e: Claiming fa:16:3e:2c:0e:ed 10.100.0.13
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.4292] device (tap0c33657b-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.4306] device (tap0c33657b-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.4414] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.4422] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.440 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:0e:ed 10.100.0.13'], port_security=['fa:16:3e:2c:0e:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8dccacdf-63b1-4789-b72a-763e95713f24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2c58ae2e706424fa3147694fc571db0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad0084cd-f9d9-4dc4-8cfd-f48e086021ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61f2e0e3-be06-454a-8b4e-1d0721b87b15, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0c33657b-e644-48aa-83dd-c0311b9ffd6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.441 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0c33657b-e644-48aa-83dd-c0311b9ffd6e in datapath 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d bound to our chassis
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.442 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d
Nov 29 07:50:05 compute-2 systemd-machined[194747]: New machine qemu-14-instance-0000001c.
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4090b295-a9b3-46f4-8a3e-d995870f182f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.469 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ccff1f0-61 in ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.471 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ccff1f0-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.471 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3294a0-658e-4a4c-91b2-b77b6ee31722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.472 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f363c3c5-6990-479f-b057-ac0b45fecdaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 systemd[1]: Started Virtual Machine qemu-14-instance-0000001c.
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.498 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8d6514-dbce-4818-96b8-7f4c7055c397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.530 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[54715c10-89eb-4c80-ae06-0d152e606c11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.574 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3c77a86d-f33b-4ea1-835d-dea0d7d3c521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.602 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c51a70d-f27b-47d4-8aa3-b88ce49e5040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.6072] manager: (tap5ccff1f0-60): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 07:50:05 compute-2 systemd-udevd[247334]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.636 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7af7b85a-c368-4be8-bc48-9555eef752c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.644 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c3845d91-1be9-4a03-84f9-5495994eb19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.670 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.6810] device (tap5ccff1f0-60): carrier: link connected
Nov 29 07:50:05 compute-2 ovn_controller[134375]: 2025-11-29T07:50:05Z|00097|binding|INFO|Setting lport 0c33657b-e644-48aa-83dd-c0311b9ffd6e ovn-installed in OVS
Nov 29 07:50:05 compute-2 ovn_controller[134375]: 2025-11-29T07:50:05Z|00098|binding|INFO|Setting lport 0c33657b-e644-48aa-83dd-c0311b9ffd6e up in Southbound
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.688 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[eea7d2f2-3bce-4a4d-87dd-2cccb3a08152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.708 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbe5c1b-e20f-4181-94be-571d0043ab44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ccff1f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ab:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557194, 'reachable_time': 20070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247353, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.729 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[469f119f-56f8-42db-b525-a36f82c66474]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:ab1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557194, 'tstamp': 557194}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247354, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.751 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf145bc2-31ec-4a08-8505-ddbe09de92f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ccff1f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ab:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557194, 'reachable_time': 20070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247365, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b83fc1de-42a0-4202-b2f9-dec3f3c46dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.884 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a151f2f5-bff2-4d93-8873-f140ba31a999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.885 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ccff1f0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.886 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.886 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ccff1f0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:05 compute-2 NetworkManager[48993]: <info>  [1764402605.8885] manager: (tap5ccff1f0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 kernel: tap5ccff1f0-60: entered promiscuous mode
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.891 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ccff1f0-60, col_values=(('external_ids', {'iface-id': '09a417c6-99cf-4665-bfe6-2a3bd0914a3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.892 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 ovn_controller[134375]: 2025-11-29T07:50:05Z|00099|binding|INFO|Releasing lport 09a417c6-99cf-4665-bfe6-2a3bd0914a3c from this chassis (sb_readonly=0)
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.910 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.912 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[20dd3c57-9749-4159-a155-b18eda0e3d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.913 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:50:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:05.914 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'env', 'PROCESS_TAG=haproxy-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.952 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402605.9513953, 8dccacdf-63b1-4789-b72a-763e95713f24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.952 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] VM Started (Lifecycle Event)
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.982 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.988 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402605.9517276, 8dccacdf-63b1-4789-b72a-763e95713f24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:05 compute-2 nova_compute[232428]: 2025-11-29 07:50:05.989 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] VM Paused (Lifecycle Event)
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.003 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.007 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.030 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.039 232432 DEBUG nova.network.neutron [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updated VIF entry in instance network info cache for port 0c33657b-e644-48aa-83dd-c0311b9ffd6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.040 232432 DEBUG nova.network.neutron [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updating instance_info_cache with network_info: [{"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:06 compute-2 nova_compute[232428]: 2025-11-29 07:50:06.064 232432 DEBUG oslo_concurrency.lockutils [req-045ed142-247b-4b92-9258-037a9c3c4abb req-d71ca336-320a-48e6-9b2e-5526eb50307e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:06.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:06 compute-2 podman[247429]: 2025-11-29 07:50:06.347752 +0000 UTC m=+0.058918577 container create 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:50:06 compute-2 systemd[1]: Started libpod-conmon-1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9.scope.
Nov 29 07:50:06 compute-2 podman[247429]: 2025-11-29 07:50:06.315788429 +0000 UTC m=+0.026955026 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:50:06 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:50:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc9797b33cfac36d97bc2fcf3cc71a686ae731f593fe4d32a29aee95fa77c40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:06 compute-2 podman[247429]: 2025-11-29 07:50:06.446916217 +0000 UTC m=+0.158082804 container init 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:50:06 compute-2 podman[247429]: 2025-11-29 07:50:06.454849366 +0000 UTC m=+0.166015943 container start 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:50:06 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [NOTICE]   (247449) : New worker (247451) forked
Nov 29 07:50:06 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [NOTICE]   (247449) : Loading success.
Nov 29 07:50:06 compute-2 ceph-mon[77138]: pgmap v1373: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 487 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 184 op/s
Nov 29 07:50:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:07 compute-2 ceph-mon[77138]: pgmap v1374: 305 pgs: 305 active+clean; 488 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 188 op/s
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.820 232432 DEBUG nova.compute.manager [req-46639a60-f481-41f2-b940-26ecb0037ba2 req-04a798e1-1276-4ac0-bb78-d47916ed3b21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.821 232432 DEBUG oslo_concurrency.lockutils [req-46639a60-f481-41f2-b940-26ecb0037ba2 req-04a798e1-1276-4ac0-bb78-d47916ed3b21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.821 232432 DEBUG oslo_concurrency.lockutils [req-46639a60-f481-41f2-b940-26ecb0037ba2 req-04a798e1-1276-4ac0-bb78-d47916ed3b21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.822 232432 DEBUG oslo_concurrency.lockutils [req-46639a60-f481-41f2-b940-26ecb0037ba2 req-04a798e1-1276-4ac0-bb78-d47916ed3b21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.822 232432 DEBUG nova.compute.manager [req-46639a60-f481-41f2-b940-26ecb0037ba2 req-04a798e1-1276-4ac0-bb78-d47916ed3b21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Processing event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.823 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.833 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402607.832991, 8dccacdf-63b1-4789-b72a-763e95713f24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.834 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] VM Resumed (Lifecycle Event)
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.836 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.842 232432 INFO nova.virt.libvirt.driver [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Instance spawned successfully.
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.842 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.877 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.881 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.897 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.897 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.898 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.898 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.899 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.899 232432 DEBUG nova.virt.libvirt.driver [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.936 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.995 232432 INFO nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Took 8.63 seconds to spawn the instance on the hypervisor.
Nov 29 07:50:07 compute-2 nova_compute[232428]: 2025-11-29 07:50:07.996 232432 DEBUG nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:08.008 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:08 compute-2 nova_compute[232428]: 2025-11-29 07:50:08.008 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:08.010 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:50:08 compute-2 nova_compute[232428]: 2025-11-29 07:50:08.085 232432 INFO nova.compute.manager [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Took 10.47 seconds to build instance.
Nov 29 07:50:08 compute-2 nova_compute[232428]: 2025-11-29 07:50:08.118 232432 DEBUG oslo_concurrency.lockutils [None req-4d5bde62-1a77-41d6-b62f-9c1760d97665 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:08 compute-2 nova_compute[232428]: 2025-11-29 07:50:08.208 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:08.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4081036947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.258 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:09 compute-2 ceph-mon[77138]: pgmap v1375: 305 pgs: 305 active+clean; 499 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Nov 29 07:50:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3704285363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.979 232432 DEBUG nova.compute.manager [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.980 232432 DEBUG oslo_concurrency.lockutils [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.981 232432 DEBUG oslo_concurrency.lockutils [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.981 232432 DEBUG oslo_concurrency.lockutils [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.981 232432 DEBUG nova.compute.manager [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] No waiting events found dispatching network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:50:09 compute-2 nova_compute[232428]: 2025-11-29 07:50:09.982 232432 WARNING nova.compute.manager [req-689a83c8-7900-4640-9f6c-2ac651535126 req-45cf2fe5-f315-49b3-b10e-6cad23ddc995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received unexpected event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e for instance with vm_state active and task_state None.
Nov 29 07:50:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:10.012 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:10.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/291087062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 07:50:11 compute-2 podman[247463]: 2025-11-29 07:50:11.72740581 +0000 UTC m=+0.108015685 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:50:12 compute-2 ceph-mon[77138]: osdmap e155: 3 total, 3 up, 3 in
Nov 29 07:50:12 compute-2 ceph-mon[77138]: pgmap v1377: 305 pgs: 305 active+clean; 530 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Nov 29 07:50:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:12.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:13 compute-2 nova_compute[232428]: 2025-11-29 07:50:13.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.186 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.256 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:14.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.447 232432 DEBUG nova.compute.manager [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-changed-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.448 232432 DEBUG nova.compute.manager [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Refreshing instance network info cache due to event network-changed-0c33657b-e644-48aa-83dd-c0311b9ffd6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.448 232432 DEBUG oslo_concurrency.lockutils [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.448 232432 DEBUG oslo_concurrency.lockutils [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:14 compute-2 nova_compute[232428]: 2025-11-29 07:50:14.448 232432 DEBUG nova.network.neutron [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Refreshing network info cache for port 0c33657b-e644-48aa-83dd-c0311b9ffd6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:50:14 compute-2 ceph-mon[77138]: pgmap v1378: 305 pgs: 305 active+clean; 530 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Nov 29 07:50:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 07:50:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:14.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:14 compute-2 podman[247482]: 2025-11-29 07:50:14.688008615 +0000 UTC m=+0.081337729 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.247 232432 DEBUG nova.compute.manager [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.379 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.379 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.400 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'pci_requests' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.421 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.422 232432 INFO nova.compute.claims [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.423 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'resources' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.435 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'numa_topology' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.459 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'pci_devices' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.508 232432 INFO nova.compute.resource_tracker [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating resource usage from migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.509 232432 DEBUG nova.compute.resource_tracker [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting to track incoming migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.593 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.875 232432 DEBUG nova.network.neutron [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updated VIF entry in instance network info cache for port 0c33657b-e644-48aa-83dd-c0311b9ffd6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.876 232432 DEBUG nova.network.neutron [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updating instance_info_cache with network_info: [{"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:15 compute-2 nova_compute[232428]: 2025-11-29 07:50:15.909 232432 DEBUG oslo_concurrency.lockutils [req-9af5db8e-e219-4914-a4f4-ffb57bd7805b req-fc754b37-37a5-498d-9f0c-47528a9fb215 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8dccacdf-63b1-4789-b72a-763e95713f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2740876082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:16 compute-2 nova_compute[232428]: 2025-11-29 07:50:16.028 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:16 compute-2 nova_compute[232428]: 2025-11-29 07:50:16.034 232432 DEBUG nova.compute.provider_tree [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:16 compute-2 nova_compute[232428]: 2025-11-29 07:50:16.051 232432 DEBUG nova.scheduler.client.report [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:50:16 compute-2 nova_compute[232428]: 2025-11-29 07:50:16.104 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:16 compute-2 nova_compute[232428]: 2025-11-29 07:50:16.105 232432 INFO nova.compute.manager [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Migrating
Nov 29 07:50:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:16.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:16 compute-2 ceph-mon[77138]: osdmap e156: 3 total, 3 up, 3 in
Nov 29 07:50:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:16 compute-2 sshd-session[247526]: Accepted publickey for nova from 192.168.122.101 port 44524 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 07:50:16 compute-2 systemd-logind[787]: New session 51 of user nova.
Nov 29 07:50:16 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 07:50:16 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 07:50:16 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 07:50:17 compute-2 systemd[1]: Starting User Manager for UID 42436...
Nov 29 07:50:17 compute-2 systemd[247530]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 07:50:17 compute-2 systemd[247530]: Queued start job for default target Main User Target.
Nov 29 07:50:17 compute-2 systemd[247530]: Created slice User Application Slice.
Nov 29 07:50:17 compute-2 systemd[247530]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:50:17 compute-2 systemd[247530]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:50:17 compute-2 systemd[247530]: Reached target Paths.
Nov 29 07:50:17 compute-2 systemd[247530]: Reached target Timers.
Nov 29 07:50:17 compute-2 systemd[247530]: Starting D-Bus User Message Bus Socket...
Nov 29 07:50:17 compute-2 systemd[247530]: Starting Create User's Volatile Files and Directories...
Nov 29 07:50:17 compute-2 systemd[247530]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:50:17 compute-2 systemd[247530]: Reached target Sockets.
Nov 29 07:50:17 compute-2 systemd[247530]: Finished Create User's Volatile Files and Directories.
Nov 29 07:50:17 compute-2 systemd[247530]: Reached target Basic System.
Nov 29 07:50:17 compute-2 systemd[1]: Started User Manager for UID 42436.
Nov 29 07:50:17 compute-2 systemd[247530]: Reached target Main User Target.
Nov 29 07:50:17 compute-2 systemd[247530]: Startup finished in 196ms.
Nov 29 07:50:17 compute-2 systemd[1]: Started Session 51 of User nova.
Nov 29 07:50:17 compute-2 sshd-session[247526]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 07:50:17 compute-2 sshd-session[247546]: Received disconnect from 192.168.122.101 port 44524:11: disconnected by user
Nov 29 07:50:17 compute-2 sshd-session[247546]: Disconnected from user nova 192.168.122.101 port 44524
Nov 29 07:50:17 compute-2 sshd-session[247526]: pam_unix(sshd:session): session closed for user nova
Nov 29 07:50:17 compute-2 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 07:50:17 compute-2 systemd-logind[787]: Session 51 logged out. Waiting for processes to exit.
Nov 29 07:50:17 compute-2 systemd-logind[787]: Removed session 51.
Nov 29 07:50:17 compute-2 sshd-session[247548]: Accepted publickey for nova from 192.168.122.101 port 44530 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 07:50:17 compute-2 systemd-logind[787]: New session 53 of user nova.
Nov 29 07:50:17 compute-2 systemd[1]: Started Session 53 of User nova.
Nov 29 07:50:17 compute-2 sshd-session[247548]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 07:50:17 compute-2 sshd-session[247551]: Received disconnect from 192.168.122.101 port 44530:11: disconnected by user
Nov 29 07:50:17 compute-2 sshd-session[247551]: Disconnected from user nova 192.168.122.101 port 44530
Nov 29 07:50:17 compute-2 sshd-session[247548]: pam_unix(sshd:session): session closed for user nova
Nov 29 07:50:17 compute-2 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 07:50:17 compute-2 systemd-logind[787]: Session 53 logged out. Waiting for processes to exit.
Nov 29 07:50:17 compute-2 systemd-logind[787]: Removed session 53.
Nov 29 07:50:17 compute-2 sudo[247552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:17 compute-2 sudo[247552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:17 compute-2 sudo[247552]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:17 compute-2 sudo[247578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:17 compute-2 sudo[247578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:17 compute-2 sudo[247578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:17 compute-2 ceph-mon[77138]: pgmap v1380: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.7 MiB/s wr, 282 op/s
Nov 29 07:50:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2740876082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:18 compute-2 nova_compute[232428]: 2025-11-29 07:50:18.214 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:18.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:19 compute-2 ceph-mon[77138]: pgmap v1381: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 268 op/s
Nov 29 07:50:19 compute-2 nova_compute[232428]: 2025-11-29 07:50:19.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:20 compute-2 ceph-mon[77138]: pgmap v1382: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 537 KiB/s wr, 192 op/s
Nov 29 07:50:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:20.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:20 compute-2 sshd-session[247604]: Invalid user sol from 45.148.10.240 port 53598
Nov 29 07:50:20 compute-2 sshd-session[247604]: Connection closed by invalid user sol 45.148.10.240 port 53598 [preauth]
Nov 29 07:50:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 07:50:21 compute-2 ovn_controller[134375]: 2025-11-29T07:50:21Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:0e:ed 10.100.0.13
Nov 29 07:50:21 compute-2 ovn_controller[134375]: 2025-11-29T07:50:21Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:0e:ed 10.100.0.13
Nov 29 07:50:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 07:50:22 compute-2 ceph-mon[77138]: osdmap e157: 3 total, 3 up, 3 in
Nov 29 07:50:22 compute-2 ceph-mon[77138]: pgmap v1384: 305 pgs: 305 active+clean; 595 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.5 MiB/s wr, 275 op/s
Nov 29 07:50:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:22.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:23 compute-2 ceph-mon[77138]: osdmap e158: 3 total, 3 up, 3 in
Nov 29 07:50:23 compute-2 nova_compute[232428]: 2025-11-29 07:50:23.193 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:23 compute-2 nova_compute[232428]: 2025-11-29 07:50:23.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:23 compute-2 nova_compute[232428]: 2025-11-29 07:50:23.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:24 compute-2 ceph-mon[77138]: pgmap v1386: 305 pgs: 305 active+clean; 595 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 126 op/s
Nov 29 07:50:24 compute-2 nova_compute[232428]: 2025-11-29 07:50:24.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:24 compute-2 nova_compute[232428]: 2025-11-29 07:50:24.267 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:24.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:24.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:50:26 compute-2 ceph-mon[77138]: pgmap v1387: 305 pgs: 305 active+clean; 622 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 11 MiB/s wr, 353 op/s
Nov 29 07:50:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:26.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.607 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.607 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.607 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.607 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:26.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:26 compute-2 nova_compute[232428]: 2025-11-29 07:50:26.886 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3480715856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:27 compute-2 ceph-mon[77138]: osdmap e159: 3 total, 3 up, 3 in
Nov 29 07:50:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1167011889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.632 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.646 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.646 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.647 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.647 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.647 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:27 compute-2 nova_compute[232428]: 2025-11-29 07:50:27.647 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:50:27 compute-2 podman[247610]: 2025-11-29 07:50:27.761264 +0000 UTC m=+0.153811371 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:50:27 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 07:50:27 compute-2 systemd[247530]: Activating special unit Exit the Session...
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped target Main User Target.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped target Basic System.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped target Paths.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped target Sockets.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped target Timers.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:50:27 compute-2 systemd[247530]: Closed D-Bus User Message Bus Socket.
Nov 29 07:50:27 compute-2 systemd[247530]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:50:27 compute-2 systemd[247530]: Removed slice User Application Slice.
Nov 29 07:50:27 compute-2 systemd[247530]: Reached target Shutdown.
Nov 29 07:50:27 compute-2 systemd[247530]: Finished Exit the Session.
Nov 29 07:50:27 compute-2 systemd[247530]: Reached target Exit the Session.
Nov 29 07:50:27 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 07:50:27 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 07:50:27 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 07:50:27 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 07:50:27 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 07:50:27 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 07:50:27 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.230 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.231 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:28.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:28 compute-2 ceph-mon[77138]: pgmap v1389: 305 pgs: 305 active+clean; 622 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.8 MiB/s wr, 302 op/s
Nov 29 07:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/184874410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2852710428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2852710428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2745027761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:28.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2896903068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.700 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.801 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.802 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.805 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.805 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.972 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.973 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4383MB free_disk=20.704662322998047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.974 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:28 compute-2 nova_compute[232428]: 2025-11-29 07:50:28.974 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.043 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration for instance 68def5bd-3a13-48c4-abe2-a7d5282f493b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.079 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating resource usage from migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.080 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting to track incoming migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.126 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance b00071fa-b5cc-4219-97e7-f88445b8c5d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.127 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 8dccacdf-63b1-4789-b72a-763e95713f24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.270 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.368 232432 WARNING nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 68def5bd-3a13-48c4-abe2-a7d5282f493b has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.368 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.368 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:50:29 compute-2 nova_compute[232428]: 2025-11-29 07:50:29.498 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2896903068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3165290517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/721227256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:29 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 07:50:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3812965692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2140946692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.001 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.012 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.037 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.071 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.072 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:30.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:30 compute-2 ceph-mon[77138]: pgmap v1390: 305 pgs: 305 active+clean; 608 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.2 MiB/s wr, 286 op/s
Nov 29 07:50:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3812965692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1045105352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2140946692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:30.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.964 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.964 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquired lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:30 compute-2 nova_compute[232428]: 2025-11-29 07:50:30.965 232432 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.072 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:31 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.188 232432 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.580 232432 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.593 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Releasing lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:31 compute-2 ceph-mon[77138]: pgmap v1391: 305 pgs: 305 active+clean; 648 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.4 MiB/s wr, 297 op/s
Nov 29 07:50:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2253559021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.674 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.676 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.676 232432 INFO nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Creating image(s)
Nov 29 07:50:31 compute-2 nova_compute[232428]: 2025-11-29 07:50:31.724 232432 DEBUG nova.storage.rbd_utils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] creating snapshot(nova-resize) on rbd image(68def5bd-3a13-48c4-abe2-a7d5282f493b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.100 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.101 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.115 232432 DEBUG nova.objects.instance [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.150 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.862 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.863 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:32 compute-2 nova_compute[232428]: 2025-11-29 07:50:32.863 232432 INFO nova.compute.manager [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Attaching volume 3466633d-ae13-4d07-b35e-af08eaa91384 to /dev/vdb
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.048 232432 DEBUG os_brick.utils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.049 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.064 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.065 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[ab115dd1-56a9-4e6a-bdd6-aea0c60cb54e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.066 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.075 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.075 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[549117f2-05ad-4fd2-a681-870d103fda30]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.077 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.086 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.086 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae44a02-9b2e-49b2-a16e-a07455f7558a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.088 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[8ceda5b6-859d-4471-bf81-2c28e9e4c057]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.089 232432 DEBUG oslo_concurrency.processutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.122 232432 DEBUG oslo_concurrency.processutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.125 232432 DEBUG os_brick.initiator.connectors.lightos [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.126 232432 DEBUG os_brick.initiator.connectors.lightos [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.126 232432 DEBUG os_brick.initiator.connectors.lightos [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.126 232432 DEBUG os_brick.utils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:50:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1714154324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1150943183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1953835541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.127 232432 DEBUG nova.virt.block_device [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updating existing volume attachment record: e78f34e7-94c4-464f-888d-fd7bd7eb66a4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:50:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.221 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.401 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.527 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.527 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Ensure instance console log exists: /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.528 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.528 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.529 232432 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.531 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.537 232432 WARNING nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.541 232432 DEBUG nova.virt.libvirt.host [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.542 232432 DEBUG nova.virt.libvirt.host [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.545 232432 DEBUG nova.virt.libvirt.host [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.546 232432 DEBUG nova.virt.libvirt.host [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.547 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.547 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.548 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.548 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.549 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.549 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.549 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.549 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.550 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.550 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.550 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.551 232432 DEBUG nova.virt.hardware [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.551 232432 DEBUG nova.objects.instance [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.568 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2109071385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.851 232432 DEBUG nova.objects.instance [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.879 232432 DEBUG nova.virt.libvirt.driver [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Attempting to attach volume 3466633d-ae13-4d07-b35e-af08eaa91384 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:50:33 compute-2 nova_compute[232428]: 2025-11-29 07:50:33.883 232432 DEBUG nova.virt.libvirt.guest [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 07:50:33 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   </source>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 07:50:33 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   </auth>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 07:50:33 compute-2 nova_compute[232428]:   <shareable/>
Nov 29 07:50:33 compute-2 nova_compute[232428]: </disk>
Nov 29 07:50:33 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:50:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2042320260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.043 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.088 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.123 232432 DEBUG nova.virt.libvirt.driver [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.125 232432 DEBUG nova.virt.libvirt.driver [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.125 232432 DEBUG nova.virt.libvirt.driver [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.125 232432 DEBUG nova.virt.libvirt.driver [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No VIF found with MAC fa:16:3e:2c:0e:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:34 compute-2 ceph-mon[77138]: osdmap e160: 3 total, 3 up, 3 in
Nov 29 07:50:34 compute-2 ceph-mon[77138]: pgmap v1393: 305 pgs: 305 active+clean; 648 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 216 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Nov 29 07:50:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2109071385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2042320260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.273 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:34.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.500 232432 DEBUG oslo_concurrency.lockutils [None req-368fc972-29ee-4426-9c9f-275257de7670 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2579650902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.546 232432 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.550 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <uuid>68def5bd-3a13-48c4-abe2-a7d5282f493b</uuid>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <name>instance-0000001d</name>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:name>tempest-MigrationsAdminTest-server-1556132117</nova:name>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:50:33</nova:creationTime>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <system>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="serial">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="uuid">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </system>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <os>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </os>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <features>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </features>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk">
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config">
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </source>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:50:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/console.log" append="off"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <video>
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </video>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:50:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:50:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:50:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:50:34 compute-2 nova_compute[232428]: </domain>
Nov 29 07:50:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.597 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.598 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:34 compute-2 nova_compute[232428]: 2025-11-29 07:50:34.599 232432 INFO nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Using config drive
Nov 29 07:50:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:34 compute-2 systemd-machined[194747]: New machine qemu-15-instance-0000001d.
Nov 29 07:50:34 compute-2 systemd[1]: Started Virtual Machine qemu-15-instance-0000001d.
Nov 29 07:50:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 07:50:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2579650902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.623 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402635.622064, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.624 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Resumed (Lifecycle Event)
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.628 232432 DEBUG nova.compute.manager [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.634 232432 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance running successfully.
Nov 29 07:50:35 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.637 232432 DEBUG nova.virt.libvirt.guest [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.637 232432 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.662 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.666 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.708 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.708 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402635.6243825, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.708 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Started (Lifecycle Event)
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.733 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:35 compute-2 nova_compute[232428]: 2025-11-29 07:50:35.737 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:36 compute-2 ceph-mon[77138]: osdmap e161: 3 total, 3 up, 3 in
Nov 29 07:50:36 compute-2 ceph-mon[77138]: pgmap v1395: 305 pgs: 305 active+clean; 728 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.5 MiB/s wr, 330 op/s
Nov 29 07:50:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:36.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:36.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.004 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.004 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.004 232432 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.151 232432 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.453 232432 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.468 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:37 compute-2 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 07:50:37 compute-2 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Consumed 2.789s CPU time.
Nov 29 07:50:37 compute-2 systemd-machined[194747]: Machine qemu-15-instance-0000001d terminated.
Nov 29 07:50:37 compute-2 ceph-mon[77138]: pgmap v1396: 305 pgs: 305 active+clean; 728 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.6 MiB/s wr, 289 op/s
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.710 232432 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance destroyed successfully.
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.711 232432 DEBUG nova.objects.instance [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.728 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.729 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.746 232432 DEBUG nova.objects.instance [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:37 compute-2 sudo[247927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:37 compute-2 sudo[247927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:37 compute-2 sudo[247927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:37 compute-2 nova_compute[232428]: 2025-11-29 07:50:37.851 232432 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:37 compute-2 sudo[247952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:37 compute-2 sudo[247952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:37 compute-2 sudo[247952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.223 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3116162584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.289 232432 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.297 232432 DEBUG nova.compute.provider_tree [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.310 232432 DEBUG oslo_concurrency.lockutils [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.310 232432 DEBUG oslo_concurrency.lockutils [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.328 232432 DEBUG nova.scheduler.client.report [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.334 232432 INFO nova.compute.manager [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Detaching volume 3466633d-ae13-4d07-b35e-af08eaa91384
Nov 29 07:50:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.409 232432 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.585 232432 INFO nova.virt.block_device [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Attempting to driver detach volume 3466633d-ae13-4d07-b35e-af08eaa91384 from mountpoint /dev/vdb
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.595 232432 DEBUG nova.virt.libvirt.driver [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Attempting to detach device vdb from instance 8dccacdf-63b1-4789-b72a-763e95713f24 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.596 232432 DEBUG nova.virt.libvirt.guest [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   </source>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <shareable/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]: </disk>
Nov 29 07:50:38 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.604 232432 INFO nova.virt.libvirt.driver [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully detached device vdb from instance 8dccacdf-63b1-4789-b72a-763e95713f24 from the persistent domain config.
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.605 232432 DEBUG nova.virt.libvirt.driver [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8dccacdf-63b1-4789-b72a-763e95713f24 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.605 232432 DEBUG nova.virt.libvirt.guest [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   </source>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <shareable/>
Nov 29 07:50:38 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:50:38 compute-2 nova_compute[232428]: </disk>
Nov 29 07:50:38 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:50:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:38.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.709 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Creating tmpfile /var/lib/nova/instances/tmp01fw4h8b to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.710 232432 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.714 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764402638.7139087, 8dccacdf-63b1-4789-b72a-763e95713f24 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.715 232432 DEBUG nova.virt.libvirt.driver [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8dccacdf-63b1-4789-b72a-763e95713f24 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:50:38 compute-2 nova_compute[232428]: 2025-11-29 07:50:38.718 232432 INFO nova.virt.libvirt.driver [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully detached device vdb from instance 8dccacdf-63b1-4789-b72a-763e95713f24 from the live domain config.
Nov 29 07:50:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3116162584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.063 232432 DEBUG nova.objects.instance [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.107 232432 DEBUG oslo_concurrency.lockutils [None req-8b3f2946-f298-421b-99b4-94782250c596 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.277 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.562 232432 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.589 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.590 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:39 compute-2 nova_compute[232428]: 2025-11-29 07:50:39.590 232432 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:39 compute-2 ceph-mon[77138]: pgmap v1397: 305 pgs: 305 active+clean; 707 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 5.9 MiB/s wr, 317 op/s
Nov 29 07:50:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 07:50:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:40.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.923 232432 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.946 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.948 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.949 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Creating instance directory: /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.949 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Ensure instance console log exists: /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.950 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.950 232432 DEBUG nova.virt.libvirt.vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:34Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.951 232432 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.951 232432 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.952 232432 DEBUG os_vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.953 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.953 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.959 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.960 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape347928a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.961 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape347928a-5a, col_values=(('external_ids', {'iface-id': 'e347928a-5a81-4fdb-a7df-4ac039bb8bb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:42:48', 'vm-uuid': 'aca637ac-6ef0-42f8-aacf-e022e990aeba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.964 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:40 compute-2 NetworkManager[48993]: <info>  [1764402640.9657] manager: (tape347928a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:50:40 compute-2 ceph-mon[77138]: osdmap e162: 3 total, 3 up, 3 in
Nov 29 07:50:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/757320924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1453412454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.975 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.976 232432 INFO os_vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a')
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.976 232432 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Nov 29 07:50:40 compute-2 nova_compute[232428]: 2025-11-29 07:50:40.976 232432 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.741 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.742 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.742 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.742 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.743 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.744 232432 INFO nova.compute.manager [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Terminating instance
Nov 29 07:50:41 compute-2 nova_compute[232428]: 2025-11-29 07:50:41.745 232432 DEBUG nova.compute.manager [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:50:42 compute-2 kernel: tap0c33657b-e6 (unregistering): left promiscuous mode
Nov 29 07:50:42 compute-2 NetworkManager[48993]: <info>  [1764402642.0843] device (tap0c33657b-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 ovn_controller[134375]: 2025-11-29T07:50:42Z|00100|binding|INFO|Releasing lport 0c33657b-e644-48aa-83dd-c0311b9ffd6e from this chassis (sb_readonly=0)
Nov 29 07:50:42 compute-2 ovn_controller[134375]: 2025-11-29T07:50:42Z|00101|binding|INFO|Setting lport 0c33657b-e644-48aa-83dd-c0311b9ffd6e down in Southbound
Nov 29 07:50:42 compute-2 ovn_controller[134375]: 2025-11-29T07:50:42Z|00102|binding|INFO|Removing iface tap0c33657b-e6 ovn-installed in OVS
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.101 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.107 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:0e:ed 10.100.0.13'], port_security=['fa:16:3e:2c:0e:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8dccacdf-63b1-4789-b72a-763e95713f24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2c58ae2e706424fa3147694fc571db0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad0084cd-f9d9-4dc4-8cfd-f48e086021ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61f2e0e3-be06-454a-8b4e-1d0721b87b15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0c33657b-e644-48aa-83dd-c0311b9ffd6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.109 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0c33657b-e644-48aa-83dd-c0311b9ffd6e in datapath 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d unbound from our chassis
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.110 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.112 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3b8f5f-430c-4cf0-a06e-ad787da25b61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.113 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d namespace which is not needed anymore
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.119 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 29 07:50:42 compute-2 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Consumed 15.285s CPU time.
Nov 29 07:50:42 compute-2 systemd-machined[194747]: Machine qemu-14-instance-0000001c terminated.
Nov 29 07:50:42 compute-2 ceph-mon[77138]: pgmap v1399: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 648 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 5.9 MiB/s wr, 524 op/s
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.192 232432 INFO nova.virt.libvirt.driver [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Instance destroyed successfully.
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.193 232432 DEBUG nova.objects.instance [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'resources' on Instance uuid 8dccacdf-63b1-4789-b72a-763e95713f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:42 compute-2 podman[248007]: 2025-11-29 07:50:42.199609776 +0000 UTC m=+0.079862204 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.219 232432 DEBUG nova.virt.libvirt.vif [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1426395669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1426395669',id=28,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:08Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-qh58bk29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=8dccacdf-63b1-4789-b72a-763e95713f24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.220 232432 DEBUG nova.network.os_vif_util [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "address": "fa:16:3e:2c:0e:ed", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c33657b-e6", "ovs_interfaceid": "0c33657b-e644-48aa-83dd-c0311b9ffd6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.221 232432 DEBUG nova.network.os_vif_util [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.222 232432 DEBUG os_vif [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.224 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.224 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c33657b-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.226 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.237 232432 INFO os_vif [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:0e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0c33657b-e644-48aa-83dd-c0311b9ffd6e,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c33657b-e6')
Nov 29 07:50:42 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [NOTICE]   (247449) : haproxy version is 2.8.14-c23fe91
Nov 29 07:50:42 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [NOTICE]   (247449) : path to executable is /usr/sbin/haproxy
Nov 29 07:50:42 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [WARNING]  (247449) : Exiting Master process...
Nov 29 07:50:42 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [ALERT]    (247449) : Current worker (247451) exited with code 143 (Terminated)
Nov 29 07:50:42 compute-2 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[247445]: [WARNING]  (247449) : All workers exited. Exiting... (0)
Nov 29 07:50:42 compute-2 systemd[1]: libpod-1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9.scope: Deactivated successfully.
Nov 29 07:50:42 compute-2 podman[248056]: 2025-11-29 07:50:42.294422747 +0000 UTC m=+0.059251808 container died 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:50:42 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9-userdata-shm.mount: Deactivated successfully.
Nov 29 07:50:42 compute-2 systemd[1]: var-lib-containers-storage-overlay-bcc9797b33cfac36d97bc2fcf3cc71a686ae731f593fe4d32a29aee95fa77c40-merged.mount: Deactivated successfully.
Nov 29 07:50:42 compute-2 podman[248056]: 2025-11-29 07:50:42.340576393 +0000 UTC m=+0.105405444 container cleanup 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:50:42 compute-2 systemd[1]: libpod-conmon-1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9.scope: Deactivated successfully.
Nov 29 07:50:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:42.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:42 compute-2 podman[248103]: 2025-11-29 07:50:42.411055851 +0000 UTC m=+0.044818455 container remove 1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.418 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[947d0607-64eb-477a-8f77-c87588e921ce]: (4, ('Sat Nov 29 07:50:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d (1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9)\n1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9\nSat Nov 29 07:50:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d (1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9)\n1564f8515d76d2d7a8a4a981d3230660f0ba1351f8fa938ce63bb5703164cbb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.419 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4f02625f-5b23-4e2b-a38c-ab20db451bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.420 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ccff1f0-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.422 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 kernel: tap5ccff1f0-60: left promiscuous mode
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.443 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.446 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2507353-a75a-4483-857f-fecd15562714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.460 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c6dd01-e9dd-4371-9905-6435e516bfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.462 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10eb1589-1e41-400e-8d6d-94106d7a9ee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.480 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9d01891e-9cd1-4d6b-8e60-72c46f355a52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557183, 'reachable_time': 40004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248118, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 systemd[1]: run-netns-ovnmeta\x2d5ccff1f0\x2d6b4b\x2d41d4\x2da60d\x2d18a7eff7fe9d.mount: Deactivated successfully.
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.484 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:50:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:42.485 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1b7236-4e9a-4fa1-8d97-7bf0d3c45fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.685 232432 INFO nova.virt.libvirt.driver [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Deleting instance files /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24_del
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.686 232432 INFO nova.virt.libvirt.driver [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Deletion of /var/lib/nova/instances/8dccacdf-63b1-4789-b72a-763e95713f24_del complete
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.884 232432 INFO nova.compute.manager [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.885 232432 DEBUG oslo.service.loopingcall [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.885 232432 DEBUG nova.compute.manager [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:50:42 compute-2 nova_compute[232428]: 2025-11-29 07:50:42.885 232432 DEBUG nova.network.neutron [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.044 232432 DEBUG nova.compute.manager [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-unplugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.045 232432 DEBUG oslo_concurrency.lockutils [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.046 232432 DEBUG oslo_concurrency.lockutils [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.046 232432 DEBUG oslo_concurrency.lockutils [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.047 232432 DEBUG nova.compute.manager [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] No waiting events found dispatching network-vif-unplugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.047 232432 DEBUG nova.compute.manager [req-b0088ce6-d981-4d90-9d3b-19ad6d6500d5 req-cf0a7666-2d4b-40c0-b73d-011f9655af9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-unplugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.226 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:43 compute-2 ceph-mon[77138]: pgmap v1400: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 648 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 2.5 MiB/s wr, 400 op/s
Nov 29 07:50:43 compute-2 sudo[248121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:43 compute-2 sudo[248121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:43 compute-2 sudo[248121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:43 compute-2 sudo[248146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:43 compute-2 sudo[248146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:43 compute-2 sudo[248146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.944 232432 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Nov 29 07:50:43 compute-2 nova_compute[232428]: 2025-11-29 07:50:43.947 232432 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Nov 29 07:50:43 compute-2 sudo[248171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:43 compute-2 sudo[248171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:43 compute-2 sudo[248171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-2 systemd[1]: Starting libvirt proxy daemon...
Nov 29 07:50:44 compute-2 sudo[248196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:50:44 compute-2 sudo[248196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-2 systemd[1]: Started libvirt proxy daemon.
Nov 29 07:50:44 compute-2 kernel: tape347928a-5a: entered promiscuous mode
Nov 29 07:50:44 compute-2 NetworkManager[48993]: <info>  [1764402644.2568] manager: (tape347928a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 07:50:44 compute-2 systemd-udevd[248016]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:44 compute-2 NetworkManager[48993]: <info>  [1764402644.2751] device (tape347928a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:50:44 compute-2 NetworkManager[48993]: <info>  [1764402644.2759] device (tape347928a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:50:44 compute-2 ovn_controller[134375]: 2025-11-29T07:50:44Z|00103|binding|INFO|Claiming lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for this additional chassis.
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:44 compute-2 ovn_controller[134375]: 2025-11-29T07:50:44Z|00104|binding|INFO|e347928a-5a81-4fdb-a7df-4ac039bb8bb3: Claiming fa:16:3e:93:42:48 10.100.0.9
Nov 29 07:50:44 compute-2 ovn_controller[134375]: 2025-11-29T07:50:44Z|00105|binding|INFO|Claiming lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b for this additional chassis.
Nov 29 07:50:44 compute-2 ovn_controller[134375]: 2025-11-29T07:50:44Z|00106|binding|INFO|9ecac803-0ffe-4cdf-a724-cbb61954b01b: Claiming fa:16:3e:f4:69:f6 19.80.0.160
Nov 29 07:50:44 compute-2 systemd-machined[194747]: New machine qemu-16-instance-0000001e.
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:44 compute-2 systemd[1]: Started Virtual Machine qemu-16-instance-0000001e.
Nov 29 07:50:44 compute-2 ovn_controller[134375]: 2025-11-29T07:50:44Z|00107|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 ovn-installed in OVS
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:44.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.585 232432 DEBUG nova.network.neutron [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.605 232432 INFO nova.compute.manager [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Took 1.72 seconds to deallocate network for instance.
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.647 232432 DEBUG nova.compute.manager [req-2bbeaf97-9678-447b-8b78-26d112507af9 req-2f1d1479-4d67-4ffe-8f49-f8fa90f77bad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-deleted-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.666 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.667 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:44 compute-2 sudo[248196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-2 nova_compute[232428]: 2025-11-29 07:50:44.774 232432 DEBUG oslo_concurrency.processutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.154 232432 DEBUG nova.compute.manager [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.155 232432 DEBUG oslo_concurrency.lockutils [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.155 232432 DEBUG oslo_concurrency.lockutils [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.155 232432 DEBUG oslo_concurrency.lockutils [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.155 232432 DEBUG nova.compute.manager [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] No waiting events found dispatching network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.156 232432 WARNING nova.compute.manager [req-68d30a5f-d093-4d93-b8fe-6acae72c8f81 req-77981451-81a7-4871-b578-874ddbdc6ab3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Received unexpected event network-vif-plugged-0c33657b-e644-48aa-83dd-c0311b9ffd6e for instance with vm_state deleted and task_state None.
Nov 29 07:50:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4274841309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.261 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402645.2611072, aca637ac-6ef0-42f8-aacf-e022e990aeba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.262 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Started (Lifecycle Event)
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.267 232432 DEBUG oslo_concurrency.processutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.274 232432 DEBUG nova.compute.provider_tree [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.301 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.303 232432 DEBUG nova.scheduler.client.report [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.334 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.434 232432 INFO nova.scheduler.client.report [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Deleted allocations for instance 8dccacdf-63b1-4789-b72a-763e95713f24
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.523 232432 DEBUG oslo_concurrency.lockutils [None req-32ada1a9-9e14-4faf-88e4-87084d572ad7 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "8dccacdf-63b1-4789-b72a-763e95713f24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:45 compute-2 podman[248355]: 2025-11-29 07:50:45.73183548 +0000 UTC m=+0.103249476 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.794 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402645.7941704, aca637ac-6ef0-42f8-aacf-e022e990aeba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.795 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Resumed (Lifecycle Event)
Nov 29 07:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4274841309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:45 compute-2 ceph-mon[77138]: pgmap v1401: 305 pgs: 305 active+clean; 569 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.9 KiB/s wr, 366 op/s
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.822 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.828 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:45 compute-2 nova_compute[232428]: 2025-11-29 07:50:45.846 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 07:50:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.226 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00108|binding|INFO|Claiming lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for this chassis.
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00109|binding|INFO|e347928a-5a81-4fdb-a7df-4ac039bb8bb3: Claiming fa:16:3e:93:42:48 10.100.0.9
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00110|binding|INFO|Claiming lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b for this chassis.
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00111|binding|INFO|9ecac803-0ffe-4cdf-a724-cbb61954b01b: Claiming fa:16:3e:f4:69:f6 19.80.0.160
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00112|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 up in Southbound
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00113|binding|INFO|Setting lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b up in Southbound
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.500 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:42:48 10.100.0.9'], port_security=['fa:16:3e:93:42:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1284960005', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aca637ac-6ef0-42f8-aacf-e022e990aeba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1284960005', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e347928a-5a81-4fdb-a7df-4ac039bb8bb3) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.502 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:69:f6 19.80.0.160'], port_security=['fa:16:3e:f4:69:f6 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['e347928a-5a81-4fdb-a7df-4ac039bb8bb3'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1926672861', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1926672861', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c4847c33-f725-4948-8187-3e41c1ea344f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ecac803-0ffe-4cdf-a724-cbb61954b01b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.503 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.504 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.525 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ca50566b-4cf0-4112-a738-8b4961e71bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.526 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb746034c-01 in ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.528 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb746034c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.528 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd63ae5-3bab-43fa-92b3-d61694e0bc81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.529 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[70991a75-e456-4bf6-a53f-8e0b227b2cc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.551 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[983d5ee9-fa6e-4ae6-ac2e-ee675bfa82a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.568 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[64621699-dc72-4b08-a0f9-268b875877c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.601 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3d93bf-1d53-4a98-a1dd-843bd725a585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.608 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7cea0b-d2b5-40c1-bc3c-a59384c764dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 NetworkManager[48993]: <info>  [1764402647.6101] manager: (tapb746034c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 07:50:47 compute-2 systemd-udevd[248384]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.652 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fc53b447-b97e-4328-8bc9-c389c8b5b479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.655 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8258e0-7991-43df-8156-72a58c19dc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.674 232432 INFO nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Post operation of migration started
Nov 29 07:50:47 compute-2 NetworkManager[48993]: <info>  [1764402647.6880] device (tapb746034c-00): carrier: link connected
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.693 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b4e159-c836-4ab1-b7d7-b87f73bd0a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.712 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef40e1a9-e0a8-4be0-a8ad-eca45502d0ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 33136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248403, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.730 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccec24bc-4e95-41d7-8fd0-442ad8d63722]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:8cc1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561395, 'tstamp': 561395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248404, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.752 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36b38233-dc0d-474e-9150-045f93a8accf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 33136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248405, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.798 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[694719a1-1d55-4b2a-a1d6-1c2b6aaf22af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.865 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[67875a64-ee1f-453b-8120-666b0a2b1a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 NetworkManager[48993]: <info>  [1764402647.8696] manager: (tapb746034c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 07:50:47 compute-2 kernel: tapb746034c-00: entered promiscuous mode
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.874 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.875 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 ovn_controller[134375]: 2025-11-29T07:50:47Z|00114|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.876 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.877 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7346f51d-aa73-4764-9009-67d8773bfb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.878 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-b746034c-0143-4024-986c-673efea114a3
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID b746034c-0143-4024-986c-673efea114a3
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:50:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:47.878 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'env', 'PROCESS_TAG=haproxy-b746034c-0143-4024-986c-673efea114a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b746034c-0143-4024-986c-673efea114a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:50:47 compute-2 nova_compute[232428]: 2025-11-29 07:50:47.890 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 07:50:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1980099757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.094 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.095 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.095 232432 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.227 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:48 compute-2 podman[248438]: 2025-11-29 07:50:48.263379962 +0000 UTC m=+0.058004059 container create 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:50:48 compute-2 systemd[1]: Started libpod-conmon-9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5.scope.
Nov 29 07:50:48 compute-2 podman[248438]: 2025-11-29 07:50:48.231394719 +0000 UTC m=+0.026018846 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:50:48 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:50:48 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791219dc6bc483f099d20d1d3bc3e1b4fcfe77afea899c4e6e0b2fd877655b48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:48 compute-2 podman[248438]: 2025-11-29 07:50:48.357080127 +0000 UTC m=+0.151704244 container init 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:48 compute-2 podman[248438]: 2025-11-29 07:50:48.363578031 +0000 UTC m=+0.158202128 container start 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:50:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.384 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.385 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.386 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.386 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.386 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:48 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [NOTICE]   (248457) : New worker (248459) forked
Nov 29 07:50:48 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [NOTICE]   (248457) : Loading success.
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.388 232432 INFO nova.compute.manager [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Terminating instance
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.390 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.390 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.391 232432 DEBUG nova.network.neutron [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.435 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecac803-0ffe-4cdf-a724-cbb61954b01b in datapath 5791158c-7fc4-4c56-891c-c8aa0c79ed59 unbound from our chassis
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.438 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5791158c-7fc4-4c56-891c-c8aa0c79ed59
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.453 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[68860354-6bb8-4734-b634-3ffb0d1c7730]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.455 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5791158c-71 in ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.460 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5791158c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.460 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e70c7def-54fc-44fa-b212-477623d78d74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.463 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[57650b2e-98a2-4707-be17-f4d9da1bea06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.478 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[56cbd7f9-ea0c-4e35-aeda-04db5c0863fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.504 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9f78ed9e-159b-4064-88b8-3d389f749b0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.537 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[917311e7-44a0-41ae-b701-41511b675fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 NetworkManager[48993]: <info>  [1764402648.5459] manager: (tap5791158c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.545 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3b972e49-f6fe-461a-b088-8c45b144e372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 systemd-udevd[248394]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.587 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[16788270-4d7d-461f-b0c3-3499452c791f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.591 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1bae21-c0a8-4e48-8d3d-9d4775fc364a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.615 232432 DEBUG nova.network.neutron [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:48 compute-2 NetworkManager[48993]: <info>  [1764402648.6158] device (tap5791158c-70): carrier: link connected
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.625 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d0f40c-7ebb-40c6-9a16-1045e9d537da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.655 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[06f0f608-4da1-416a-8121-55178ea572be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5791158c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561488, 'reachable_time': 43107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248478, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.686 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[baa1116d-11ff-471f-ad19-357f67e2645b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:cdab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561488, 'tstamp': 561488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248479, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.716 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[939a4dce-fbc5-40b3-8dc8-ec2a3a4b7c5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5791158c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561488, 'reachable_time': 43107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248480, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.766 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf1de10-b6a5-4831-bc14-9336f384b2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.821 232432 DEBUG nova.network.neutron [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.840 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-b00071fa-b5cc-4219-97e7-f88445b8c5d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.842 232432 DEBUG nova.compute.manager [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.861 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1d75b32a-bb78-4b09-b5da-4513aa0de3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.863 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5791158c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.863 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.863 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5791158c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:48 compute-2 NetworkManager[48993]: <info>  [1764402648.8661] manager: (tap5791158c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 07:50:48 compute-2 kernel: tap5791158c-70: entered promiscuous mode
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5791158c-70, col_values=(('external_ids', {'iface-id': '898f98e2-e0cf-47a4-905a-1825318afc76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:48 compute-2 ovn_controller[134375]: 2025-11-29T07:50:48Z|00115|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.871 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.873 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.874 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[752c5974-7bc3-406c-90d5-44e0656e0e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.875 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-5791158c-7fc4-4c56-891c-c8aa0c79ed59
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 5791158c-7fc4-4c56-891c-c8aa0c79ed59
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:50:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:50:48.876 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'env', 'PROCESS_TAG=haproxy-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5791158c-7fc4-4c56-891c-c8aa0c79ed59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:50:48 compute-2 nova_compute[232428]: 2025-11-29 07:50:48.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:49 compute-2 ceph-mon[77138]: pgmap v1402: 305 pgs: 305 active+clean; 569 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.9 KiB/s wr, 366 op/s
Nov 29 07:50:49 compute-2 ceph-mon[77138]: osdmap e163: 3 total, 3 up, 3 in
Nov 29 07:50:49 compute-2 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 07:50:49 compute-2 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Consumed 15.976s CPU time.
Nov 29 07:50:49 compute-2 systemd-machined[194747]: Machine qemu-13-instance-0000001a terminated.
Nov 29 07:50:49 compute-2 nova_compute[232428]: 2025-11-29 07:50:49.277 232432 INFO nova.virt.libvirt.driver [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance destroyed successfully.
Nov 29 07:50:49 compute-2 nova_compute[232428]: 2025-11-29 07:50:49.277 232432 DEBUG nova.objects.instance [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid b00071fa-b5cc-4219-97e7-f88445b8c5d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:49 compute-2 podman[248513]: 2025-11-29 07:50:49.295964425 +0000 UTC m=+0.062905911 container create 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:49 compute-2 systemd[1]: Started libpod-conmon-2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318.scope.
Nov 29 07:50:49 compute-2 podman[248513]: 2025-11-29 07:50:49.265195421 +0000 UTC m=+0.032136957 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:50:49 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:50:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2febf88f4d4b5758b306e81fb1a52cf5bb5374d0ef4873651aebb2c5f56f3ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:49 compute-2 podman[248513]: 2025-11-29 07:50:49.55205412 +0000 UTC m=+0.318995626 container init 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 07:50:49 compute-2 podman[248513]: 2025-11-29 07:50:49.558348936 +0000 UTC m=+0.325290422 container start 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:50:49 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [NOTICE]   (248550) : New worker (248552) forked
Nov 29 07:50:49 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [NOTICE]   (248550) : Loading success.
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.007 232432 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.035 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.050 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.050 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.050 232432 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:50 compute-2 nova_compute[232428]: 2025-11-29 07:50:50.055 232432 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Nov 29 07:50:50 compute-2 virtqemud[231977]: Domain id=16 name='instance-0000001e' uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba is tainted: custom-monitor
Nov 29 07:50:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:50 compute-2 ceph-mon[77138]: pgmap v1404: 305 pgs: 305 active+clean; 554 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 698 KiB/s wr, 348 op/s
Nov 29 07:50:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:51 compute-2 nova_compute[232428]: 2025-11-29 07:50:51.063 232432 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 07:50:51 compute-2 ovn_controller[134375]: 2025-11-29T07:50:51Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:42:48 10.100.0.9
Nov 29 07:50:51 compute-2 ovn_controller[134375]: 2025-11-29T07:50:51Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:42:48 10.100.0.9
Nov 29 07:50:51 compute-2 ceph-mon[77138]: pgmap v1405: 305 pgs: 305 active+clean; 459 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 299 op/s
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.068 232432 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.073 232432 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.185 232432 DEBUG nova.objects.instance [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.228 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.345 232432 INFO nova.virt.libvirt.driver [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Deleting instance files /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7_del
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.346 232432 INFO nova.virt.libvirt.driver [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Deletion of /var/lib/nova/instances/b00071fa-b5cc-4219-97e7-f88445b8c5d7_del complete
Nov 29 07:50:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:50:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:52.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.452 232432 INFO nova.compute.manager [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Took 3.61 seconds to destroy the instance on the hypervisor.
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.454 232432 DEBUG oslo.service.loopingcall [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.454 232432 DEBUG nova.compute.manager [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.454 232432 DEBUG nova.network.neutron [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.614 232432 DEBUG nova.network.neutron [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.633 232432 DEBUG nova.network.neutron [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.650 232432 INFO nova.compute.manager [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Took 0.20 seconds to deallocate network for instance.
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.690 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.691 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.707 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402637.7061093, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.707 232432 INFO nova.compute.manager [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Stopped (Lifecycle Event)
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.729 232432 DEBUG nova.compute.manager [None req-b736fedb-a916-42e6-96e4-2416a8de8a5a - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:52 compute-2 nova_compute[232428]: 2025-11-29 07:50:52.762 232432 DEBUG oslo_concurrency.processutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3211507857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.219 232432 DEBUG oslo_concurrency.processutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.228 232432 DEBUG nova.compute.provider_tree [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.254 232432 DEBUG nova.scheduler.client.report [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.280 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3211507857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.310 232432 INFO nova.scheduler.client.report [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocations for instance b00071fa-b5cc-4219-97e7-f88445b8c5d7
Nov 29 07:50:53 compute-2 nova_compute[232428]: 2025-11-29 07:50:53.390 232432 DEBUG oslo_concurrency.lockutils [None req-b8229186-37a5-426c-9d5f-8f546860322a e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "b00071fa-b5cc-4219-97e7-f88445b8c5d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:53 compute-2 sudo[248587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:53 compute-2 sudo[248587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:53 compute-2 sudo[248587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:53 compute-2 sudo[248612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:50:53 compute-2 sudo[248612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:53 compute-2 sudo[248612]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:50:54 compute-2 ceph-mon[77138]: pgmap v1406: 305 pgs: 305 active+clean; 459 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 299 op/s
Nov 29 07:50:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3616219678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:54.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 07:50:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:50:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:54.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:50:55 compute-2 ceph-mon[77138]: osdmap e164: 3 total, 3 up, 3 in
Nov 29 07:50:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1374294681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/231599509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:56.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:56 compute-2 ceph-mon[77138]: pgmap v1408: 305 pgs: 305 active+clean; 364 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 311 op/s
Nov 29 07:50:57 compute-2 nova_compute[232428]: 2025-11-29 07:50:57.190 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402642.1888463, 8dccacdf-63b1-4789-b72a-763e95713f24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:57 compute-2 nova_compute[232428]: 2025-11-29 07:50:57.191 232432 INFO nova.compute.manager [-] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] VM Stopped (Lifecycle Event)
Nov 29 07:50:57 compute-2 nova_compute[232428]: 2025-11-29 07:50:57.213 232432 DEBUG nova.compute.manager [None req-df87e9e1-2291-4087-acc8-45c332b96f8d - - - - - -] [instance: 8dccacdf-63b1-4789-b72a-763e95713f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:57 compute-2 nova_compute[232428]: 2025-11-29 07:50:57.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:57 compute-2 ceph-mon[77138]: pgmap v1409: 305 pgs: 305 active+clean; 364 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 258 op/s
Nov 29 07:50:58 compute-2 sudo[248639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:58 compute-2 sudo[248639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:58 compute-2 sudo[248639]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:58 compute-2 sudo[248670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:58 compute-2 sudo[248670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:58 compute-2 sudo[248670]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:58 compute-2 podman[248663]: 2025-11-29 07:50:58.107427056 +0000 UTC m=+0.085373567 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:58 compute-2 nova_compute[232428]: 2025-11-29 07:50:58.231 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:50:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:50:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:58.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:50:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 07:51:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:00.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:00 compute-2 ceph-mon[77138]: osdmap e165: 3 total, 3 up, 3 in
Nov 29 07:51:00 compute-2 ceph-mon[77138]: pgmap v1411: 305 pgs: 305 active+clean; 373 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 142 op/s
Nov 29 07:51:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2843583701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 07:51:02 compute-2 ceph-mon[77138]: pgmap v1412: 305 pgs: 305 active+clean; 364 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.0 MiB/s wr, 247 op/s
Nov 29 07:51:02 compute-2 nova_compute[232428]: 2025-11-29 07:51:02.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:02.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:03 compute-2 ceph-mon[77138]: osdmap e166: 3 total, 3 up, 3 in
Nov 29 07:51:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2725795968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2725795968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:03 compute-2 nova_compute[232428]: 2025-11-29 07:51:03.237 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:03.297 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:03.298 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:03.299 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:04 compute-2 nova_compute[232428]: 2025-11-29 07:51:04.276 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402649.274394, b00071fa-b5cc-4219-97e7-f88445b8c5d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:04 compute-2 nova_compute[232428]: 2025-11-29 07:51:04.276 232432 INFO nova.compute.manager [-] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] VM Stopped (Lifecycle Event)
Nov 29 07:51:04 compute-2 nova_compute[232428]: 2025-11-29 07:51:04.317 232432 DEBUG nova.compute.manager [None req-0e6b47ab-f864-440c-826a-6881118e9345 - - - - - -] [instance: b00071fa-b5cc-4219-97e7-f88445b8c5d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:04.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:04 compute-2 ceph-mon[77138]: pgmap v1414: 305 pgs: 305 active+clean; 364 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Nov 29 07:51:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:05 compute-2 ceph-mon[77138]: pgmap v1415: 305 pgs: 305 active+clean; 283 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 226 op/s
Nov 29 07:51:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1419655568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1419655568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 07:51:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:06.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1419655568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1419655568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:06 compute-2 ceph-mon[77138]: osdmap e167: 3 total, 3 up, 3 in
Nov 29 07:51:07 compute-2 nova_compute[232428]: 2025-11-29 07:51:07.234 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3406527470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:07 compute-2 ceph-mon[77138]: pgmap v1417: 305 pgs: 305 active+clean; 283 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.7 MiB/s wr, 177 op/s
Nov 29 07:51:08 compute-2 nova_compute[232428]: 2025-11-29 07:51:08.280 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2157936712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:09 compute-2 ceph-mon[77138]: pgmap v1418: 305 pgs: 305 active+clean; 268 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Nov 29 07:51:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:11 compute-2 ceph-mon[77138]: pgmap v1419: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Nov 29 07:51:12 compute-2 nova_compute[232428]: 2025-11-29 07:51:12.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:12.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:12 compute-2 podman[248721]: 2025-11-29 07:51:12.663185778 +0000 UTC m=+0.061631192 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:51:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:13 compute-2 nova_compute[232428]: 2025-11-29 07:51:13.283 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:13 compute-2 ceph-mon[77138]: pgmap v1420: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 29 07:51:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:14.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:14.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.194 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.194 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.195 232432 INFO nova.compute.manager [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Unshelving
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.331 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.332 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.339 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'pci_requests' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.433 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.488 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.488 232432 INFO nova.compute.claims [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:51:15 compute-2 ceph-mon[77138]: pgmap v1421: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 07:51:15 compute-2 nova_compute[232428]: 2025-11-29 07:51:15.718 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1545301587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:16 compute-2 nova_compute[232428]: 2025-11-29 07:51:16.188 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:16 compute-2 nova_compute[232428]: 2025-11-29 07:51:16.197 232432 DEBUG nova.compute.provider_tree [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:16 compute-2 nova_compute[232428]: 2025-11-29 07:51:16.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:16.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:16 compute-2 nova_compute[232428]: 2025-11-29 07:51:16.557 232432 DEBUG nova.scheduler.client.report [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:16 compute-2 nova_compute[232428]: 2025-11-29 07:51:16.662 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:16 compute-2 podman[248764]: 2025-11-29 07:51:16.694816411 +0000 UTC m=+0.095383890 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1545301587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:17 compute-2 nova_compute[232428]: 2025-11-29 07:51:17.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:17 compute-2 ovn_controller[134375]: 2025-11-29T07:51:17Z|00116|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 07:51:17 compute-2 ovn_controller[134375]: 2025-11-29T07:51:17Z|00117|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 07:51:17 compute-2 nova_compute[232428]: 2025-11-29 07:51:17.448 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:17 compute-2 ovn_controller[134375]: 2025-11-29T07:51:17Z|00118|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 07:51:17 compute-2 ovn_controller[134375]: 2025-11-29T07:51:17Z|00119|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 07:51:17 compute-2 nova_compute[232428]: 2025-11-29 07:51:17.643 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:18 compute-2 ceph-mon[77138]: pgmap v1422: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Nov 29 07:51:18 compute-2 nova_compute[232428]: 2025-11-29 07:51:18.157 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:18 compute-2 nova_compute[232428]: 2025-11-29 07:51:18.158 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquired lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:18 compute-2 nova_compute[232428]: 2025-11-29 07:51:18.158 232432 DEBUG nova.network.neutron [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:51:18 compute-2 sudo[248786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:18 compute-2 sudo[248786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:18 compute-2 sudo[248786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:18 compute-2 sudo[248811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:18 compute-2 sudo[248811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:18 compute-2 sudo[248811]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:18 compute-2 nova_compute[232428]: 2025-11-29 07:51:18.285 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:18.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.400 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.401 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.488 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:51:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:19.645 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.646 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:19.647 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.768 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.770 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.789 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:51:19 compute-2 nova_compute[232428]: 2025-11-29 07:51:19.790 232432 INFO nova.compute.claims [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.126 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.187 232432 DEBUG nova.network.neutron [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:51:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3982695696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.587 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.593 232432 DEBUG nova.compute.provider_tree [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.634 232432 DEBUG nova.scheduler.client.report [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.710 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.711 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:51:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:20 compute-2 ceph-mon[77138]: pgmap v1423: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.791 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.792 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.832 232432 INFO nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:51:20 compute-2 nova_compute[232428]: 2025-11-29 07:51:20.893 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.118 232432 INFO nova.virt.block_device [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Booting with volume 4a9f4928-146a-4c56-bbea-7dd9c7945b0c at /dev/vda
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.133 232432 DEBUG nova.network.neutron [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.155 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Releasing lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.157 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.158 232432 INFO nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating image(s)
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.188 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.192 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.253 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.290 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.295 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "58f5a5bcd1c91ec96593b9360887b36053c5839a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.297 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "58f5a5bcd1c91ec96593b9360887b36053c5839a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.445 232432 DEBUG nova.policy [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37531d9f927d40ecadd246429b5b598d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.946 232432 DEBUG os_brick.utils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.948 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.963 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.963 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[a72c1081-b024-4f9f-9f86-3b8b03209a3f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.966 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.976 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.977 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4884c9-4238-47a2-ab58-4ebd44976e91]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.979 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.990 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.990 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[440e58d1-9b9f-423c-8a4d-204974c37095]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.993 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[63bf1bd4-6627-495c-bf8f-318cccf8a326]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:21 compute-2 nova_compute[232428]: 2025-11-29 07:51:21.994 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.023 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.028 232432 DEBUG os_brick.initiator.connectors.lightos [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.029 232432 DEBUG os_brick.initiator.connectors.lightos [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.030 232432 DEBUG os_brick.initiator.connectors.lightos [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.030 232432 DEBUG os_brick.utils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.031 232432 DEBUG nova.virt.block_device [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating existing volume attachment record: b199b9a2-3ce8-4c17-bca4-a1228e4d21e5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3982695696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:22 compute-2 ceph-mon[77138]: pgmap v1424: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Nov 29 07:51:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.598 232432 DEBUG nova.virt.libvirt.imagebackend [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/33939db1-a4ae-4fac-9a69-88ed807d304b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/33939db1-a4ae-4fac-9a69-88ed807d304b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.674 232432 DEBUG nova.virt.libvirt.imagebackend [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/33939db1-a4ae-4fac-9a69-88ed807d304b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 07:51:22 compute-2 nova_compute[232428]: 2025-11-29 07:51:22.675 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] cloning images/33939db1-a4ae-4fac-9a69-88ed807d304b@snap to None/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 07:51:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:22.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:23 compute-2 nova_compute[232428]: 2025-11-29 07:51:23.287 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:23 compute-2 nova_compute[232428]: 2025-11-29 07:51:23.359 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "58f5a5bcd1c91ec96593b9360887b36053c5839a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:23 compute-2 nova_compute[232428]: 2025-11-29 07:51:23.541 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'migration_context' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:24 compute-2 ceph-mon[77138]: pgmap v1425: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail
Nov 29 07:51:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:24 compute-2 nova_compute[232428]: 2025-11-29 07:51:24.554 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:24 compute-2 nova_compute[232428]: 2025-11-29 07:51:24.555 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:24.649 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:26 compute-2 ceph-mon[77138]: pgmap v1426: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 29 07:51:26 compute-2 nova_compute[232428]: 2025-11-29 07:51:26.241 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] flattening vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 07:51:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.254 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.254 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.254 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.255 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.314 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Image rbd:vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.315 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.315 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Ensure instance console log exists: /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.316 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.316 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.316 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.318 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:50:38Z,direct_url=<?>,disk_format='raw',id=33939db1-a4ae-4fac-9a69-88ed807d304b,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1579333464-shelved',owner='cf226b9a5bb945c3a8f54976b5736fe3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:51:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.323 232432 WARNING nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.346 232432 DEBUG nova.virt.libvirt.host [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.347 232432 DEBUG nova.virt.libvirt.host [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.350 232432 DEBUG nova.virt.libvirt.host [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.351 232432 DEBUG nova.virt.libvirt.host [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.352 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.352 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:50:38Z,direct_url=<?>,disk_format='raw',id=33939db1-a4ae-4fac-9a69-88ed807d304b,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1579333464-shelved',owner='cf226b9a5bb945c3a8f54976b5736fe3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:51:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.353 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.353 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.353 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.354 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.354 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.354 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.354 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.355 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.355 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.355 232432 DEBUG nova.virt.hardware [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.355 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.442 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1410689673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1410689673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/925565451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.943 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.969 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:27 compute-2 nova_compute[232428]: 2025-11-29 07:51:27.973 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.288 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2191120294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.407 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.409 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.433 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <uuid>d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</uuid>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <name>instance-0000001b</name>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1579333464</nova:name>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:51:27</nova:creationTime>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:user uuid="ed57e094b4c4441c8ffbfb96ecb62afc">tempest-UnshelveToHostMultiNodesTest-155692188-project-member</nova:user>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <nova:project uuid="cf226b9a5bb945c3a8f54976b5736fe3">tempest-UnshelveToHostMultiNodesTest-155692188</nova:project>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="33939db1-a4ae-4fac-9a69-88ed807d304b"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <system>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="serial">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="uuid">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </system>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <os>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </os>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <features>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </features>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk">
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </source>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config">
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </source>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:51:28 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log" append="off"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <video>
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </video>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:51:28 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:51:28 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:51:28 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:51:28 compute-2 nova_compute[232428]: </domain>
Nov 29 07:51:28 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:51:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.521 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.522 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.522 232432 INFO nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Using config drive
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.553 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:28 compute-2 podman[249145]: 2025-11-29 07:51:28.572891907 +0000 UTC m=+0.107688185 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.574 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:28 compute-2 ceph-mon[77138]: pgmap v1427: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 29 07:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/925565451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1613726453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1613726453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2191120294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.647 232432 DEBUG nova.objects.instance [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'keypairs' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:28 compute-2 nova_compute[232428]: 2025-11-29 07:51:28.650 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Successfully created port: 32326edd-9157-4611-83ff-41c84380e739 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:51:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.149 232432 INFO nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating config drive at /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.162 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprilvcuxq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.203 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.205 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.206 232432 INFO nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating image(s)
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.206 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.206 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Ensure instance console log exists: /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.207 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.207 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.208 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.208 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.312 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprilvcuxq" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.348 232432 DEBUG nova.storage.rbd_utils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.351 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.519 232432 DEBUG oslo_concurrency.processutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.520 232432 INFO nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting local config drive /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config because it was imported into RBD.
Nov 29 07:51:29 compute-2 systemd-machined[194747]: New machine qemu-17-instance-0000001b.
Nov 29 07:51:29 compute-2 systemd[1]: Started Virtual Machine qemu-17-instance-0000001b.
Nov 29 07:51:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1165337111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:29 compute-2 ceph-mon[77138]: pgmap v1428: 305 pgs: 305 active+clean; 257 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 240 KiB/s wr, 49 op/s
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.951 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402689.9506407, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.952 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Resumed (Lifecycle Event)
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.955 232432 DEBUG nova.compute.manager [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.955 232432 DEBUG nova.virt.libvirt.driver [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.958 232432 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance spawned successfully.
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.991 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:29 compute-2 nova_compute[232428]: 2025-11-29 07:51:29.995 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.084 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.084 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402689.9520972, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.085 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Started (Lifecycle Event)
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.129 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.132 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.152 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.263 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.263 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.263 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.264 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.264 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/268646081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.722 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Successfully updated port: 32326edd-9157-4611-83ff-41c84380e739 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.733 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:30.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.781 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.781 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.782 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.865 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.865 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.870 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:51:30 compute-2 nova_compute[232428]: 2025-11-29 07:51:30.870 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:51:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1765482989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.078 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.079 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4467MB free_disk=20.940109252929688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.080 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.080 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.595 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance aca637ac-6ef0-42f8-aacf-e022e990aeba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.596 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.597 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bae55d85-4263-4efe-895d-a762627b52ff actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.598 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.598 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:51:31 compute-2 nova_compute[232428]: 2025-11-29 07:51:31.845 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:51:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 5316 writes, 27K keys, 5316 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 5316 writes, 5316 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1328 writes, 6411 keys, 1328 commit groups, 1.0 writes per commit group, ingest: 13.74 MB, 0.02 MB/s
                                           Interval WAL: 1328 writes, 1328 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     72.5      0.46              0.13        14    0.033       0      0       0.0       0.0
                                             L6      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    105.5     87.4      1.30              0.43        13    0.100     64K   6937       0.0       0.0
                                            Sum      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     77.8     83.5      1.76              0.56        27    0.065     64K   6937       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.8     83.4     83.3      0.51              0.18         8    0.064     22K   2061       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    105.5     87.4      1.30              0.43        13    0.100     64K   6937       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     88.9      0.38              0.13        13    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.8 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 14.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000152 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(748,13.51 MB,4.44472%) FilterBlock(27,188.42 KB,0.0605282%) IndexBlock(27,343.98 KB,0.110501%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/268646081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/123180201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/507629536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:32 compute-2 ceph-mon[77138]: pgmap v1429: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 07:51:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3814495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.310 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.318 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:32.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.816 232432 DEBUG nova.compute.manager [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-changed-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.817 232432 DEBUG nova.compute.manager [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing instance network info cache due to event network-changed-32326edd-9157-4611-83ff-41c84380e739. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.818 232432 DEBUG oslo_concurrency.lockutils [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.858 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.936 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.937 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.938 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:32 compute-2 nova_compute[232428]: 2025-11-29 07:51:32.939 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:51:33 compute-2 nova_compute[232428]: 2025-11-29 07:51:33.225 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:51:33 compute-2 nova_compute[232428]: 2025-11-29 07:51:33.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3814495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:34 compute-2 nova_compute[232428]: 2025-11-29 07:51:34.276 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:34 compute-2 nova_compute[232428]: 2025-11-29 07:51:34.278 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:34 compute-2 nova_compute[232428]: 2025-11-29 07:51:34.278 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:51:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:35 compute-2 ceph-mon[77138]: pgmap v1430: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 07:51:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 07:51:35 compute-2 nova_compute[232428]: 2025-11-29 07:51:35.924 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:51:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:36 compute-2 ceph-mon[77138]: pgmap v1431: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Nov 29 07:51:36 compute-2 ceph-mon[77138]: osdmap e168: 3 total, 3 up, 3 in
Nov 29 07:51:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.165 232432 DEBUG nova.network.neutron [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.185 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.185 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance network_info: |[{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.186 232432 DEBUG oslo_concurrency.lockutils [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.186 232432 DEBUG nova.network.neutron [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing network info cache for port 32326edd-9157-4611-83ff-41c84380e739 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.189 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Start _get_guest_xml network_info=[{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4a9f4928-146a-4c56-bbea-7dd9c7945b0c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bae55d85-4263-4efe-895d-a762627b52ff', 'attached_at': '', 'detached_at': '', 'volume_id': '4a9f4928-146a-4c56-bbea-7dd9c7945b0c', 'serial': '4a9f4928-146a-4c56-bbea-7dd9c7945b0c'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'attachment_id': 'b199b9a2-3ce8-4c17-bca4-a1228e4d21e5', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.193 232432 WARNING nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.199 232432 DEBUG nova.virt.libvirt.host [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.200 232432 DEBUG nova.virt.libvirt.host [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.203 232432 DEBUG nova.virt.libvirt.host [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.204 232432 DEBUG nova.virt.libvirt.host [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.205 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.205 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.206 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.206 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.206 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.206 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.207 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.207 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.207 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.207 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.208 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.208 232432 DEBUG nova.virt.hardware [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.238 232432 DEBUG nova.storage.rbd_utils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image bae55d85-4263-4efe-895d-a762627b52ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.242 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.270 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3597734895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.674 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.776 232432 DEBUG nova.compute.manager [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.876 232432 DEBUG nova.virt.libvirt.vif [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:51:20Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.877 232432 DEBUG nova.network.os_vif_util [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.878 232432 DEBUG nova.network.os_vif_util [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.879 232432 DEBUG nova.objects.instance [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lazy-loading 'pci_devices' on Instance uuid bae55d85-4263-4efe-895d-a762627b52ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.946 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <uuid>bae55d85-4263-4efe-895d-a762627b52ff</uuid>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <name>instance-0000001f</name>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:name>tempest-LiveMigrationTest-server-1291961647</nova:name>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:51:37</nova:creationTime>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:user uuid="37531d9f927d40ecadd246429b5b598d">tempest-LiveMigrationTest-561693451-project-member</nova:user>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:project uuid="73f3d0f2c9aa4ba29984fc9e6a7ed869">tempest-LiveMigrationTest-561693451</nova:project>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <nova:port uuid="32326edd-9157-4611-83ff-41c84380e739">
Nov 29 07:51:37 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <system>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="serial">bae55d85-4263-4efe-895d-a762627b52ff</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="uuid">bae55d85-4263-4efe-895d-a762627b52ff</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </system>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <os>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </os>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <features>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </features>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/bae55d85-4263-4efe-895d-a762627b52ff_disk.config">
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </source>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c">
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </source>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:51:37 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <serial>4a9f4928-146a-4c56-bbea-7dd9c7945b0c</serial>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:05:17:72"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <target dev="tap32326edd-91"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/console.log" append="off"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <video>
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </video>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:51:37 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:51:37 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:51:37 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:51:37 compute-2 nova_compute[232428]: </domain>
Nov 29 07:51:37 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.956 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Preparing to wait for external event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.957 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.957 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.957 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.958 232432 DEBUG nova.virt.libvirt.vif [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:51:20Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.958 232432 DEBUG nova.network.os_vif_util [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.959 232432 DEBUG nova.network.os_vif_util [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.960 232432 DEBUG os_vif [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.961 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.962 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.963 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.970 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32326edd-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:37 compute-2 nova_compute[232428]: 2025-11-29 07:51:37.971 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32326edd-91, col_values=(('external_ids', {'iface-id': '32326edd-9157-4611-83ff-41c84380e739', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:17:72', 'vm-uuid': 'bae55d85-4263-4efe-895d-a762627b52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:38 compute-2 NetworkManager[48993]: <info>  [1764402698.0198] manager: (tap32326edd-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.025 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.027 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:38 compute-2 ceph-mon[77138]: pgmap v1433: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 157 op/s
Nov 29 07:51:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3597734895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.032 232432 INFO os_vif [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.036 232432 DEBUG oslo_concurrency.lockutils [None req-15aa07e5-36c3-4dec-85b4-cb553a333951 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 22.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.086 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.087 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.087 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No VIF found with MAC fa:16:3e:05:17:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.088 232432 INFO nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Using config drive
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.116 232432 DEBUG nova.storage.rbd_utils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image bae55d85-4263-4efe-895d-a762627b52ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:38 compute-2 nova_compute[232428]: 2025-11-29 07:51:38.293 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:38 compute-2 sudo[249397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:38 compute-2 sudo[249397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:38 compute-2 sudo[249397]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:38 compute-2 sudo[249422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:38 compute-2 sudo[249422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:38 compute-2 sudo[249422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:39 compute-2 nova_compute[232428]: 2025-11-29 07:51:39.737 232432 INFO nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating config drive at /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config
Nov 29 07:51:39 compute-2 nova_compute[232428]: 2025-11-29 07:51:39.744 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lao16u_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:39 compute-2 nova_compute[232428]: 2025-11-29 07:51:39.879 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lao16u_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:39 compute-2 nova_compute[232428]: 2025-11-29 07:51:39.913 232432 DEBUG nova.storage.rbd_utils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image bae55d85-4263-4efe-895d-a762627b52ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:39 compute-2 nova_compute[232428]: 2025-11-29 07:51:39.917 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config bae55d85-4263-4efe-895d-a762627b52ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:40 compute-2 ceph-mon[77138]: pgmap v1434: 305 pgs: 305 active+clean; 297 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.4 MiB/s wr, 136 op/s
Nov 29 07:51:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:40.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.501 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.502 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.502 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.503 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.503 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.505 232432 INFO nova.compute.manager [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Terminating instance
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.505 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.506 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquired lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.506 232432 DEBUG nova.network.neutron [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.732 232432 DEBUG nova.network.neutron [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.941 232432 DEBUG nova.network.neutron [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updated VIF entry in instance network info cache for port 32326edd-9157-4611-83ff-41c84380e739. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.941 232432 DEBUG nova.network.neutron [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:41 compute-2 nova_compute[232428]: 2025-11-29 07:51:41.966 232432 DEBUG oslo_concurrency.lockutils [req-33a0c413-3965-4174-89a2-0d591f7090ca req-4d55bd0d-9e0c-48cc-9a83-430cb8d9122c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:42 compute-2 nova_compute[232428]: 2025-11-29 07:51:42.706 232432 DEBUG nova.network.neutron [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:42 compute-2 nova_compute[232428]: 2025-11-29 07:51:42.727 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Releasing lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:42 compute-2 nova_compute[232428]: 2025-11-29 07:51:42.727 232432 DEBUG nova.compute.manager [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:51:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:42.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:42 compute-2 ceph-mon[77138]: pgmap v1435: 305 pgs: 305 active+clean; 248 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 104 op/s
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.019 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.324 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.671 232432 DEBUG oslo_concurrency.processutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config bae55d85-4263-4efe-895d-a762627b52ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.753s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.672 232432 INFO nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deleting local config drive /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/disk.config because it was imported into RBD.
Nov 29 07:51:43 compute-2 podman[249490]: 2025-11-29 07:51:43.738389326 +0000 UTC m=+0.122640423 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:51:43 compute-2 kernel: tap32326edd-91: entered promiscuous mode
Nov 29 07:51:43 compute-2 NetworkManager[48993]: <info>  [1764402703.7471] manager: (tap32326edd-91): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 07:51:43 compute-2 ovn_controller[134375]: 2025-11-29T07:51:43Z|00120|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this chassis.
Nov 29 07:51:43 compute-2 ovn_controller[134375]: 2025-11-29T07:51:43Z|00121|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.763 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.765 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.767 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:51:43 compute-2 ovn_controller[134375]: 2025-11-29T07:51:43Z|00122|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 ovn-installed in OVS
Nov 29 07:51:43 compute-2 ovn_controller[134375]: 2025-11-29T07:51:43Z|00123|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 up in Southbound
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.778 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.781 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.789 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf6b019-96a1-4545-bb2f-6758524b2a01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 systemd-udevd[249523]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:51:43 compute-2 systemd-machined[194747]: New machine qemu-18-instance-0000001f.
Nov 29 07:51:43 compute-2 systemd[1]: Started Virtual Machine qemu-18-instance-0000001f.
Nov 29 07:51:43 compute-2 NetworkManager[48993]: <info>  [1764402703.8112] device (tap32326edd-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:51:43 compute-2 NetworkManager[48993]: <info>  [1764402703.8622] device (tap32326edd-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.880 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[db235630-c17b-4f8d-8abb-3964da3112c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.884 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[46c0ea72-2e2a-457b-845a-f49c13b352e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.917 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c69141f6-1674-4196-b09a-796960582200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.939 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[246813a9-0b8b-4f4a-97e5-5b145085edc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249536, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.957 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4ec9d6-1516-44c7-bd26-6d0f9c985559]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249538, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249538, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.958 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:43 compute-2 nova_compute[232428]: 2025-11-29 07:51:43.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.961 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.961 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.962 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:51:43.962 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:51:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:44 compute-2 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 07:51:44 compute-2 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001b.scope: Consumed 12.391s CPU time.
Nov 29 07:51:44 compute-2 systemd-machined[194747]: Machine qemu-17-instance-0000001b terminated.
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.754 232432 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.754 232432 DEBUG nova.objects.instance [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'resources' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1876318502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:44 compute-2 ceph-mon[77138]: pgmap v1436: 305 pgs: 305 active+clean; 248 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 104 op/s
Nov 29 07:51:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:44.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.959 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402704.959137, bae55d85-4263-4efe-895d-a762627b52ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.960 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Started (Lifecycle Event)
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.989 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.994 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402704.959388, bae55d85-4263-4efe-895d-a762627b52ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:44 compute-2 nova_compute[232428]: 2025-11-29 07:51:44.994 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Paused (Lifecycle Event)
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.036 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.042 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.071 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.086 232432 DEBUG nova.compute.manager [req-8acf0000-b6a8-4e81-87db-8d9c53b1e5b2 req-b11fdb5c-a6d7-450a-b04e-e3a5428d011a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.087 232432 DEBUG oslo_concurrency.lockutils [req-8acf0000-b6a8-4e81-87db-8d9c53b1e5b2 req-b11fdb5c-a6d7-450a-b04e-e3a5428d011a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.087 232432 DEBUG oslo_concurrency.lockutils [req-8acf0000-b6a8-4e81-87db-8d9c53b1e5b2 req-b11fdb5c-a6d7-450a-b04e-e3a5428d011a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.087 232432 DEBUG oslo_concurrency.lockutils [req-8acf0000-b6a8-4e81-87db-8d9c53b1e5b2 req-b11fdb5c-a6d7-450a-b04e-e3a5428d011a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.088 232432 DEBUG nova.compute.manager [req-8acf0000-b6a8-4e81-87db-8d9c53b1e5b2 req-b11fdb5c-a6d7-450a-b04e-e3a5428d011a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Processing event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.089 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.093 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402705.0932066, bae55d85-4263-4efe-895d-a762627b52ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.094 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Resumed (Lifecycle Event)
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.096 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.101 232432 INFO nova.virt.libvirt.driver [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance spawned successfully.
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.101 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.122 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.129 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.133 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.134 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.135 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.135 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.136 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.136 232432 DEBUG nova.virt.libvirt.driver [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.167 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.205 232432 INFO nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 16.00 seconds to spawn the instance on the hypervisor.
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.206 232432 DEBUG nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.312 232432 INFO nova.compute.manager [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 25.60 seconds to build instance.
Nov 29 07:51:45 compute-2 nova_compute[232428]: 2025-11-29 07:51:45.391 232432 DEBUG oslo_concurrency.lockutils [None req-5a204013-9648-4568-8c3c-015bb55980b8 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:45 compute-2 ceph-mon[77138]: pgmap v1437: 305 pgs: 305 active+clean; 275 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 07:51:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 07:51:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:46.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.660 232432 INFO nova.virt.libvirt.driver [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting instance files /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.661 232432 INFO nova.virt.libvirt.driver [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deletion of /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del complete
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.728 232432 INFO nova.compute.manager [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Took 4.00 seconds to destroy the instance on the hypervisor.
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.729 232432 DEBUG oslo.service.loopingcall [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.729 232432 DEBUG nova.compute.manager [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:51:46 compute-2 nova_compute[232428]: 2025-11-29 07:51:46.729 232432 DEBUG nova.network.neutron [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:51:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.166 232432 DEBUG nova.network.neutron [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.190 232432 DEBUG nova.network.neutron [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.220 232432 INFO nova.compute.manager [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Took 0.49 seconds to deallocate network for instance.
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.302 232432 DEBUG nova.compute.manager [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.303 232432 DEBUG oslo_concurrency.lockutils [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.303 232432 DEBUG oslo_concurrency.lockutils [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.303 232432 DEBUG oslo_concurrency.lockutils [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.304 232432 DEBUG nova.compute.manager [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.304 232432 WARNING nova.compute.manager [req-abf0b69b-c8d6-4d61-9920-7f1bcd6de543 req-005aebd0-bfa2-4b94-b608-1a23c0e6b4d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state None.
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.331 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.332 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:47 compute-2 ceph-mon[77138]: osdmap e169: 3 total, 3 up, 3 in
Nov 29 07:51:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3580282389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/327708464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.475 232432 DEBUG oslo_concurrency.processutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:47 compute-2 podman[249605]: 2025-11-29 07:51:47.666019781 +0000 UTC m=+0.064120900 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 07:51:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2396678273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.986 232432 DEBUG oslo_concurrency.processutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:47 compute-2 nova_compute[232428]: 2025-11-29 07:51:47.995 232432 DEBUG nova.compute.provider_tree [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.123 232432 DEBUG nova.scheduler.client.report [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.192 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.221 232432 INFO nova.scheduler.client.report [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Deleted allocations for instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-2 nova_compute[232428]: 2025-11-29 07:51:48.373 232432 DEBUG oslo_concurrency.lockutils [None req-e496e4bd-00c0-4468-bc70-5f07e7096f24 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:48 compute-2 ceph-mon[77138]: pgmap v1439: 305 pgs: 305 active+clean; 275 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 07:51:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2396678273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:49 compute-2 ceph-mon[77138]: pgmap v1440: 305 pgs: 305 active+clean; 285 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 983 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 07:51:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:51 compute-2 ceph-mon[77138]: pgmap v1441: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 07:51:51 compute-2 nova_compute[232428]: 2025-11-29 07:51:51.965 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Check if temp file /var/lib/nova/instances/tmp5fbve2fu exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 07:51:51 compute-2 nova_compute[232428]: 2025-11-29 07:51:51.966 232432 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 07:51:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:53 compute-2 nova_compute[232428]: 2025-11-29 07:51:53.152 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:53 compute-2 nova_compute[232428]: 2025-11-29 07:51:53.329 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:53 compute-2 sudo[249648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:53 compute-2 sudo[249648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:53 compute-2 sudo[249648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:53 compute-2 sudo[249673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:53 compute-2 sudo[249673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:53 compute-2 sudo[249673]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:54 compute-2 sudo[249698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:54 compute-2 sudo[249698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:54 compute-2 sudo[249698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:54 compute-2 sudo[249723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:51:54 compute-2 sudo[249723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:54 compute-2 sudo[249723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:51:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:51:54 compute-2 ceph-mon[77138]: pgmap v1442: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:57 compute-2 ceph-mon[77138]: pgmap v1443: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.2 MiB/s wr, 210 op/s
Nov 29 07:51:58 compute-2 nova_compute[232428]: 2025-11-29 07:51:58.156 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:58 compute-2 nova_compute[232428]: 2025-11-29 07:51:58.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:51:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:58.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:51:58 compute-2 sudo[249781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:58 compute-2 sudo[249781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:58 compute-2 sudo[249781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:58 compute-2 sudo[249806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:58 compute-2 sudo[249806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:58 compute-2 sudo[249806]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:58 compute-2 podman[249830]: 2025-11-29 07:51:58.771093137 +0000 UTC m=+0.089610219 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 07:51:58 compute-2 ceph-mon[77138]: pgmap v1444: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.0 MiB/s wr, 187 op/s
Nov 29 07:51:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:51:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:51:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:51:59 compute-2 nova_compute[232428]: 2025-11-29 07:51:59.752 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402704.750996, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:59 compute-2 nova_compute[232428]: 2025-11-29 07:51:59.752 232432 INFO nova.compute.manager [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Stopped (Lifecycle Event)
Nov 29 07:51:59 compute-2 nova_compute[232428]: 2025-11-29 07:51:59.809 232432 DEBUG nova.compute.manager [None req-048c52dc-5393-42a2-9f83-83f0091771b2 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:00 compute-2 ceph-mon[77138]: pgmap v1445: 305 pgs: 305 active+clean; 214 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 182 op/s
Nov 29 07:52:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:00.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:00 compute-2 ovn_controller[134375]: 2025-11-29T07:52:00Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:52:00 compute-2 ovn_controller[134375]: 2025-11-29T07:52:00Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:52:01 compute-2 ceph-mon[77138]: pgmap v1446: 305 pgs: 305 active+clean; 205 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 29 07:52:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988054900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:02 compute-2 sudo[249860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:02 compute-2 sudo[249860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-2 sudo[249860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:02.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:02 compute-2 sudo[249885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:52:02 compute-2 sudo[249885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-2 sudo[249885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2524987151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:52:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:52:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2988054900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:03 compute-2 nova_compute[232428]: 2025-11-29 07:52:03.160 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:03.298 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:03.299 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:03.300 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:03 compute-2 nova_compute[232428]: 2025-11-29 07:52:03.442 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:03 compute-2 ceph-mon[77138]: pgmap v1447: 305 pgs: 305 active+clean; 205 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.191 232432 DEBUG nova.compute.manager [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.192 232432 DEBUG oslo_concurrency.lockutils [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.192 232432 DEBUG oslo_concurrency.lockutils [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.193 232432 DEBUG oslo_concurrency.lockutils [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.193 232432 DEBUG nova.compute.manager [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:04 compute-2 nova_compute[232428]: 2025-11-29 07:52:04.193 232432 DEBUG nova.compute.manager [req-d9d66171-969c-45a9-8976-3f66c5506b8a req-55835d57-1a34-4b03-a700-e8e2b68cce2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:52:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:04.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:04.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:05 compute-2 ceph-mon[77138]: pgmap v1448: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Nov 29 07:52:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.835 232432 INFO nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 10.38 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.836 232432 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.868 232432 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(c0345181-2f6f-4518-ac40-055a95731c2a),old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='b199b9a2-3ce8-4c17-bca4-a1228e4d21e5'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.872 232432 DEBUG nova.objects.instance [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lazy-loading 'migration_context' on Instance uuid bae55d85-4263-4efe-895d-a762627b52ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.874 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.876 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.876 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Nov 29 07:52:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:06.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.909 232432 DEBUG nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Find same serial number: pos=1, serial=4a9f4928-146a-4c56-bbea-7dd9c7945b0c _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.910 232432 DEBUG nova.virt.libvirt.vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner
_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:51:45Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.910 232432 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.912 232432 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.912 232432 DEBUG nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 07:52:06 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:05:17:72"/>
Nov 29 07:52:06 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 07:52:06 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:52:06 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 07:52:06 compute-2 nova_compute[232428]:   <target dev="tap32326edd-91"/>
Nov 29 07:52:06 compute-2 nova_compute[232428]: </interface>
Nov 29 07:52:06 compute-2 nova_compute[232428]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.913 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.972 232432 DEBUG nova.compute.manager [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.972 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.973 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.973 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.973 232432 DEBUG nova.compute.manager [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.973 232432 WARNING nova.compute.manager [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.974 232432 DEBUG nova.compute.manager [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-changed-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.974 232432 DEBUG nova.compute.manager [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing instance network info cache due to event network-changed-32326edd-9157-4611-83ff-41c84380e739. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.974 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.974 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:06 compute-2 nova_compute[232428]: 2025-11-29 07:52:06.974 232432 DEBUG nova.network.neutron [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing network info cache for port 32326edd-9157-4611-83ff-41c84380e739 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:52:07 compute-2 nova_compute[232428]: 2025-11-29 07:52:07.379 232432 DEBUG nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 07:52:07 compute-2 nova_compute[232428]: 2025-11-29 07:52:07.380 232432 INFO nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 07:52:07 compute-2 nova_compute[232428]: 2025-11-29 07:52:07.527 232432 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.031 232432 DEBUG nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.031 232432 DEBUG nova.virt.libvirt.migration [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 07:52:08 compute-2 ceph-mon[77138]: pgmap v1449: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 594 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.151 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402728.1507523, bae55d85-4263-4efe-895d-a762627b52ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.152 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Paused (Lifecycle Event)
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.163 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.191 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.195 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.335 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 07:52:08 compute-2 kernel: tap32326edd-91 (unregistering): left promiscuous mode
Nov 29 07:52:08 compute-2 NetworkManager[48993]: <info>  [1764402728.4414] device (tap32326edd-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:52:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:08.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00124|binding|INFO|Releasing lport 32326edd-9157-4611-83ff-41c84380e739 from this chassis (sb_readonly=0)
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00125|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 down in Southbound
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00126|binding|INFO|Removing iface tap32326edd-91 ovn-installed in OVS
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 07:52:08 compute-2 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Consumed 15.422s CPU time.
Nov 29 07:52:08 compute-2 systemd-machined[194747]: Machine qemu-18-instance-0000001f terminated.
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.589 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '011fdddc-8681-4ece-b276-7e821dffaec6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.590 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.592 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:52:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.611 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf71850-bbd0-47b2-93a7-b3273096a60e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.648 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2b27e515-b4b1-469b-86da-7477bce5cea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.652 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3249dfd8-2238-4db0-9aef-2719697fbd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.684 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[32f62465-85c3-40f4-9b58-efae6bb13ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c: No such file or directory
Nov 29 07:52:08 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c: No such file or directory
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.703 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0826e8f4-60a2-4f69-9a6d-5b8c43f593f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249928, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 kernel: tap32326edd-91: entered promiscuous mode
Nov 29 07:52:08 compute-2 NetworkManager[48993]: <info>  [1764402728.7066] manager: (tap32326edd-91): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 07:52:08 compute-2 kernel: tap32326edd-91 (unregistering): left promiscuous mode
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00127|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this chassis.
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00128|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.725 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[31bc54ac-6da6-4eae-a7b8-f527062d712a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249932, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249932, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.727 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.729 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.730 232432 DEBUG nova.virt.libvirt.guest [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.731 232432 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation has completed
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.731 232432 INFO nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] _post_live_migration() is started..
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00129|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 ovn-installed in OVS
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00130|if_status|INFO|Not setting lport 32326edd-9157-4611-83ff-41c84380e739 down as sb is readonly
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.733 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.733 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.733 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.738 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.738 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.738 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.738 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.739 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:08 compute-2 ovn_controller[134375]: 2025-11-29T07:52:08Z|00131|binding|INFO|Releasing lport 32326edd-9157-4611-83ff-41c84380e739 from this chassis (sb_readonly=0)
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.792 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '011fdddc-8681-4ece-b276-7e821dffaec6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.793 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.794 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[787bf1c7-bc65-4d00-8e53-85b064ef876f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.845 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '011fdddc-8681-4ece-b276-7e821dffaec6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.846 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a19e6fe5-00f6-490c-b9c8-42df0675c8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.850 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2461c720-c02d-4841-81d2-1e43e02d2e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.886 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf03ea7-270f-4677-b494-a7693d7cbcdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:08.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.910 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[54513be0-cab4-477d-b2ea-83acf3b17ac4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 10, 'rx_bytes': 658, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 10, 'rx_bytes': 658, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249940, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.927 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba78e6a-f19b-4fff-ba1d-152fb0a0e1d5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249941, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249941, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.929 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 nova_compute[232428]: 2025-11-29 07:52:08.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.936 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.936 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.936 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.937 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.938 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.939 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.955 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[258d6f3f-2662-4dda-a44f-46af9cd5b89a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.986 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7907a3-389e-4b4a-8096-20faa170f71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:08.989 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[caa44dc4-3428-4b71-94f3-3b038190f3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.022 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c48a2e36-d673-4bff-949b-bedd2b15db21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.041 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5640382d-9344-4f51-85cb-f9d32b627e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249948, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.060 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4981f89a-4220-44c4-8a08-fce5019f04ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249949, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249949, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.062 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.068 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.068 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.068 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:09.069 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.539 232432 DEBUG nova.compute.manager [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.539 232432 DEBUG oslo_concurrency.lockutils [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.539 232432 DEBUG oslo_concurrency.lockutils [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.539 232432 DEBUG oslo_concurrency.lockutils [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.540 232432 DEBUG nova.compute.manager [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:09 compute-2 nova_compute[232428]: 2025-11-29 07:52:09.540 232432 DEBUG nova.compute.manager [req-74c0606c-a295-46ed-b3a4-cccf86736d72 req-42df0ffd-55d1-424a-b0a8-e0f05f74626f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:52:09 compute-2 ceph-mon[77138]: pgmap v1450: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 07:52:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:10.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/770395025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:12 compute-2 ceph-mon[77138]: pgmap v1451: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 1.9 MiB/s wr, 96 op/s
Nov 29 07:52:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:12.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:13 compute-2 nova_compute[232428]: 2025-11-29 07:52:13.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:13 compute-2 nova_compute[232428]: 2025-11-29 07:52:13.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:14 compute-2 ceph-mon[77138]: pgmap v1452: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 317 KiB/s wr, 47 op/s
Nov 29 07:52:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:14.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:14 compute-2 podman[249954]: 2025-11-29 07:52:14.663752619 +0000 UTC m=+0.057834313 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:52:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.314 232432 DEBUG nova.compute.manager [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.315 232432 DEBUG oslo_concurrency.lockutils [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.315 232432 DEBUG oslo_concurrency.lockutils [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.315 232432 DEBUG oslo_concurrency.lockutils [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.315 232432 DEBUG nova.compute.manager [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.316 232432 DEBUG nova.compute.manager [req-a9e9970e-502b-458b-a151-c2d2c1d8385f req-0f9b227e-a1c7-4daa-be42-d11007a49d71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.317 232432 DEBUG nova.compute.manager [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.317 232432 DEBUG oslo_concurrency.lockutils [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.317 232432 DEBUG oslo_concurrency.lockutils [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.317 232432 DEBUG oslo_concurrency.lockutils [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.317 232432 DEBUG nova.compute.manager [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.318 232432 WARNING nova.compute.manager [req-65b555d5-9695-434d-bc03-c5cc5746e0ab req-d7918e73-d6e6-4ba8-af73-d04cca743143 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.433 232432 DEBUG nova.network.neutron [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updated VIF entry in instance network info cache for port 32326edd-9157-4611-83ff-41c84380e739. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.434 232432 DEBUG nova.network.neutron [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.515 232432 DEBUG oslo_concurrency.lockutils [req-3b833b93-0ee6-4059-af34-18bc23e70555 req-7a1f53ea-47c7-44d2-9a01-e76a08326c6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.585 232432 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Activated binding for port 32326edd-9157-4611-83ff-41c84380e739 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.586 232432 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.587 232432 DEBUG nova.virt.libvirt.vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:51:50Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.587 232432 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.588 232432 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.588 232432 DEBUG os_vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.591 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32326edd-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.593 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.599 232432 INFO os_vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.599 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.600 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.601 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.602 232432 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.603 232432 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deleting instance files /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del
Nov 29 07:52:15 compute-2 nova_compute[232428]: 2025-11-29 07:52:15.603 232432 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deletion of /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del complete
Nov 29 07:52:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:16.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:17 compute-2 ceph-mon[77138]: pgmap v1453: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 317 KiB/s wr, 47 op/s
Nov 29 07:52:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:18.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:18 compute-2 nova_compute[232428]: 2025-11-29 07:52:18.636 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:18 compute-2 podman[249977]: 2025-11-29 07:52:18.696117355 +0000 UTC m=+0.089317931 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 29 07:52:18 compute-2 sudo[249996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:18 compute-2 sudo[249996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:52:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 17K writes, 70K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 5407 syncs, 3.20 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 45.57 MB, 0.08 MB/s
                                           Interval WAL: 10K writes, 4112 syncs, 2.59 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:52:18 compute-2 sudo[249996]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:18 compute-2 sudo[250023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:18 compute-2 sudo[250023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:18 compute-2 sudo[250023]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:18 compute-2 ceph-mon[77138]: pgmap v1454: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 07:52:20 compute-2 ceph-mon[77138]: pgmap v1455: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 07:52:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:20 compute-2 nova_compute[232428]: 2025-11-29 07:52:20.594 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:22 compute-2 ceph-mon[77138]: pgmap v1456: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 07:52:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:22.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.377 232432 DEBUG nova.compute.manager [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.378 232432 DEBUG oslo_concurrency.lockutils [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.378 232432 DEBUG oslo_concurrency.lockutils [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.378 232432 DEBUG oslo_concurrency.lockutils [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.378 232432 DEBUG nova.compute.manager [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.378 232432 WARNING nova.compute.manager [req-03132fc9-274a-44b0-b2fd-099b30929ccc req-f2c8ac87-e07d-46ab-8958-346e8e62be89 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.663 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:23 compute-2 ceph-mon[77138]: pgmap v1457: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.731 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402728.7299771, bae55d85-4263-4efe-895d-a762627b52ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.732 232432 INFO nova.compute.manager [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Stopped (Lifecycle Event)
Nov 29 07:52:23 compute-2 nova_compute[232428]: 2025-11-29 07:52:23.763 232432 DEBUG nova.compute.manager [None req-16311c7a-0a5a-475b-b18d-b516e8dce10c - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:24.646 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:24.647 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:52:24 compute-2 nova_compute[232428]: 2025-11-29 07:52:24.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:25 compute-2 nova_compute[232428]: 2025-11-29 07:52:25.596 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:25 compute-2 nova_compute[232428]: 2025-11-29 07:52:25.841 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:25 compute-2 nova_compute[232428]: 2025-11-29 07:52:25.841 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:25 compute-2 ceph-mon[77138]: pgmap v1458: 305 pgs: 305 active+clean; 240 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 5.6 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.260 232432 DEBUG nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.261 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.261 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.261 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.261 232432 DEBUG nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 WARNING nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 DEBUG nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 DEBUG oslo_concurrency.lockutils [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.262 232432 DEBUG nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.263 232432 WARNING nova.compute.manager [req-b5a2fe88-3772-49e7-89c4-5c4ca044ca1e req-922c31f9-d5e9-485b-a1f6-53b9ae5b175b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.
Nov 29 07:52:26 compute-2 nova_compute[232428]: 2025-11-29 07:52:26.289 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:52:27 compute-2 ceph-mon[77138]: pgmap v1459: 305 pgs: 305 active+clean; 240 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 5.6 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.752 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.752 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.753 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:52:27 compute-2 nova_compute[232428]: 2025-11-29 07:52:27.753 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid aca637ac-6ef0-42f8-aacf-e022e990aeba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1334175440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1334175440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:28.648 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:28 compute-2 nova_compute[232428]: 2025-11-29 07:52:28.667 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1334175440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1334175440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:28 compute-2 sshd-session[250053]: Invalid user sol from 45.148.10.240 port 43370
Nov 29 07:52:29 compute-2 sshd-session[250053]: Connection closed by invalid user sol 45.148.10.240 port 43370 [preauth]
Nov 29 07:52:29 compute-2 podman[250055]: 2025-11-29 07:52:29.079448285 +0000 UTC m=+0.102150652 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:52:29 compute-2 ceph-mon[77138]: pgmap v1460: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 07:52:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:30 compute-2 nova_compute[232428]: 2025-11-29 07:52:30.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3162617546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3376242000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:31 compute-2 ceph-mon[77138]: pgmap v1461: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 07:52:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.805 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.849 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.849 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.850 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.850 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.850 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.851 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.851 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.851 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1992865510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1785374527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.884 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.885 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.885 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.886 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.886 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.920 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.921 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.922 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.949 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.950 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.950 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.950 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:52:32 compute-2 nova_compute[232428]: 2025-11-29 07:52:32.951 232432 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2806569881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.385 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3375564744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.447 232432 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.492 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.492 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.513 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.514 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.670 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.698 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.699 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4500MB free_disk=20.921886444091797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.699 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.700 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.717 232432 WARNING nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.718 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4506MB free_disk=20.921886444091797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.718 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.777 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration for instance bae55d85-4263-4efe-895d-a762627b52ff refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.795 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.832 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance aca637ac-6ef0-42f8-aacf-e022e990aeba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.832 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration c0345181-2f6f-4518-ac40-055a95731c2a is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.833 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.833 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.851 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:52:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2806569881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3375564744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3711008745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:33 compute-2 ceph-mon[77138]: pgmap v1462: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.946 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.947 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.966 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:52:33 compute-2 nova_compute[232428]: 2025-11-29 07:52:33.993 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.054 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4290807639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.485 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.492 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.512 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.545 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.546 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.547 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.605 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration for instance bae55d85-4263-4efe-895d-a762627b52ff refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.636 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.669 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Instance aca637ac-6ef0-42f8-aacf-e022e990aeba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.670 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration c0345181-2f6f-4518-ac40-055a95731c2a is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.671 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.671 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:52:34 compute-2 nova_compute[232428]: 2025-11-29 07:52:34.753 232432 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1289566660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1095754472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4290807639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:34.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2235641255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.281 232432 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.287 232432 DEBUG nova.compute.provider_tree [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.327 232432 DEBUG nova.scheduler.client.report [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.330 232432 DEBUG nova.compute.resource_tracker [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.331 232432 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.342 232432 INFO nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.519 232432 INFO nova.scheduler.client.report [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Deleted allocation for migration c0345181-2f6f-4518-ac40-055a95731c2a
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.520 232432 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.759 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating tmpfile /var/lib/nova/instances/tmprlccsjil to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 07:52:35 compute-2 nova_compute[232428]: 2025-11-29 07:52:35.760 232432 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 07:52:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2235641255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:36 compute-2 ceph-mon[77138]: pgmap v1463: 305 pgs: 305 active+clean; 281 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.9 MiB/s wr, 45 op/s
Nov 29 07:52:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/723722729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:36 compute-2 nova_compute[232428]: 2025-11-29 07:52:36.898 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:36.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1532400460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:38 compute-2 nova_compute[232428]: 2025-11-29 07:52:38.241 232432 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 07:52:38 compute-2 nova_compute[232428]: 2025-11-29 07:52:38.268 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:38 compute-2 nova_compute[232428]: 2025-11-29 07:52:38.268 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:38 compute-2 nova_compute[232428]: 2025-11-29 07:52:38.269 232432 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:52:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:38 compute-2 nova_compute[232428]: 2025-11-29 07:52:38.672 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:38 compute-2 ceph-mon[77138]: pgmap v1464: 305 pgs: 305 active+clean; 281 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Nov 29 07:52:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:38 compute-2 sudo[250175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:38 compute-2 sudo[250175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:38 compute-2 sudo[250175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:52:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:38.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:52:39 compute-2 sudo[250200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:39 compute-2 sudo[250200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:39 compute-2 sudo[250200]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:39 compute-2 ceph-mon[77138]: pgmap v1465: 305 pgs: 305 active+clean; 293 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Nov 29 07:52:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:40.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:40 compute-2 nova_compute[232428]: 2025-11-29 07:52:40.603 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.463 232432 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.485 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.489 232432 DEBUG os_brick.utils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.492 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.509 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.510 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[5926152a-8bdc-49e5-90bf-ad2224a3d707]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.512 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.524 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.525 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[587b3b73-e7ae-45b6-926d-f22d8668b6f0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.531 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.544 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.544 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae45aa3-5310-4433-b6e2-24dc942eb60e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.546 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[8c01f8ca-0f25-4354-8038-f5258ebcbcd6]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.548 232432 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.580 232432 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.583 232432 DEBUG os_brick.initiator.connectors.lightos [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.583 232432 DEBUG os_brick.initiator.connectors.lightos [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.584 232432 DEBUG os_brick.initiator.connectors.lightos [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:52:41 compute-2 nova_compute[232428]: 2025-11-29 07:52:41.584 232432 DEBUG os_brick.utils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:52:41 compute-2 ceph-mon[77138]: pgmap v1466: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 07:52:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115055220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.750 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='e91a474d-25b3-4d61-89c1-080b5b4408d2'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.750 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating instance directory: /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.751 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Ensure instance console log exists: /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.751 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.756 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.758 232432 DEBUG nova.virt.libvirt.vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:31Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.758 232432 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.759 232432 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.759 232432 DEBUG os_vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.760 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.761 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.765 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.765 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32326edd-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.765 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32326edd-91, col_values=(('external_ids', {'iface-id': '32326edd-9157-4611-83ff-41c84380e739', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:17:72', 'vm-uuid': 'bae55d85-4263-4efe-895d-a762627b52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:42 compute-2 NetworkManager[48993]: <info>  [1764402762.7701] manager: (tap32326edd-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.770 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.777 232432 INFO os_vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.782 232432 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Nov 29 07:52:42 compute-2 nova_compute[232428]: 2025-11-29 07:52:42.782 232432 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='e91a474d-25b3-4d61-89c1-080b5b4408d2'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Nov 29 07:52:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2115055220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:43 compute-2 nova_compute[232428]: 2025-11-29 07:52:43.674 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:44 compute-2 ceph-mon[77138]: pgmap v1467: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 07:52:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.103 232432 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Port 32326edd-9157-4611-83ff-41c84380e739 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.351 232432 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='e91a474d-25b3-4d61-89c1-080b5b4408d2'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.360 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.361 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.400 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.483 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.484 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.490 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.490 232432 INFO nova.compute.claims [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:52:45 compute-2 kernel: tap32326edd-91: entered promiscuous mode
Nov 29 07:52:45 compute-2 ovn_controller[134375]: 2025-11-29T07:52:45Z|00132|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this additional chassis.
Nov 29 07:52:45 compute-2 ovn_controller[134375]: 2025-11-29T07:52:45Z|00133|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:52:45 compute-2 NetworkManager[48993]: <info>  [1764402765.6461] manager: (tap32326edd-91): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:45 compute-2 ovn_controller[134375]: 2025-11-29T07:52:45Z|00134|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 ovn-installed in OVS
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.661 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:45 compute-2 systemd-machined[194747]: New machine qemu-19-instance-0000001f.
Nov 29 07:52:45 compute-2 systemd-udevd[250271]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:52:45 compute-2 podman[250238]: 2025-11-29 07:52:45.693566791 +0000 UTC m=+0.085753327 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:45 compute-2 nova_compute[232428]: 2025-11-29 07:52:45.696 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:45 compute-2 systemd[1]: Started Virtual Machine qemu-19-instance-0000001f.
Nov 29 07:52:45 compute-2 NetworkManager[48993]: <info>  [1764402765.7081] device (tap32326edd-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:52:45 compute-2 NetworkManager[48993]: <info>  [1764402765.7096] device (tap32326edd-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:52:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1606835647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.119 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.126 232432 DEBUG nova.compute.provider_tree [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.143 232432 DEBUG nova.scheduler.client.report [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.171 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.172 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.225 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.226 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.255 232432 INFO nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:52:46 compute-2 ceph-mon[77138]: pgmap v1468: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.277 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.435 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.436 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.437 232432 INFO nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Creating image(s)
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.464 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.495 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.521 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.525 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.594 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.596 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.597 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.597 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.621 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.625 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6878c573-6c98-4ab5-86eb-445077de25b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.815 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402766.8143954, bae55d85-4263-4efe-895d-a762627b52ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.816 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Started (Lifecycle Event)
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.836 232432 DEBUG nova.policy [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e6a7e8a80384d83b5debf4c717f6e09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b1a31b637613411eaeda132dc499537b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:52:46 compute-2 nova_compute[232428]: 2025-11-29 07:52:46.842 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1606835647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.386 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402767.3863456, bae55d85-4263-4efe-895d-a762627b52ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.387 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Resumed (Lifecycle Event)
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.400 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6878c573-6c98-4ab5-86eb-445077de25b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.436 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.486 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] resizing rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.707 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.732 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 07:52:47 compute-2 nova_compute[232428]: 2025-11-29 07:52:47.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:48 compute-2 nova_compute[232428]: 2025-11-29 07:52:48.709 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:48 compute-2 nova_compute[232428]: 2025-11-29 07:52:48.809 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Successfully created port: e2d03fc2-63f1-468f-a168-cd009a7ae994 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:52:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:48.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:49 compute-2 ovn_controller[134375]: 2025-11-29T07:52:49Z|00135|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this chassis.
Nov 29 07:52:49 compute-2 ovn_controller[134375]: 2025-11-29T07:52:49Z|00136|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 07:52:49 compute-2 ovn_controller[134375]: 2025-11-29T07:52:49Z|00137|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 up in Southbound
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.024 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '21', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.025 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.027 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.046 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c03afa35-03ca-49ae-a1a7-af5b61dc0bae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.089 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4659ca1b-d844-45bf-8dc4-734550c6ce45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.092 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[73b91de4-767e-4e2b-8a49-18f17ad13ae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.134 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[440db4ec-22b2-443c-a343-f4d49919cd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.156 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fa3d3d-1af0-48a9-aae8-a8be5ac4ba63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 868, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 868, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250497, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ceph-mon[77138]: pgmap v1469: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 714 KiB/s wr, 156 op/s
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.178 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[302df786-7c90-48a5-975c-c12eec1148f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250498, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250498, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.180 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.182 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.183 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.183 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.183 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.184 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:49.184 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.230 232432 INFO nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Post operation of migration started
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.343 232432 DEBUG nova.objects.instance [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'migration_context' on Instance uuid 6878c573-6c98-4ab5-86eb-445077de25b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.375 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.375 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Ensure instance console log exists: /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.376 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.377 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.377 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.659 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.660 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:49 compute-2 nova_compute[232428]: 2025-11-29 07:52:49.660 232432 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:52:49 compute-2 podman[250518]: 2025-11-29 07:52:49.716488351 +0000 UTC m=+0.094876183 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.120 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Successfully updated port: e2d03fc2-63f1-468f-a168-cd009a7ae994 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.137 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.137 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquired lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.138 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:52:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.561 232432 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.562 232432 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing instance network info cache due to event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.562 232432 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:50 compute-2 ceph-mon[77138]: pgmap v1470: 305 pgs: 305 active+clean; 308 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 173 op/s
Nov 29 07:52:50 compute-2 nova_compute[232428]: 2025-11-29 07:52:50.775 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:52:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:50.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.271 232432 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.315 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.340 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.341 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.341 232432 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.347 232432 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Nov 29 07:52:51 compute-2 virtqemud[231977]: Domain id=19 name='instance-0000001f' uuid=bae55d85-4263-4efe-895d-a762627b52ff is tainted: custom-monitor
Nov 29 07:52:51 compute-2 ceph-mon[77138]: pgmap v1471: 305 pgs: 305 active+clean; 357 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 213 op/s
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.806 232432 DEBUG nova.network.neutron [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updating instance_info_cache with network_info: [{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.829 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Releasing lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.829 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Instance network_info: |[{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.830 232432 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.830 232432 DEBUG nova.network.neutron [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.834 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Start _get_guest_xml network_info=[{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.838 232432 WARNING nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.842 232432 DEBUG nova.virt.libvirt.host [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.842 232432 DEBUG nova.virt.libvirt.host [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.845 232432 DEBUG nova.virt.libvirt.host [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.846 232432 DEBUG nova.virt.libvirt.host [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.847 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.847 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.847 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.847 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.847 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.848 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.849 232432 DEBUG nova.virt.hardware [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:52:51 compute-2 nova_compute[232428]: 2025-11-29 07:52:51.851 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/663392690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.354 232432 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.363 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.394 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.399 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/663392690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.770 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2236369778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.933 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.935 232432 DEBUG nova.virt.libvirt.vif [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-78620142',display_name='tempest-FloatingIPsAssociationTestJSON-server-78620142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-78620142',id=35,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-gdx0qrzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:46Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=6878c573-6c98-4ab5-86eb-445077de25b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.936 232432 DEBUG nova.network.os_vif_util [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.936 232432 DEBUG nova.network.os_vif_util [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.937 232432 DEBUG nova.objects.instance [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6878c573-6c98-4ab5-86eb-445077de25b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.968 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <uuid>6878c573-6c98-4ab5-86eb-445077de25b3</uuid>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <name>instance-00000023</name>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-78620142</nova:name>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:52:51</nova:creationTime>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:user uuid="2e6a7e8a80384d83b5debf4c717f6e09">tempest-FloatingIPsAssociationTestJSON-120353870-project-member</nova:user>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:project uuid="b1a31b637613411eaeda132dc499537b">tempest-FloatingIPsAssociationTestJSON-120353870</nova:project>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <nova:port uuid="e2d03fc2-63f1-468f-a168-cd009a7ae994">
Nov 29 07:52:52 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <system>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="serial">6878c573-6c98-4ab5-86eb-445077de25b3</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="uuid">6878c573-6c98-4ab5-86eb-445077de25b3</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </system>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <os>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </os>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <features>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </features>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6878c573-6c98-4ab5-86eb-445077de25b3_disk">
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </source>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6878c573-6c98-4ab5-86eb-445077de25b3_disk.config">
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </source>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:52:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:93:34:a8"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <target dev="tape2d03fc2-63"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/console.log" append="off"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <video>
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </video>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:52:52 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:52:52 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:52:52 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:52:52 compute-2 nova_compute[232428]: </domain>
Nov 29 07:52:52 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.970 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Preparing to wait for external event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.971 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.972 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.972 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.973 232432 DEBUG nova.virt.libvirt.vif [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-78620142',display_name='tempest-FloatingIPsAssociationTestJSON-server-78620142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-78620142',id=35,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-gdx0qrzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:46Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=6878c573-6c98-4ab5-86eb-445077de25b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.974 232432 DEBUG nova.network.os_vif_util [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.975 232432 DEBUG nova.network.os_vif_util [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.975 232432 DEBUG os_vif [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.977 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.978 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.982 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.982 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2d03fc2-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.983 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2d03fc2-63, col_values=(('external_ids', {'iface-id': 'e2d03fc2-63f1-468f-a168-cd009a7ae994', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:34:a8', 'vm-uuid': '6878c573-6c98-4ab5-86eb-445077de25b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-2 NetworkManager[48993]: <info>  [1764402772.9872] manager: (tape2d03fc2-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.988 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:52:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:52.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.995 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-2 nova_compute[232428]: 2025-11-29 07:52:52.997 232432 INFO os_vif [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63')
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.058 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.058 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.058 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No VIF found with MAC fa:16:3e:93:34:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.059 232432 INFO nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Using config drive
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.085 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.362 232432 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.368 232432 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.391 232432 DEBUG nova.objects.instance [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.478 232432 DEBUG nova.network.neutron [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updated VIF entry in instance network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.479 232432 DEBUG nova.network.neutron [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updating instance_info_cache with network_info: [{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.487 232432 INFO nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Creating config drive at /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.494 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15ln35 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.536 232432 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.641 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15ln35" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.674 232432 DEBUG nova.storage.rbd_utils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 6878c573-6c98-4ab5-86eb-445077de25b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.679 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config 6878c573-6c98-4ab5-86eb-445077de25b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2236369778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:53 compute-2 ceph-mon[77138]: pgmap v1472: 305 pgs: 305 active+clean; 357 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Nov 29 07:52:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/80401445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.882 232432 DEBUG oslo_concurrency.processutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config 6878c573-6c98-4ab5-86eb-445077de25b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.882 232432 INFO nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Deleting local config drive /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3/disk.config because it was imported into RBD.
Nov 29 07:52:53 compute-2 kernel: tape2d03fc2-63: entered promiscuous mode
Nov 29 07:52:53 compute-2 NetworkManager[48993]: <info>  [1764402773.9374] manager: (tape2d03fc2-63): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 07:52:53 compute-2 ovn_controller[134375]: 2025-11-29T07:52:53Z|00138|binding|INFO|Claiming lport e2d03fc2-63f1-468f-a168-cd009a7ae994 for this chassis.
Nov 29 07:52:53 compute-2 ovn_controller[134375]: 2025-11-29T07:52:53Z|00139|binding|INFO|e2d03fc2-63f1-468f-a168-cd009a7ae994: Claiming fa:16:3e:93:34:a8 10.100.0.12
Nov 29 07:52:53 compute-2 nova_compute[232428]: 2025-11-29 07:52:53.940 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.956 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:34:a8 10.100.0.12'], port_security=['fa:16:3e:93:34:a8 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6878c573-6c98-4ab5-86eb-445077de25b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be6e4a03-649a-413f-8a81-4fef5b740489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1a31b637613411eaeda132dc499537b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6df00dc5-dca9-4705-acff-a62440113d04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b522ee0-0950-4570-a9e8-0fc3c27b4f72, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e2d03fc2-63f1-468f-a168-cd009a7ae994) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.957 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e2d03fc2-63f1-468f-a168-cd009a7ae994 in datapath be6e4a03-649a-413f-8a81-4fef5b740489 bound to our chassis
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.958 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be6e4a03-649a-413f-8a81-4fef5b740489
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.971 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a77bc694-d30d-478a-96bd-7704d8d17098]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.973 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe6e4a03-61 in ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.974 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe6e4a03-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.974 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8d092dd6-ce1c-4166-9b91-8b794ad51e36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.975 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8fd38f-9ba3-49f0-9564-5991cf85f1d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:53 compute-2 systemd-machined[194747]: New machine qemu-20-instance-00000023.
Nov 29 07:52:53 compute-2 systemd-udevd[250677]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:52:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:53.990 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c0d786-ea4d-4a4a-82aa-d82d9f149ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:53 compute-2 NetworkManager[48993]: <info>  [1764402773.9976] device (tape2d03fc2-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:52:53 compute-2 NetworkManager[48993]: <info>  [1764402773.9986] device (tape2d03fc2-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 ovn_controller[134375]: 2025-11-29T07:52:54Z|00140|binding|INFO|Setting lport e2d03fc2-63f1-468f-a168-cd009a7ae994 ovn-installed in OVS
Nov 29 07:52:54 compute-2 ovn_controller[134375]: 2025-11-29T07:52:54Z|00141|binding|INFO|Setting lport e2d03fc2-63f1-468f-a168-cd009a7ae994 up in Southbound
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 systemd[1]: Started Virtual Machine qemu-20-instance-00000023.
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.016 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[648fb5ea-2d64-4197-9a51-502c3bc94dd0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.057 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8e306730-602b-46ad-bb24-c1d58bcbb8cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.063 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f149fe38-ff9e-4a07-ac6a-81868732c2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 NetworkManager[48993]: <info>  [1764402774.0644] manager: (tapbe6e4a03-60): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 07:52:54 compute-2 systemd-udevd[250681]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.128 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae4f9d3-64b1-4ec2-aa35-9896cb4bd797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.131 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f603170c-adb2-49d0-a1ff-ac20c65965ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 NetworkManager[48993]: <info>  [1764402774.1627] device (tapbe6e4a03-60): carrier: link connected
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.171 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[84bae1a0-e4f2-43a4-92cf-c1970d781902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.197 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1713f93-316a-4ccb-951f-a7ef5dabdd86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe6e4a03-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:ec:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574043, 'reachable_time': 40859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250709, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.220 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[21358b3d-e22a-4a86-8129-ae3dc18e054d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:ec9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574043, 'tstamp': 574043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250710, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.247 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12bfb649-be54-4f94-afe4-0261eabadb93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe6e4a03-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:ec:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574043, 'reachable_time': 40859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250711, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.283 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e6906a6a-7706-46b7-a62e-b25a8bf5ed84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.340 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e846628-f21e-4d1c-ae89-1b94b4f79bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.342 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6e4a03-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.342 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.342 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe6e4a03-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:54 compute-2 NetworkManager[48993]: <info>  [1764402774.3445] manager: (tapbe6e4a03-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 kernel: tapbe6e4a03-60: entered promiscuous mode
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.349 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe6e4a03-60, col_values=(('external_ids', {'iface-id': '49d1fa79-68ff-4b00-b078-6119e837e4c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:54 compute-2 ovn_controller[134375]: 2025-11-29T07:52:54Z|00142|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.353 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.361 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6eab6ac8-e03a-4727-b619-c942b7d9d4a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.362 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-be6e4a03-649a-413f-8a81-4fef5b740489
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID be6e4a03-649a-413f-8a81-4fef5b740489
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:52:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:54.363 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'env', 'PROCESS_TAG=haproxy-be6e4a03-649a-413f-8a81-4fef5b740489', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be6e4a03-649a-413f-8a81-4fef5b740489.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.769 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402774.7692702, 6878c573-6c98-4ab5-86eb-445077de25b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.770 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] VM Started (Lifecycle Event)
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.797 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.802 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402774.7700841, 6878c573-6c98-4ab5-86eb-445077de25b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.803 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] VM Paused (Lifecycle Event)
Nov 29 07:52:54 compute-2 podman[250785]: 2025-11-29 07:52:54.811829594 +0000 UTC m=+0.071117319 container create c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.824 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.827 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:54 compute-2 nova_compute[232428]: 2025-11-29 07:52:54.850 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:52:54 compute-2 systemd[1]: Started libpod-conmon-c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80.scope.
Nov 29 07:52:54 compute-2 podman[250785]: 2025-11-29 07:52:54.769248699 +0000 UTC m=+0.028536444 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:52:54 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:52:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c5019efed694bcfe08efa2f0f33c25f19791b69d46416c634e2bc422cef5b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:54 compute-2 podman[250785]: 2025-11-29 07:52:54.900489891 +0000 UTC m=+0.159777616 container init c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:54 compute-2 podman[250785]: 2025-11-29 07:52:54.906799609 +0000 UTC m=+0.166087334 container start c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 07:52:54 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [NOTICE]   (250804) : New worker (250806) forked
Nov 29 07:52:54 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [NOTICE]   (250804) : Loading success.
Nov 29 07:52:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2659406463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.878 232432 DEBUG nova.compute.manager [req-e5fb295e-452d-462c-b4b8-fc34b2a8cf2d req-5b0b1522-3a1f-4772-81c1-c02942ecac64 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.879 232432 DEBUG oslo_concurrency.lockutils [req-e5fb295e-452d-462c-b4b8-fc34b2a8cf2d req-5b0b1522-3a1f-4772-81c1-c02942ecac64 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.880 232432 DEBUG oslo_concurrency.lockutils [req-e5fb295e-452d-462c-b4b8-fc34b2a8cf2d req-5b0b1522-3a1f-4772-81c1-c02942ecac64 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.880 232432 DEBUG oslo_concurrency.lockutils [req-e5fb295e-452d-462c-b4b8-fc34b2a8cf2d req-5b0b1522-3a1f-4772-81c1-c02942ecac64 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.880 232432 DEBUG nova.compute.manager [req-e5fb295e-452d-462c-b4b8-fc34b2a8cf2d req-5b0b1522-3a1f-4772-81c1-c02942ecac64 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Processing event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.882 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.887 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402775.88687, 6878c573-6c98-4ab5-86eb-445077de25b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.888 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] VM Resumed (Lifecycle Event)
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.890 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.896 232432 INFO nova.virt.libvirt.driver [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Instance spawned successfully.
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.897 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.906 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.911 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.925 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.925 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.926 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.927 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.928 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.928 232432 DEBUG nova.virt.libvirt.driver [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.934 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.978 232432 INFO nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Took 9.54 seconds to spawn the instance on the hypervisor.
Nov 29 07:52:55 compute-2 nova_compute[232428]: 2025-11-29 07:52:55.979 232432 DEBUG nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.047 232432 INFO nova.compute.manager [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Took 10.59 seconds to build instance.
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.070 232432 DEBUG oslo_concurrency.lockutils [None req-8aa52f08-cd6e-41f9-9037-afd7d80b13ad 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:56 compute-2 ceph-mon[77138]: pgmap v1473: 305 pgs: 305 active+clean; 436 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.2 MiB/s wr, 422 op/s
Nov 29 07:52:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:52:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.902 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.902 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.903 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.903 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.903 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.904 232432 INFO nova.compute.manager [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Terminating instance
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.905 232432 DEBUG nova.compute.manager [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:52:56 compute-2 kernel: tap32326edd-91 (unregistering): left promiscuous mode
Nov 29 07:52:56 compute-2 NetworkManager[48993]: <info>  [1764402776.9700] device (tap32326edd-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:52:56 compute-2 ovn_controller[134375]: 2025-11-29T07:52:56Z|00143|binding|INFO|Releasing lport 32326edd-9157-4611-83ff-41c84380e739 from this chassis (sb_readonly=0)
Nov 29 07:52:56 compute-2 ovn_controller[134375]: 2025-11-29T07:52:56Z|00144|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 down in Southbound
Nov 29 07:52:56 compute-2 ovn_controller[134375]: 2025-11-29T07:52:56Z|00145|binding|INFO|Removing iface tap32326edd-91 ovn-installed in OVS
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:56 compute-2 nova_compute[232428]: 2025-11-29 07:52:56.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:56.994 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:56.996 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis
Nov 29 07:52:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:56.998 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3
Nov 29 07:52:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.020 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d0355a43-f65a-4bc4-972d-ca90b94f5f45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 07:52:57 compute-2 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001f.scope: Consumed 1.872s CPU time.
Nov 29 07:52:57 compute-2 systemd-machined[194747]: Machine qemu-19-instance-0000001f terminated.
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.075 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b897382f-1ce5-4604-bb58-25e6ee3c5cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.078 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed82426-3ffa-4b5c-86fb-331a80449f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.113 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9e858d-5a1d-4309-9cfa-3c6fb3ff751e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.133 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4614307c-fa32-4808-8958-29d73fa61b19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 29, 'tx_packets': 16, 'rx_bytes': 1498, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 29, 'tx_packets': 16, 'rx_bytes': 1498, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561395, 'reachable_time': 22770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250830, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.142 232432 INFO nova.virt.libvirt.driver [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance destroyed successfully.
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.142 232432 DEBUG nova.objects.instance [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lazy-loading 'resources' on Instance uuid bae55d85-4263-4efe-895d-a762627b52ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.153 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80e7245f-443a-4d46-8ca9-ceb8003b2994]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561409, 'tstamp': 561409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250838, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb746034c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561412, 'tstamp': 561412}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250838, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.155 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.157 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.165 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.166 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.166 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.167 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:52:57.167 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.168 232432 DEBUG nova.virt.libvirt.vif [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:53Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.168 232432 DEBUG nova.network.os_vif_util [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.169 232432 DEBUG nova.network.os_vif_util [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.169 232432 DEBUG os_vif [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.171 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.171 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32326edd-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.173 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.174 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.177 232432 INFO os_vif [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')
Nov 29 07:52:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3716945864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.412 232432 INFO nova.virt.libvirt.driver [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deleting instance files /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.413 232432 INFO nova.virt.libvirt.driver [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deletion of /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del complete
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.457 232432 INFO nova.compute.manager [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 0.55 seconds to destroy the instance on the hypervisor.
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.458 232432 DEBUG oslo.service.loopingcall [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.458 232432 DEBUG nova.compute.manager [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.458 232432 DEBUG nova.network.neutron [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.993 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.994 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.994 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.994 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.995 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] No waiting events found dispatching network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.995 232432 WARNING nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received unexpected event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 for instance with vm_state active and task_state None.
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.995 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.996 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.996 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.996 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.997 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.997 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.997 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.998 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.998 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.998 232432 DEBUG oslo_concurrency.lockutils [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.999 232432 DEBUG nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:57 compute-2 nova_compute[232428]: 2025-11-29 07:52:57.999 232432 WARNING nova.compute.manager [req-b39a35fb-16c4-4b58-8a7b-c0ab68c44fdd req-4a4d5d20-e025-4c7f-a166-be7f214e7225 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state deleting.
Nov 29 07:52:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:58 compute-2 ceph-mon[77138]: pgmap v1474: 305 pgs: 305 active+clean; 436 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 861 KiB/s rd, 7.2 MiB/s wr, 367 op/s
Nov 29 07:52:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:58 compute-2 nova_compute[232428]: 2025-11-29 07:52:58.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:52:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:52:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:52:59 compute-2 sudo[250862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:59 compute-2 sudo[250862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:59 compute-2 sudo[250862]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:59 compute-2 sudo[250893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:59 compute-2 sudo[250893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:59 compute-2 sudo[250893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.257 232432 DEBUG nova.network.neutron [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.270 232432 INFO nova.compute.manager [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 1.81 seconds to deallocate network for instance.
Nov 29 07:52:59 compute-2 podman[250886]: 2025-11-29 07:52:59.308321553 +0000 UTC m=+0.143427695 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.468 232432 INFO nova.compute.manager [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.469 232432 DEBUG nova.compute.manager [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deleting volume: 4a9f4928-146a-4c56-bbea-7dd9c7945b0c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 07:52:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/506622480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:59 compute-2 ceph-mon[77138]: pgmap v1475: 305 pgs: 305 active+clean; 448 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 993 KiB/s rd, 8.6 MiB/s wr, 405 op/s
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.814 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.815 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.821 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.871 232432 INFO nova.scheduler.client.report [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Deleted allocations for instance bae55d85-4263-4efe-895d-a762627b52ff
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.939 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:59 compute-2 NetworkManager[48993]: <info>  [1764402779.9399] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 07:52:59 compute-2 NetworkManager[48993]: <info>  [1764402779.9409] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 07:52:59 compute-2 nova_compute[232428]: 2025-11-29 07:52:59.980 232432 DEBUG oslo_concurrency.lockutils [None req-1ad2dac5-eed0-49a3-b58e-8900125e5fc6 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.097 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00146|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00147|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00148|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.133 232432 DEBUG nova.compute.manager [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-deleted-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.471 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.473 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.474 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.475 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.476 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.478 232432 INFO nova.compute.manager [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Terminating instance
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.481 232432 DEBUG nova.compute.manager [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:53:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:00 compute-2 kernel: tape347928a-5a (unregistering): left promiscuous mode
Nov 29 07:53:00 compute-2 NetworkManager[48993]: <info>  [1764402780.5535] device (tape347928a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00149|binding|INFO|Releasing lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00150|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 down in Southbound
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00151|binding|INFO|Releasing lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00152|binding|INFO|Setting lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b down in Southbound
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00153|binding|INFO|Removing iface tape347928a-5a ovn-installed in OVS
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.576 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:42:48 10.100.0.9'], port_security=['fa:16:3e:93:42:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1284960005', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aca637ac-6ef0-42f8-aacf-e022e990aeba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1284960005', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e347928a-5a81-4fdb-a7df-4ac039bb8bb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00154|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00155|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_controller[134375]: 2025-11-29T07:53:00Z|00156|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.578 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:69:f6 19.80.0.160'], port_security=['fa:16:3e:f4:69:f6 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e347928a-5a81-4fdb-a7df-4ac039bb8bb3'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1926672861', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1926672861', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c4847c33-f725-4948-8187-3e41c1ea344f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ecac803-0ffe-4cdf-a724-cbb61954b01b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.580 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.582 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b746034c-0143-4024-986c-673efea114a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.583 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c77a2-8e1d-4277-aba1-183f9f2e05b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.584 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace which is not needed anymore
Nov 29 07:53:00 compute-2 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 07:53:00 compute-2 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001e.scope: Consumed 10.725s CPU time.
Nov 29 07:53:00 compute-2 systemd-machined[194747]: Machine qemu-16-instance-0000001e terminated.
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.610 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.636 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.724 232432 INFO nova.virt.libvirt.driver [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance destroyed successfully.
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.725 232432 DEBUG nova.objects.instance [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lazy-loading 'resources' on Instance uuid aca637ac-6ef0-42f8-aacf-e022e990aeba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.750 232432 DEBUG nova.virt.libvirt.vif [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:52Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.750 232432 DEBUG nova.network.os_vif_util [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.751 232432 DEBUG nova.network.os_vif_util [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.752 232432 DEBUG os_vif [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.754 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.754 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape347928a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.760 232432 INFO os_vif [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a')
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [NOTICE]   (248457) : haproxy version is 2.8.14-c23fe91
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [NOTICE]   (248457) : path to executable is /usr/sbin/haproxy
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [WARNING]  (248457) : Exiting Master process...
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [WARNING]  (248457) : Exiting Master process...
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [ALERT]    (248457) : Current worker (248459) exited with code 143 (Terminated)
Nov 29 07:53:00 compute-2 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[248453]: [WARNING]  (248457) : All workers exited. Exiting... (0)
Nov 29 07:53:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1701659343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4055466643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4055466643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:00 compute-2 systemd[1]: libpod-9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5.scope: Deactivated successfully.
Nov 29 07:53:00 compute-2 podman[250964]: 2025-11-29 07:53:00.772537231 +0000 UTC m=+0.076425446 container died 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:53:00 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5-userdata-shm.mount: Deactivated successfully.
Nov 29 07:53:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-791219dc6bc483f099d20d1d3bc3e1b4fcfe77afea899c4e6e0b2fd877655b48-merged.mount: Deactivated successfully.
Nov 29 07:53:00 compute-2 podman[250964]: 2025-11-29 07:53:00.818540993 +0000 UTC m=+0.122429198 container cleanup 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:00 compute-2 systemd[1]: libpod-conmon-9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5.scope: Deactivated successfully.
Nov 29 07:53:00 compute-2 podman[251021]: 2025-11-29 07:53:00.887196804 +0000 UTC m=+0.044234847 container remove 9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[495e542c-f7c1-4b50-8b7f-9fc11c10bcac]: (4, ('Sat Nov 29 07:53:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5)\n9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5\nSat Nov 29 07:53:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5)\n9d5b082875d3f71913de0d693020d6ea6d9ddfe57823419158eb36004337aee5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.896 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc36773-b87d-4ef5-acef-09717e901b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.897 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 kernel: tapb746034c-00: left promiscuous mode
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.908 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0893e1b2-9542-46fa-b6bd-0f6bde51e3cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 nova_compute[232428]: 2025-11-29 07:53:00.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.925 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cc530927-4e73-4178-8b3e-2ca0eb6d988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.927 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9460bf53-5b10-410c-a429-15c2f16b6dfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.943 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f4bc37-c9a3-4ebd-a635-94d1ee256d1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561386, 'reachable_time': 37832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251038, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 systemd[1]: run-netns-ovnmeta\x2db746034c\x2d0143\x2d4024\x2d986c\x2d673efea114a3.mount: Deactivated successfully.
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.946 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b746034c-0143-4024-986c-673efea114a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.946 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[6c331fa5-778d-4393-a2c8-4587df3a6cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.949 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecac803-0ffe-4cdf-a724-cbb61954b01b in datapath 5791158c-7fc4-4c56-891c-c8aa0c79ed59 unbound from our chassis
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.951 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5791158c-7fc4-4c56-891c-c8aa0c79ed59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.952 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d961d1fa-3c7b-4926-89fe-a603126bb2d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:00.952 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 namespace which is not needed anymore
Nov 29 07:53:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [NOTICE]   (248550) : haproxy version is 2.8.14-c23fe91
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [NOTICE]   (248550) : path to executable is /usr/sbin/haproxy
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [WARNING]  (248550) : Exiting Master process...
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [WARNING]  (248550) : Exiting Master process...
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [ALERT]    (248550) : Current worker (248552) exited with code 143 (Terminated)
Nov 29 07:53:01 compute-2 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[248544]: [WARNING]  (248550) : All workers exited. Exiting... (0)
Nov 29 07:53:01 compute-2 systemd[1]: libpod-2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318.scope: Deactivated successfully.
Nov 29 07:53:01 compute-2 podman[251057]: 2025-11-29 07:53:01.098088652 +0000 UTC m=+0.047888212 container died 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:53:01 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318-userdata-shm.mount: Deactivated successfully.
Nov 29 07:53:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-b2febf88f4d4b5758b306e81fb1a52cf5bb5374d0ef4873651aebb2c5f56f3ba-merged.mount: Deactivated successfully.
Nov 29 07:53:01 compute-2 podman[251057]: 2025-11-29 07:53:01.128238356 +0000 UTC m=+0.078037916 container cleanup 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:53:01 compute-2 systemd[1]: libpod-conmon-2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318.scope: Deactivated successfully.
Nov 29 07:53:01 compute-2 podman[251085]: 2025-11-29 07:53:01.208411318 +0000 UTC m=+0.054986763 container remove 2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.219 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df997be8-1be3-4abe-b910-7082f1adb31e]: (4, ('Sat Nov 29 07:53:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 (2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318)\n2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318\nSat Nov 29 07:53:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 (2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318)\n2376c562c55f9b7648868605f06d4cd9d748b0d915b38f5ec11755112c422318\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.222 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b74afb70-ea3c-44f7-8803-c122576a3b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.223 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5791158c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.224 232432 INFO nova.virt.libvirt.driver [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deleting instance files /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba_del
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.225 232432 INFO nova.virt.libvirt.driver [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deletion of /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba_del complete
Nov 29 07:53:01 compute-2 kernel: tap5791158c-70: left promiscuous mode
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.227 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.251 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1bf0dc-6aed-4929-9fd7-9b6e90b5ac6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.265 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab05ec6-00c4-4591-97e2-02cb0e616b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.267 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[944856b7-b50d-4f4e-9f9f-e660d262f0d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.285 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6f58cc58-8839-492d-a6c7-ed93315cd46f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561479, 'reachable_time': 26574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251099, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.288 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.288 232432 INFO nova.compute.manager [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 29 07:53:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:01.288 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[1a13fc5c-0e79-4bf0-b241-a49a7b90b999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.288 232432 DEBUG oslo.service.loopingcall [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.289 232432 DEBUG nova.compute.manager [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:53:01 compute-2 nova_compute[232428]: 2025-11-29 07:53:01.289 232432 DEBUG nova.network.neutron [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.446461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781446603, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2711, "num_deletes": 514, "total_data_size": 5421317, "memory_usage": 5500880, "flush_reason": "Manual Compaction"}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781464619, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 2410991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26366, "largest_seqno": 29072, "table_properties": {"data_size": 2402493, "index_size": 4352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24255, "raw_average_key_size": 20, "raw_value_size": 2381918, "raw_average_value_size": 1973, "num_data_blocks": 192, "num_entries": 1207, "num_filter_entries": 1207, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402590, "oldest_key_time": 1764402590, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18242 microseconds, and 10442 cpu microseconds.
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464707) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 2410991 bytes OK
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464742) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467011) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467031) EVENT_LOG_v1 {"time_micros": 1764402781467024, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5408426, prev total WAL file size 5408426, number of live WAL files 2.
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.468851) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373630' seq:0, type:0; will stop at (end)
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(2354KB)], [51(10221KB)]
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781468992, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12878269, "oldest_snapshot_seqno": -1}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5709 keys, 10029362 bytes, temperature: kUnknown
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781560072, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 10029362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9990714, "index_size": 23242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 146383, "raw_average_key_size": 25, "raw_value_size": 9887592, "raw_average_value_size": 1731, "num_data_blocks": 942, "num_entries": 5709, "num_filter_entries": 5709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.560519) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10029362 bytes
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.562845) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.2 rd, 110.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 6676, records dropped: 967 output_compression: NoCompression
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.562872) EVENT_LOG_v1 {"time_micros": 1764402781562860, "job": 30, "event": "compaction_finished", "compaction_time_micros": 91201, "compaction_time_cpu_micros": 32137, "output_level": 6, "num_output_files": 1, "total_output_size": 10029362, "num_input_records": 6676, "num_output_records": 5709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781563831, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781566354, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.468758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.566442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.566449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.566451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.566453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:01.566455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4146737388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:01 compute-2 ceph-mon[77138]: pgmap v1476: 305 pgs: 305 active+clean; 388 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.1 MiB/s wr, 503 op/s
Nov 29 07:53:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3320301644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:01 compute-2 systemd[1]: run-netns-ovnmeta\x2d5791158c\x2d7fc4\x2d4c56\x2d891c\x2dc8aa0c79ed59.mount: Deactivated successfully.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782464149, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 251, "total_data_size": 73349, "memory_usage": 79680, "flush_reason": "Manual Compaction"}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782467194, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 47910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29077, "largest_seqno": 29348, "table_properties": {"data_size": 46026, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4910, "raw_average_key_size": 18, "raw_value_size": 42336, "raw_average_value_size": 158, "num_data_blocks": 5, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402781, "oldest_key_time": 1764402781, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3188 microseconds, and 1159 cpu microseconds.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.467257) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 47910 bytes OK
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.467274) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.468624) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.468642) EVENT_LOG_v1 {"time_micros": 1764402782468636, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.468664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 71250, prev total WAL file size 71250, number of live WAL files 2.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.469571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(46KB)], [54(9794KB)]
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782469837, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 10077272, "oldest_snapshot_seqno": -1}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5466 keys, 7732562 bytes, temperature: kUnknown
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782529523, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 7732562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7697563, "index_size": 20233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 142039, "raw_average_key_size": 25, "raw_value_size": 7600589, "raw_average_value_size": 1390, "num_data_blocks": 808, "num_entries": 5466, "num_filter_entries": 5466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.530252) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7732562 bytes
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532729) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.6 rd, 128.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(371.7) write-amplify(161.4) OK, records in: 5976, records dropped: 510 output_compression: NoCompression
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532750) EVENT_LOG_v1 {"time_micros": 1764402782532739, "job": 32, "event": "compaction_finished", "compaction_time_micros": 60144, "compaction_time_cpu_micros": 20341, "output_level": 6, "num_output_files": 1, "total_output_size": 7732562, "num_input_records": 5976, "num_output_records": 5466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782533257, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 29 07:53:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782535523, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.469030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.535587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.535593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.535595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.535597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:53:02.535598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:02 compute-2 sudo[251100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:02 compute-2 sudo[251100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:02 compute-2 sudo[251100]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:02 compute-2 sudo[251125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:02 compute-2 sudo[251125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:02 compute-2 sudo[251125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.708 232432 DEBUG nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.709 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.709 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.710 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.710 232432 DEBUG nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.710 232432 DEBUG nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.710 232432 DEBUG nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.711 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.711 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.711 232432 DEBUG oslo_concurrency.lockutils [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.711 232432 DEBUG nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.711 232432 WARNING nova.compute.manager [req-582347f6-52d4-4d8a-b148-97aca5a9951b req-c8621a7f-dc79-41b2-89b4-0b09c6533d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state deleting.
Nov 29 07:53:02 compute-2 sudo[251150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:02 compute-2 sudo[251150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:02 compute-2 sudo[251150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:02 compute-2 sudo[251175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:53:02 compute-2 sudo[251175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:02 compute-2 nova_compute[232428]: 2025-11-29 07:53:02.989 232432 DEBUG nova.network.neutron [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:53:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.010 232432 INFO nova.compute.manager [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Took 1.72 seconds to deallocate network for instance.
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.069 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.069 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.136 232432 DEBUG oslo_concurrency.processutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:03.299 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:03.299 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:03.300 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:03 compute-2 podman[251295]: 2025-11-29 07:53:03.439596348 +0000 UTC m=+0.062430887 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:03 compute-2 podman[251295]: 2025-11-29 07:53:03.555913903 +0000 UTC m=+0.178748402 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:53:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2673904979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.609 232432 DEBUG oslo_concurrency.processutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.627 232432 DEBUG nova.compute.provider_tree [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.650 232432 DEBUG nova.scheduler.client.report [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.671 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.706 232432 INFO nova.scheduler.client.report [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Deleted allocations for instance aca637ac-6ef0-42f8-aacf-e022e990aeba
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.817 232432 DEBUG oslo_concurrency.lockutils [None req-9679ce61-bbb6-4ff3-aa04-ad0bb747b118 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:03 compute-2 nova_compute[232428]: 2025-11-29 07:53:03.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2673904979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:04 compute-2 ceph-mon[77138]: pgmap v1477: 305 pgs: 305 active+clean; 388 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 454 op/s
Nov 29 07:53:04 compute-2 podman[251454]: 2025-11-29 07:53:04.354635829 +0000 UTC m=+0.059269198 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:53:04 compute-2 podman[251454]: 2025-11-29 07:53:04.369751193 +0000 UTC m=+0.074384512 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 07:53:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:04.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:04 compute-2 podman[251520]: 2025-11-29 07:53:04.596184817 +0000 UTC m=+0.068074234 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, architecture=x86_64, version=2.2.4, name=keepalived, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 07:53:04 compute-2 podman[251520]: 2025-11-29 07:53:04.621375217 +0000 UTC m=+0.093264624 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64)
Nov 29 07:53:04 compute-2 sudo[251175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:04 compute-2 sudo[251554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:04 compute-2 sudo[251554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:04 compute-2 sudo[251554]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:04 compute-2 sudo[251579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:04 compute-2 sudo[251579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:04 compute-2 sudo[251579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:04 compute-2 sudo[251604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:04 compute-2 sudo[251604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:04 compute-2 sudo[251604]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:04 compute-2 sudo[251629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:53:04 compute-2 sudo[251629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:05 compute-2 sudo[251629]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-2 nova_compute[232428]: 2025-11-29 07:53:05.757 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:53:06 compute-2 ceph-mon[77138]: pgmap v1478: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.0 MiB/s wr, 579 op/s
Nov 29 07:53:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:06.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:06 compute-2 nova_compute[232428]: 2025-11-29 07:53:06.965 232432 DEBUG nova.compute.manager [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:06 compute-2 nova_compute[232428]: 2025-11-29 07:53:06.966 232432 DEBUG nova.compute.manager [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing instance network info cache due to event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:53:06 compute-2 nova_compute[232428]: 2025-11-29 07:53:06.966 232432 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:06 compute-2 nova_compute[232428]: 2025-11-29 07:53:06.966 232432 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:06 compute-2 nova_compute[232428]: 2025-11-29 07:53:06.966 232432 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:53:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:53:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:07.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:53:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:53:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:53:07 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:08 compute-2 ceph-mon[77138]: pgmap v1479: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.5 MiB/s wr, 277 op/s
Nov 29 07:53:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:08.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:08 compute-2 nova_compute[232428]: 2025-11-29 07:53:08.822 232432 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updated VIF entry in instance network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:53:08 compute-2 nova_compute[232428]: 2025-11-29 07:53:08.822 232432 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updating instance_info_cache with network_info: [{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:08 compute-2 nova_compute[232428]: 2025-11-29 07:53:08.857 232432 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:08 compute-2 nova_compute[232428]: 2025-11-29 07:53:08.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:09 compute-2 ceph-mon[77138]: pgmap v1480: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 302 op/s
Nov 29 07:53:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:10 compute-2 nova_compute[232428]: 2025-11-29 07:53:10.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:11 compute-2 ovn_controller[134375]: 2025-11-29T07:53:11Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:34:a8 10.100.0.12
Nov 29 07:53:11 compute-2 ovn_controller[134375]: 2025-11-29T07:53:11Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:34:a8 10.100.0.12
Nov 29 07:53:11 compute-2 ovn_controller[134375]: 2025-11-29T07:53:11Z|00157|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 07:53:11 compute-2 nova_compute[232428]: 2025-11-29 07:53:11.625 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:11 compute-2 ceph-mon[77138]: pgmap v1481: 305 pgs: 305 active+clean; 284 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 342 op/s
Nov 29 07:53:12 compute-2 nova_compute[232428]: 2025-11-29 07:53:12.139 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402777.1378016, bae55d85-4263-4efe-895d-a762627b52ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:12 compute-2 nova_compute[232428]: 2025-11-29 07:53:12.140 232432 INFO nova.compute.manager [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Stopped (Lifecycle Event)
Nov 29 07:53:12 compute-2 nova_compute[232428]: 2025-11-29 07:53:12.317 232432 DEBUG nova.compute.manager [None req-81b546cc-016a-4910-93c5-a650fd5c5655 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:12.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:13 compute-2 ceph-mon[77138]: pgmap v1482: 305 pgs: 305 active+clean; 284 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 227 op/s
Nov 29 07:53:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:13 compute-2 nova_compute[232428]: 2025-11-29 07:53:13.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:14 compute-2 sudo[251689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:14 compute-2 sudo[251689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:14 compute-2 sudo[251689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:14 compute-2 sudo[251714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:53:14 compute-2 sudo[251714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:14 compute-2 sudo[251714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:14.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:14 compute-2 nova_compute[232428]: 2025-11-29 07:53:14.962 232432 DEBUG nova.compute.manager [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:14 compute-2 nova_compute[232428]: 2025-11-29 07:53:14.962 232432 DEBUG nova.compute.manager [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing instance network info cache due to event network-changed-e2d03fc2-63f1-468f-a168-cd009a7ae994. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:53:14 compute-2 nova_compute[232428]: 2025-11-29 07:53:14.963 232432 DEBUG oslo_concurrency.lockutils [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:14 compute-2 nova_compute[232428]: 2025-11-29 07:53:14.963 232432 DEBUG oslo_concurrency.lockutils [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:14 compute-2 nova_compute[232428]: 2025-11-29 07:53:14.963 232432 DEBUG nova.network.neutron [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Refreshing network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:53:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:15 compute-2 nova_compute[232428]: 2025-11-29 07:53:15.719 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402780.716408, aca637ac-6ef0-42f8-aacf-e022e990aeba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:15 compute-2 nova_compute[232428]: 2025-11-29 07:53:15.720 232432 INFO nova.compute.manager [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Stopped (Lifecycle Event)
Nov 29 07:53:15 compute-2 nova_compute[232428]: 2025-11-29 07:53:15.737 232432 DEBUG nova.compute.manager [None req-87dd3921-069a-4177-8d79-f24cd14891ea - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:15 compute-2 nova_compute[232428]: 2025-11-29 07:53:15.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:53:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:16.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:16 compute-2 podman[251740]: 2025-11-29 07:53:16.655563784 +0000 UTC m=+0.061591781 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:16 compute-2 ceph-mon[77138]: pgmap v1483: 305 pgs: 305 active+clean; 318 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 290 op/s
Nov 29 07:53:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.304 232432 DEBUG nova.network.neutron [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updated VIF entry in instance network info cache for port e2d03fc2-63f1-468f-a168-cd009a7ae994. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.305 232432 DEBUG nova.network.neutron [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updating instance_info_cache with network_info: [{"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.322 232432 DEBUG oslo_concurrency.lockutils [req-493ab8eb-fcf4-4803-8729-c0bd3203225e req-8f4737d2-edae-4814-b365-bc86e976adf5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6878c573-6c98-4ab5-86eb-445077de25b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.568 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.569 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.569 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.569 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.569 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.570 232432 INFO nova.compute.manager [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Terminating instance
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.571 232432 DEBUG nova.compute.manager [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:53:17 compute-2 kernel: tape2d03fc2-63 (unregistering): left promiscuous mode
Nov 29 07:53:17 compute-2 NetworkManager[48993]: <info>  [1764402797.8195] device (tape2d03fc2-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:53:17 compute-2 ovn_controller[134375]: 2025-11-29T07:53:17Z|00158|binding|INFO|Releasing lport e2d03fc2-63f1-468f-a168-cd009a7ae994 from this chassis (sb_readonly=0)
Nov 29 07:53:17 compute-2 ovn_controller[134375]: 2025-11-29T07:53:17Z|00159|binding|INFO|Setting lport e2d03fc2-63f1-468f-a168-cd009a7ae994 down in Southbound
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:17 compute-2 ovn_controller[134375]: 2025-11-29T07:53:17Z|00160|binding|INFO|Removing iface tape2d03fc2-63 ovn-installed in OVS
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:17.841 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:34:a8 10.100.0.12'], port_security=['fa:16:3e:93:34:a8 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6878c573-6c98-4ab5-86eb-445077de25b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be6e4a03-649a-413f-8a81-4fef5b740489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1a31b637613411eaeda132dc499537b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6df00dc5-dca9-4705-acff-a62440113d04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b522ee0-0950-4570-a9e8-0fc3c27b4f72, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e2d03fc2-63f1-468f-a168-cd009a7ae994) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:17.842 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e2d03fc2-63f1-468f-a168-cd009a7ae994 in datapath be6e4a03-649a-413f-8a81-4fef5b740489 unbound from our chassis
Nov 29 07:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:17.844 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be6e4a03-649a-413f-8a81-4fef5b740489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:17.846 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9fee5e52-1741-4e46-97d5-ec675d46a383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:17.847 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 namespace which is not needed anymore
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:17 compute-2 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 29 07:53:17 compute-2 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Consumed 14.448s CPU time.
Nov 29 07:53:17 compute-2 systemd-machined[194747]: Machine qemu-20-instance-00000023 terminated.
Nov 29 07:53:17 compute-2 ceph-mon[77138]: pgmap v1484: 305 pgs: 305 active+clean; 318 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 166 op/s
Nov 29 07:53:17 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [NOTICE]   (250804) : haproxy version is 2.8.14-c23fe91
Nov 29 07:53:17 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [NOTICE]   (250804) : path to executable is /usr/sbin/haproxy
Nov 29 07:53:17 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [WARNING]  (250804) : Exiting Master process...
Nov 29 07:53:17 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [ALERT]    (250804) : Current worker (250806) exited with code 143 (Terminated)
Nov 29 07:53:17 compute-2 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[250800]: [WARNING]  (250804) : All workers exited. Exiting... (0)
Nov 29 07:53:17 compute-2 systemd[1]: libpod-c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80.scope: Deactivated successfully.
Nov 29 07:53:17 compute-2 podman[251784]: 2025-11-29 07:53:17.991003107 +0000 UTC m=+0.044143845 container died c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:53:17 compute-2 nova_compute[232428]: 2025-11-29 07:53:17.997 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.011 232432 INFO nova.virt.libvirt.driver [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Instance destroyed successfully.
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.012 232432 DEBUG nova.objects.instance [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'resources' on Instance uuid 6878c573-6c98-4ab5-86eb-445077de25b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:18 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80-userdata-shm.mount: Deactivated successfully.
Nov 29 07:53:18 compute-2 systemd[1]: var-lib-containers-storage-overlay-d6c5019efed694bcfe08efa2f0f33c25f19791b69d46416c634e2bc422cef5b8-merged.mount: Deactivated successfully.
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.038 232432 DEBUG nova.virt.libvirt.vif [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-78620142',display_name='tempest-FloatingIPsAssociationTestJSON-server-78620142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-78620142',id=35,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:52:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-gdx0qrzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:56Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=6878c573-6c98-4ab5-86eb-445077de25b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.039 232432 DEBUG nova.network.os_vif_util [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "address": "fa:16:3e:93:34:a8", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2d03fc2-63", "ovs_interfaceid": "e2d03fc2-63f1-468f-a168-cd009a7ae994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:18 compute-2 podman[251784]: 2025-11-29 07:53:18.040541199 +0000 UTC m=+0.093681937 container cleanup c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.039 232432 DEBUG nova.network.os_vif_util [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.041 232432 DEBUG os_vif [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.043 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2d03fc2-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.046 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.051 232432 INFO os_vif [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:34:a8,bridge_name='br-int',has_traffic_filtering=True,id=e2d03fc2-63f1-468f-a168-cd009a7ae994,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2d03fc2-63')
Nov 29 07:53:18 compute-2 systemd[1]: libpod-conmon-c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80.scope: Deactivated successfully.
Nov 29 07:53:18 compute-2 ovn_controller[134375]: 2025-11-29T07:53:18Z|00161|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.103 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 podman[251825]: 2025-11-29 07:53:18.112756842 +0000 UTC m=+0.046513208 container remove c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.118 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc7a52e-a967-47a9-bb0a-dc89debbf65d]: (4, ('Sat Nov 29 07:53:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 (c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80)\nc91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80\nSat Nov 29 07:53:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 (c91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80)\nc91f26c4bf7a6d7a6f68a6907fe650808b75f228c7d034ba35da8145c6ab6d80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.120 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2b488963-b02a-4523-afad-a69b098eb765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.121 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6e4a03-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.254 232432 DEBUG nova.compute.manager [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-unplugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.254 232432 DEBUG oslo_concurrency.lockutils [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.254 232432 DEBUG oslo_concurrency.lockutils [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.254 232432 DEBUG oslo_concurrency.lockutils [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.255 232432 DEBUG nova.compute.manager [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] No waiting events found dispatching network-vif-unplugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.255 232432 DEBUG nova.compute.manager [req-de9e73f9-546a-4d44-b8f5-2f541aac1680 req-1f70af1b-62fe-4c77-a318-34298ad02111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-unplugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:53:18 compute-2 kernel: tapbe6e4a03-60: left promiscuous mode
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.316 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.318 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[17e0e9cf-1573-4406-81d3-bf52da89f7ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.337 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9857f0b8-cc56-41f9-a1cb-b50b09e80a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2b800f38-7648-4325-9005-6392e41559da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.357 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[58891e43-9933-4986-a3f3-5cd65daf28fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574031, 'reachable_time': 19991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251858, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.359 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:53:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:18.360 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a2765da6-cc88-4f9b-a85c-6eb613199ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:18 compute-2 systemd[1]: run-netns-ovnmeta\x2dbe6e4a03\x2d649a\x2d413f\x2d8a81\x2d4fef5b740489.mount: Deactivated successfully.
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.531 232432 INFO nova.virt.libvirt.driver [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Deleting instance files /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3_del
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.531 232432 INFO nova.virt.libvirt.driver [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Deletion of /var/lib/nova/instances/6878c573-6c98-4ab5-86eb-445077de25b3_del complete
Nov 29 07:53:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.586 232432 INFO nova.compute.manager [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.587 232432 DEBUG oslo.service.loopingcall [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.587 232432 DEBUG nova.compute.manager [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.587 232432 DEBUG nova.network.neutron [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:53:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:18 compute-2 nova_compute[232428]: 2025-11-29 07:53:18.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:19 compute-2 sudo[251860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:19 compute-2 sudo[251860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:19 compute-2 sudo[251860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:19 compute-2 sudo[251885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:19 compute-2 sudo[251885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:19 compute-2 sudo[251885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:19 compute-2 nova_compute[232428]: 2025-11-29 07:53:19.535 232432 DEBUG nova.network.neutron [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:19 compute-2 nova_compute[232428]: 2025-11-29 07:53:19.562 232432 INFO nova.compute.manager [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Took 0.97 seconds to deallocate network for instance.
Nov 29 07:53:19 compute-2 nova_compute[232428]: 2025-11-29 07:53:19.640 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:19 compute-2 nova_compute[232428]: 2025-11-29 07:53:19.641 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:19 compute-2 nova_compute[232428]: 2025-11-29 07:53:19.737 232432 DEBUG oslo_concurrency.processutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:19 compute-2 ceph-mon[77138]: pgmap v1485: 305 pgs: 305 active+clean; 310 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Nov 29 07:53:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3155583134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.157 232432 DEBUG oslo_concurrency.processutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.163 232432 DEBUG nova.compute.provider_tree [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.191 232432 DEBUG nova.scheduler.client.report [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.225 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.259 232432 INFO nova.scheduler.client.report [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Deleted allocations for instance 6878c573-6c98-4ab5-86eb-445077de25b3
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.343 232432 DEBUG oslo_concurrency.lockutils [None req-23faaab1-5b15-4359-885d-c32d986e7b65 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.490 232432 DEBUG nova.compute.manager [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.491 232432 DEBUG oslo_concurrency.lockutils [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.492 232432 DEBUG oslo_concurrency.lockutils [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.492 232432 DEBUG oslo_concurrency.lockutils [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6878c573-6c98-4ab5-86eb-445077de25b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.493 232432 DEBUG nova.compute.manager [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] No waiting events found dispatching network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.493 232432 WARNING nova.compute.manager [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received unexpected event network-vif-plugged-e2d03fc2-63f1-468f-a168-cd009a7ae994 for instance with vm_state deleted and task_state None.
Nov 29 07:53:20 compute-2 nova_compute[232428]: 2025-11-29 07:53:20.494 232432 DEBUG nova.compute.manager [req-61d6b3a9-21c5-4a8c-92c5-f5f113512ae7 req-654d68ec-8b4a-4573-987d-e6c194449fdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Received event network-vif-deleted-e2d03fc2-63f1-468f-a168-cd009a7ae994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:20 compute-2 podman[251932]: 2025-11-29 07:53:20.662912964 +0000 UTC m=+0.069393115 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:53:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3155583134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:21.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:22 compute-2 ceph-mon[77138]: pgmap v1486: 305 pgs: 305 active+clean; 278 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.2 MiB/s wr, 249 op/s
Nov 29 07:53:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:23 compute-2 nova_compute[232428]: 2025-11-29 07:53:23.046 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2282278663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:23 compute-2 nova_compute[232428]: 2025-11-29 07:53:23.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:24 compute-2 ceph-mon[77138]: pgmap v1487: 305 pgs: 305 active+clean; 278 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 4.8 MiB/s wr, 171 op/s
Nov 29 07:53:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2049988182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:25 compute-2 nova_compute[232428]: 2025-11-29 07:53:25.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:25 compute-2 ceph-mon[77138]: pgmap v1488: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 781 KiB/s rd, 6.6 MiB/s wr, 233 op/s
Nov 29 07:53:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:53:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:53:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:27.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:27 compute-2 nova_compute[232428]: 2025-11-29 07:53:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:27 compute-2 nova_compute[232428]: 2025-11-29 07:53:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:53:27 compute-2 nova_compute[232428]: 2025-11-29 07:53:27.223 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:53:27 compute-2 nova_compute[232428]: 2025-11-29 07:53:27.223 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:27 compute-2 ceph-mon[77138]: pgmap v1489: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 462 KiB/s rd, 4.0 MiB/s wr, 170 op/s
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.405 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.405 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.592 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.681 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.681 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.688 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.689 232432 INFO nova.compute.claims [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:53:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1601275792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1601275792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.810 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:28 compute-2 nova_compute[232428]: 2025-11-29 07:53:28.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2564287642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.263 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.271 232432 DEBUG nova.compute.provider_tree [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.291 232432 DEBUG nova.scheduler.client.report [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.322 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.323 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.398 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.399 232432 DEBUG nova.network.neutron [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.419 232432 INFO nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.435 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.553 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.554 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.554 232432 INFO nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Creating image(s)
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.589 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.621 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.656 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.660 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.694 232432 DEBUG nova.network.neutron [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.695 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:53:29 compute-2 podman[251994]: 2025-11-29 07:53:29.703355 +0000 UTC m=+0.104893738 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.748 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.750 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.751 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.751 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.786 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:29 compute-2 nova_compute[232428]: 2025-11-29 07:53:29.790 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2564287642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:29 compute-2 ceph-mon[77138]: pgmap v1490: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 876 KiB/s rd, 4.0 MiB/s wr, 185 op/s
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.239 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.331 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] resizing rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.451 232432 DEBUG nova.objects.instance [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'migration_context' on Instance uuid ae6771f9-37e7-4bc3-b252-6e6ed299a444 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.466 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.466 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Ensure instance console log exists: /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.467 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.467 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.467 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.469 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.473 232432 WARNING nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.478 232432 DEBUG nova.virt.libvirt.host [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.479 232432 DEBUG nova.virt.libvirt.host [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.481 232432 DEBUG nova.virt.libvirt.host [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.481 232432 DEBUG nova.virt.libvirt.host [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.482 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.482 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.483 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.483 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.483 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.483 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.484 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.484 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.484 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.484 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.484 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.485 232432 DEBUG nova.virt.hardware [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.487 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1834427842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.931 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.955 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:30 compute-2 nova_compute[232428]: 2025-11-29 07:53:30.961 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:31.065 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:31.067 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.066 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1379002420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1834427842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:53:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1420708708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.449 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.452 232432 DEBUG nova.objects.instance [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae6771f9-37e7-4bc3-b252-6e6ed299a444 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.474 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <uuid>ae6771f9-37e7-4bc3-b252-6e6ed299a444</uuid>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <name>instance-00000025</name>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:name>tempest-LiveMigrationNegativeTest-server-926226744</nova:name>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:53:30</nova:creationTime>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:user uuid="6ca0284fe9484539925c684d27654f2f">tempest-LiveMigrationNegativeTest-1757760000-project-member</nova:user>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <nova:project uuid="cb511b88d472452f9749846769c119a1">tempest-LiveMigrationNegativeTest-1757760000</nova:project>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <system>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="serial">ae6771f9-37e7-4bc3-b252-6e6ed299a444</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="uuid">ae6771f9-37e7-4bc3-b252-6e6ed299a444</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </system>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <os>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </os>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <features>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </features>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk">
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </source>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config">
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </source>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:53:31 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/console.log" append="off"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <video>
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </video>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:53:31 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:53:31 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:53:31 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:53:31 compute-2 nova_compute[232428]: </domain>
Nov 29 07:53:31 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.553 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.554 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.554 232432 INFO nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Using config drive
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.580 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.758 232432 INFO nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Creating config drive at /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.765 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw43txlar execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.899 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw43txlar" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.926 232432 DEBUG nova.storage.rbd_utils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:31 compute-2 nova_compute[232428]: 2025-11-29 07:53:31.930 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1420708708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2479510454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2985886005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:32 compute-2 ceph-mon[77138]: pgmap v1491: 305 pgs: 305 active+clean; 168 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 250 op/s
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.159 232432 DEBUG oslo_concurrency.processutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config ae6771f9-37e7-4bc3-b252-6e6ed299a444_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.161 232432 INFO nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Deleting local config drive /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444/disk.config because it was imported into RBD.
Nov 29 07:53:32 compute-2 systemd-machined[194747]: New machine qemu-21-instance-00000025.
Nov 29 07:53:32 compute-2 systemd[1]: Started Virtual Machine qemu-21-instance-00000025.
Nov 29 07:53:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.923 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402812.9230452, ae6771f9-37e7-4bc3-b252-6e6ed299a444 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.925 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] VM Resumed (Lifecycle Event)
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.928 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.928 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.934 232432 INFO nova.virt.libvirt.driver [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance spawned successfully.
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.934 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.956 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.960 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.969 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.969 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.970 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.970 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.970 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:32 compute-2 nova_compute[232428]: 2025-11-29 07:53:32.971 232432 DEBUG nova.virt.libvirt.driver [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.002 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.003 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402812.924286, ae6771f9-37e7-4bc3-b252-6e6ed299a444 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.003 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] VM Started (Lifecycle Event)
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.007 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402798.006976, 6878c573-6c98-4ab5-86eb-445077de25b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.007 232432 INFO nova.compute.manager [-] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] VM Stopped (Lifecycle Event)
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.028 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.031 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.033 232432 DEBUG nova.compute.manager [None req-9e25ec81-7c1c-4fbf-a9c4-157bb0f2dcf0 - - - - - -] [instance: 6878c573-6c98-4ab5-86eb-445077de25b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.043 232432 INFO nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Took 3.49 seconds to spawn the instance on the hypervisor.
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.043 232432 DEBUG nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.049 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.104 232432 INFO nova.compute.manager [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Took 4.44 seconds to build instance.
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.131 232432 DEBUG oslo_concurrency.lockutils [None req-e6508ea0-602a-4cbc-a3e5-3fb03c98365e 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1132834006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2698841310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3474148776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.229 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4084388218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.676 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.748 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.749 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.905 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.906 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4624MB free_disk=20.913898468017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.907 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.907 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.972 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance ae6771f9-37e7-4bc3-b252-6e6ed299a444 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.972 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.973 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:53:33 compute-2 nova_compute[232428]: 2025-11-29 07:53:33.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.003 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:34 compute-2 ceph-mon[77138]: pgmap v1492: 305 pgs: 305 active+clean; 168 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Nov 29 07:53:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4084388218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3076684238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.462 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.469 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.487 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.505 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:53:34 compute-2 nova_compute[232428]: 2025-11-29 07:53:34.506 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:53:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:53:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3076684238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:36 compute-2 ceph-mon[77138]: pgmap v1493: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 279 op/s
Nov 29 07:53:36 compute-2 nova_compute[232428]: 2025-11-29 07:53:36.507 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:36.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:53:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:53:37 compute-2 ceph-mon[77138]: pgmap v1494: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Nov 29 07:53:38 compute-2 nova_compute[232428]: 2025-11-29 07:53:38.058 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:38.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3682029325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:38 compute-2 nova_compute[232428]: 2025-11-29 07:53:38.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:53:39.069 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:39 compute-2 sudo[252399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:39 compute-2 sudo[252399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:39 compute-2 sudo[252399]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:39 compute-2 sudo[252424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:39 compute-2 sudo[252424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:39 compute-2 sudo[252424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:39 compute-2 ceph-mon[77138]: pgmap v1495: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 224 op/s
Nov 29 07:53:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:40.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1946674025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2366711324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:41 compute-2 ceph-mon[77138]: pgmap v1496: 305 pgs: 305 active+clean; 197 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.8 MiB/s wr, 233 op/s
Nov 29 07:53:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:43 compute-2 nova_compute[232428]: 2025-11-29 07:53:43.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:43 compute-2 ceph-mon[77138]: pgmap v1497: 305 pgs: 305 active+clean; 197 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Nov 29 07:53:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/171021420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:43 compute-2 nova_compute[232428]: 2025-11-29 07:53:43.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/171021420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:45 compute-2 ceph-mon[77138]: pgmap v1498: 305 pgs: 305 active+clean; 218 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Nov 29 07:53:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:47 compute-2 podman[252453]: 2025-11-29 07:53:47.683225403 +0000 UTC m=+0.064533263 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:53:48 compute-2 nova_compute[232428]: 2025-11-29 07:53:48.061 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:48 compute-2 ceph-mon[77138]: pgmap v1499: 305 pgs: 305 active+clean; 218 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 29 07:53:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:48 compute-2 nova_compute[232428]: 2025-11-29 07:53:48.987 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:50 compute-2 ceph-mon[77138]: pgmap v1500: 305 pgs: 305 active+clean; 220 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Nov 29 07:53:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1634362885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:51 compute-2 podman[252474]: 2025-11-29 07:53:51.671834357 +0000 UTC m=+0.077366935 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.205 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.206 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.207 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.207 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.208 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.210 232432 INFO nova.compute.manager [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Terminating instance
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.212 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "refresh_cache-ae6771f9-37e7-4bc3-b252-6e6ed299a444" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.212 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquired lock "refresh_cache-ae6771f9-37e7-4bc3-b252-6e6ed299a444" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.213 232432 DEBUG nova.network.neutron [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:53:52 compute-2 ceph-mon[77138]: pgmap v1501: 305 pgs: 305 active+clean; 200 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 191 op/s
Nov 29 07:53:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:52 compute-2 nova_compute[232428]: 2025-11-29 07:53:52.688 232432 DEBUG nova.network.neutron [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.023 232432 DEBUG nova.network.neutron [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.040 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Releasing lock "refresh_cache-ae6771f9-37e7-4bc3-b252-6e6ed299a444" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.041 232432 DEBUG nova.compute.manager [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:53 compute-2 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 29 07:53:53 compute-2 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000025.scope: Consumed 14.189s CPU time.
Nov 29 07:53:53 compute-2 systemd-machined[194747]: Machine qemu-21-instance-00000025 terminated.
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.268 232432 INFO nova.virt.libvirt.driver [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance destroyed successfully.
Nov 29 07:53:53 compute-2 nova_compute[232428]: 2025-11-29 07:53:53.268 232432 DEBUG nova.objects.instance [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'resources' on Instance uuid ae6771f9-37e7-4bc3-b252-6e6ed299a444 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:54 compute-2 nova_compute[232428]: 2025-11-29 07:53:54.031 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:54 compute-2 ceph-mon[77138]: pgmap v1502: 305 pgs: 305 active+clean; 200 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 166 op/s
Nov 29 07:53:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1864284547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.596 232432 INFO nova.virt.libvirt.driver [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Deleting instance files /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444_del
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.597 232432 INFO nova.virt.libvirt.driver [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Deletion of /var/lib/nova/instances/ae6771f9-37e7-4bc3-b252-6e6ed299a444_del complete
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.657 232432 INFO nova.compute.manager [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Took 2.62 seconds to destroy the instance on the hypervisor.
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.658 232432 DEBUG oslo.service.loopingcall [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.658 232432 DEBUG nova.compute.manager [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.658 232432 DEBUG nova.network.neutron [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.866 232432 DEBUG nova.network.neutron [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.883 232432 DEBUG nova.network.neutron [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.903 232432 INFO nova.compute.manager [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Took 0.24 seconds to deallocate network for instance.
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.964 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:55 compute-2 nova_compute[232428]: 2025-11-29 07:53:55.965 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.028 232432 DEBUG oslo_concurrency.processutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/825290622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:56 compute-2 ceph-mon[77138]: pgmap v1503: 305 pgs: 305 active+clean; 147 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 189 op/s
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.469 232432 DEBUG oslo_concurrency.processutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.481 232432 DEBUG nova.compute.provider_tree [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.502 232432 DEBUG nova.scheduler.client.report [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.530 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.567 232432 INFO nova.scheduler.client.report [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Deleted allocations for instance ae6771f9-37e7-4bc3-b252-6e6ed299a444
Nov 29 07:53:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:56 compute-2 nova_compute[232428]: 2025-11-29 07:53:56.620 232432 DEBUG oslo_concurrency.lockutils [None req-0ca41b15-4e24-4703-81a2-587b99f6f2aa 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "ae6771f9-37e7-4bc3-b252-6e6ed299a444" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/825290622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:58 compute-2 nova_compute[232428]: 2025-11-29 07:53:58.065 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:58 compute-2 ceph-mon[77138]: pgmap v1504: 305 pgs: 305 active+clean; 147 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 2.8 MiB/s wr, 101 op/s
Nov 29 07:53:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:53:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:53:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:59 compute-2 nova_compute[232428]: 2025-11-29 07:53:59.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:53:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:53:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:59.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:53:59 compute-2 sudo[252542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:59 compute-2 sudo[252542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:59 compute-2 sudo[252542]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:59 compute-2 ceph-mon[77138]: pgmap v1505: 305 pgs: 305 active+clean; 167 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 3.4 MiB/s wr, 134 op/s
Nov 29 07:53:59 compute-2 sudo[252567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:59 compute-2 sudo[252567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:59 compute-2 sudo[252567]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:59 compute-2 podman[252591]: 2025-11-29 07:53:59.903425727 +0000 UTC m=+0.140477172 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 07:54:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4110876370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2958552636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:01 compute-2 ceph-mon[77138]: pgmap v1506: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 29 07:54:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:03.300 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:03.301 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:03.301 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:03 compute-2 ceph-mon[77138]: pgmap v1507: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.834 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.834 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.860 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:54:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.964 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.965 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.972 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:54:03 compute-2 nova_compute[232428]: 2025-11-29 07:54:03.973 232432 INFO nova.compute.claims [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.035 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.126 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4272651705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.573 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.580 232432 DEBUG nova.compute.provider_tree [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.594 232432 DEBUG nova.scheduler.client.report [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:54:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.618 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.619 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.664 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.665 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.685 232432 INFO nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.706 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.812 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.814 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.814 232432 INFO nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Creating image(s)
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.841 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.873 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.902 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:04 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.907 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:04.998 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.001 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.002 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.003 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.041 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.046 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.110 232432 DEBUG nova.policy [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3905209925cd414980eac7c79bf04af2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6ba183d92db4c6795dd0f44dc77fad4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:54:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4272651705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:05 compute-2 nova_compute[232428]: 2025-11-29 07:54:05.940 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.025 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] resizing rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.142 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Successfully created port: 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:54:06 compute-2 ceph-mon[77138]: pgmap v1508: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.265 232432 DEBUG nova.objects.instance [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lazy-loading 'migration_context' on Instance uuid 9554d48d-d298-43f9-a68d-c8f52fe2cc33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.340 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.341 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Ensure instance console log exists: /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.341 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.342 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:06 compute-2 nova_compute[232428]: 2025-11-29 07:54:06.342 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:07 compute-2 ceph-mon[77138]: pgmap v1509: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 707 KiB/s rd, 607 KiB/s wr, 67 op/s
Nov 29 07:54:07 compute-2 nova_compute[232428]: 2025-11-29 07:54:07.948 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Successfully updated port: 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:54:07 compute-2 nova_compute[232428]: 2025-11-29 07:54:07.967 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:54:07 compute-2 nova_compute[232428]: 2025-11-29 07:54:07.967 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquired lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:54:07 compute-2 nova_compute[232428]: 2025-11-29 07:54:07.968 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.070 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.112 232432 DEBUG nova.compute.manager [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-changed-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.113 232432 DEBUG nova.compute.manager [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Refreshing instance network info cache due to event network-changed-02fd52e9-af0d-4291-bff2-e68ba5bf7a70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.113 232432 DEBUG oslo_concurrency.lockutils [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.206 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.266 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402833.2652583, ae6771f9-37e7-4bc3-b252-6e6ed299a444 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.267 232432 INFO nova.compute.manager [-] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] VM Stopped (Lifecycle Event)
Nov 29 07:54:08 compute-2 nova_compute[232428]: 2025-11-29 07:54:08.299 232432 DEBUG nova.compute.manager [None req-9c2ebc5e-a30e-485d-99c2-75eb012ff381 - - - - - -] [instance: ae6771f9-37e7-4bc3-b252-6e6ed299a444] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.038 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.477 232432 DEBUG nova.network.neutron [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updating instance_info_cache with network_info: [{"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.507 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Releasing lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.508 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Instance network_info: |[{"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.510 232432 DEBUG oslo_concurrency.lockutils [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.511 232432 DEBUG nova.network.neutron [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Refreshing network info cache for port 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.519 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Start _get_guest_xml network_info=[{"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.528 232432 WARNING nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.533 232432 DEBUG nova.virt.libvirt.host [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.535 232432 DEBUG nova.virt.libvirt.host [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.538 232432 DEBUG nova.virt.libvirt.host [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.539 232432 DEBUG nova.virt.libvirt.host [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.540 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.541 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.541 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.542 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.542 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.542 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.542 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.543 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.543 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.543 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.544 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.544 232432 DEBUG nova.virt.hardware [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:54:09 compute-2 nova_compute[232428]: 2025-11-29 07:54:09.547 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:09 compute-2 ceph-mon[77138]: pgmap v1510: 305 pgs: 305 active+clean; 182 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Nov 29 07:54:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3798158231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.028 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.056 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.063 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1612147892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.527 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.529 232432 DEBUG nova.virt.libvirt.vif [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1246751040',display_name='tempest-ImagesOneServerTestJSON-server-1246751040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1246751040',id=40,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6ba183d92db4c6795dd0f44dc77fad4',ramdisk_id='',reservation_id='r-40qgay24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2038682252',owner_user_name='tempest-ImagesOneServerTestJSON-2038682252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:04Z,user_data=None,user_id='3905209925cd414980eac7c79bf04af2',uuid=9554d48d-d298-43f9-a68d-c8f52fe2cc33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.529 232432 DEBUG nova.network.os_vif_util [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converting VIF {"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.530 232432 DEBUG nova.network.os_vif_util [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.531 232432 DEBUG nova.objects.instance [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9554d48d-d298-43f9-a68d-c8f52fe2cc33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.550 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <uuid>9554d48d-d298-43f9-a68d-c8f52fe2cc33</uuid>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <name>instance-00000028</name>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1246751040</nova:name>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:54:09</nova:creationTime>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:user uuid="3905209925cd414980eac7c79bf04af2">tempest-ImagesOneServerTestJSON-2038682252-project-member</nova:user>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:project uuid="e6ba183d92db4c6795dd0f44dc77fad4">tempest-ImagesOneServerTestJSON-2038682252</nova:project>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <nova:port uuid="02fd52e9-af0d-4291-bff2-e68ba5bf7a70">
Nov 29 07:54:10 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <system>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="serial">9554d48d-d298-43f9-a68d-c8f52fe2cc33</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="uuid">9554d48d-d298-43f9-a68d-c8f52fe2cc33</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </system>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <os>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </os>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <features>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </features>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk">
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config">
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </source>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:54:10 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:54:f7:65"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <target dev="tap02fd52e9-af"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/console.log" append="off"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <video>
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </video>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:54:10 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:54:10 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:54:10 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:54:10 compute-2 nova_compute[232428]: </domain>
Nov 29 07:54:10 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.552 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Preparing to wait for external event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.552 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.552 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.552 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.553 232432 DEBUG nova.virt.libvirt.vif [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1246751040',display_name='tempest-ImagesOneServerTestJSON-server-1246751040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1246751040',id=40,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6ba183d92db4c6795dd0f44dc77fad4',ramdisk_id='',reservation_id='r-40qgay24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2038682252',owner_user_name='tempest-ImagesOneServerTestJSON-2038682252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:04Z,user_data=None,user_id='3905209925cd414980eac7c79bf04af2',uuid=9554d48d-d298-43f9-a68d-c8f52fe2cc33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.553 232432 DEBUG nova.network.os_vif_util [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converting VIF {"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.554 232432 DEBUG nova.network.os_vif_util [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.554 232432 DEBUG os_vif [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.555 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.555 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.556 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.559 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02fd52e9-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.560 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02fd52e9-af, col_values=(('external_ids', {'iface-id': '02fd52e9-af0d-4291-bff2-e68ba5bf7a70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:f7:65', 'vm-uuid': '9554d48d-d298-43f9-a68d-c8f52fe2cc33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.561 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-2 NetworkManager[48993]: <info>  [1764402850.5627] manager: (tap02fd52e9-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.564 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.569 232432 INFO os_vif [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af')
Nov 29 07:54:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.633 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.633 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.633 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] No VIF found with MAC fa:16:3e:54:f7:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.634 232432 INFO nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Using config drive
Nov 29 07:54:10 compute-2 nova_compute[232428]: 2025-11-29 07:54:10.660 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3798158231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3805871239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3805871239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1612147892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:11.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.220 232432 INFO nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Creating config drive at /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.226 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs8zn328 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.366 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs8zn328" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.405 232432 DEBUG nova.storage.rbd_utils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] rbd image 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.410 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.701 232432 DEBUG oslo_concurrency.processutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config 9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.702 232432 INFO nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Deleting local config drive /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33/disk.config because it was imported into RBD.
Nov 29 07:54:11 compute-2 ceph-mon[77138]: pgmap v1511: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 29 07:54:11 compute-2 kernel: tap02fd52e9-af: entered promiscuous mode
Nov 29 07:54:11 compute-2 NetworkManager[48993]: <info>  [1764402851.7631] manager: (tap02fd52e9-af): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 07:54:11 compute-2 ovn_controller[134375]: 2025-11-29T07:54:11Z|00162|binding|INFO|Claiming lport 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 for this chassis.
Nov 29 07:54:11 compute-2 ovn_controller[134375]: 2025-11-29T07:54:11Z|00163|binding|INFO|02fd52e9-af0d-4291-bff2-e68ba5bf7a70: Claiming fa:16:3e:54:f7:65 10.100.0.6
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.773 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-2 systemd-udevd[252949]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:54:11 compute-2 systemd-machined[194747]: New machine qemu-22-instance-00000028.
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.797 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f7:65 10.100.0.6'], port_security=['fa:16:3e:54:f7:65 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9554d48d-d298-43f9-a68d-c8f52fe2cc33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6ba183d92db4c6795dd0f44dc77fad4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ad50f6f-f9d0-428c-8a64-de2a144e5c7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaae3603-798f-4bda-b3f1-1c10a1927c46, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=02fd52e9-af0d-4291-bff2-e68ba5bf7a70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.798 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 in datapath 2fb8dc00-ab99-4b85-bc43-2d5f32594f21 bound to our chassis
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.800 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fb8dc00-ab99-4b85-bc43-2d5f32594f21
Nov 29 07:54:11 compute-2 NetworkManager[48993]: <info>  [1764402851.8052] device (tap02fd52e9-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:54:11 compute-2 NetworkManager[48993]: <info>  [1764402851.8061] device (tap02fd52e9-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[71353858-6336-4e35-b81f-d64a53fb7346]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.812 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fb8dc00-a1 in ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.815 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fb8dc00-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.815 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea28524a-438a-449a-ae1a-29baf068c74d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.816 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[44287c4c-f0b4-4298-821a-38f12e71f15f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 systemd[1]: Started Virtual Machine qemu-22-instance-00000028.
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.831 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[709bddd6-dcb2-4a7a-ab31-c19ec225a048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-2 ovn_controller[134375]: 2025-11-29T07:54:11Z|00164|binding|INFO|Setting lport 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 ovn-installed in OVS
Nov 29 07:54:11 compute-2 ovn_controller[134375]: 2025-11-29T07:54:11Z|00165|binding|INFO|Setting lport 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 up in Southbound
Nov 29 07:54:11 compute-2 nova_compute[232428]: 2025-11-29 07:54:11.847 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.857 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7d7bb3-b025-4297-a319-4e1b2573cd8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.885 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[afc0325d-be28-4597-a755-54fd68c3e2e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[72ac664c-268a-4813-ab70-28a08eb0395c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 NetworkManager[48993]: <info>  [1764402851.8926] manager: (tap2fb8dc00-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.923 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[07e654e7-8f93-40de-8c25-410d1b9c13ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.926 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[73d17762-5758-4d0d-b330-33a2ada7ec64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 NetworkManager[48993]: <info>  [1764402851.9500] device (tap2fb8dc00-a0): carrier: link connected
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.957 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[be87cf8c-9cc5-4a71-8008-65bb807cf472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.975 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c29b39-1bb7-4a63-8358-511c953260c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fb8dc00-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:9f:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581821, 'reachable_time': 32989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252983, 'error': None, 'target': 'ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:11.997 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e76ff2c-694c-4c24-af05-f6972099578d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:9f7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581821, 'tstamp': 581821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252991, 'error': None, 'target': 'ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.016 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4d193994-73ac-48a5-ac07-501135f6ca44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fb8dc00-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:9f:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581821, 'reachable_time': 32989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253001, 'error': None, 'target': 'ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.055 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aa60984b-d088-4221-8e3c-a049e0e50f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.132 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09fc680e-0787-4cf2-a64a-aa72e9086460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.133 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb8dc00-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.133 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.133 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fb8dc00-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.135 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:12 compute-2 NetworkManager[48993]: <info>  [1764402852.1366] manager: (tap2fb8dc00-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 07:54:12 compute-2 kernel: tap2fb8dc00-a0: entered promiscuous mode
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.138 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.139 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fb8dc00-a0, col_values=(('external_ids', {'iface-id': '46eebea1-719e-425a-abd2-07b83fa65825'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:12 compute-2 ovn_controller[134375]: 2025-11-29T07:54:12Z|00166|binding|INFO|Releasing lport 46eebea1-719e-425a-abd2-07b83fa65825 from this chassis (sb_readonly=0)
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.170 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.172 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fb8dc00-ab99-4b85-bc43-2d5f32594f21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fb8dc00-ab99-4b85-bc43-2d5f32594f21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.173 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1bc8fa-74bc-4388-8b1c-a9f705c72a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.174 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-2fb8dc00-ab99-4b85-bc43-2d5f32594f21
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/2fb8dc00-ab99-4b85-bc43-2d5f32594f21.pid.haproxy
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 2fb8dc00-ab99-4b85-bc43-2d5f32594f21
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:54:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:12.175 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'env', 'PROCESS_TAG=haproxy-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fb8dc00-ab99-4b85-bc43-2d5f32594f21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.181 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402852.180577, 9554d48d-d298-43f9-a68d-c8f52fe2cc33 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.181 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] VM Started (Lifecycle Event)
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.187 232432 DEBUG nova.network.neutron [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updated VIF entry in instance network info cache for port 02fd52e9-af0d-4291-bff2-e68ba5bf7a70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.187 232432 DEBUG nova.network.neutron [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updating instance_info_cache with network_info: [{"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.267 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.272 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402852.180852, 9554d48d-d298-43f9-a68d-c8f52fe2cc33 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.272 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] VM Paused (Lifecycle Event)
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.297 232432 DEBUG oslo_concurrency.lockutils [req-5dfc9b88-69d6-4136-84bf-7e42eebf1c41 req-92d258c0-612c-469b-8f3c-8707b39467dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.304 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.308 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:54:12 compute-2 nova_compute[232428]: 2025-11-29 07:54:12.356 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:54:12 compute-2 podman[253059]: 2025-11-29 07:54:12.55974753 +0000 UTC m=+0.059451174 container create 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:54:12 compute-2 systemd[1]: Started libpod-conmon-1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7.scope.
Nov 29 07:54:12 compute-2 podman[253059]: 2025-11-29 07:54:12.525255339 +0000 UTC m=+0.024959023 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:54:12 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:54:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04374a2c3275098374e86e2240d66da98f0657cdb29af16f0978de9e9a097777/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:12 compute-2 podman[253059]: 2025-11-29 07:54:12.63856783 +0000 UTC m=+0.138271494 container init 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 07:54:12 compute-2 podman[253059]: 2025-11-29 07:54:12.644270779 +0000 UTC m=+0.143974413 container start 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:54:12 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [NOTICE]   (253078) : New worker (253080) forked
Nov 29 07:54:12 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [NOTICE]   (253078) : Loading success.
Nov 29 07:54:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:13 compute-2 ceph-mon[77138]: pgmap v1512: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 29 07:54:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:14 compute-2 sudo[253090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:14 compute-2 sudo[253090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:14 compute-2 sudo[253090]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:14 compute-2 sudo[253115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:14 compute-2 sudo[253115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:14 compute-2 sudo[253115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:14 compute-2 sudo[253140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:14 compute-2 sudo[253140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:14 compute-2 sudo[253140]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:14 compute-2 sudo[253165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:54:14 compute-2 sudo[253165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2769963415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.853 232432 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.853 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.853 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Processing event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.854 232432 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] No waiting events found dispatching network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.855 232432 WARNING nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received unexpected event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 for instance with vm_state building and task_state spawning.
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.855 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.860 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402854.8604052, 9554d48d-d298-43f9-a68d-c8f52fe2cc33 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.861 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] VM Resumed (Lifecycle Event)
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.863 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.867 232432 INFO nova.virt.libvirt.driver [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Instance spawned successfully.
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.868 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.890 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.896 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.902 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.903 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.903 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.904 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.904 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.904 232432 DEBUG nova.virt.libvirt.driver [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.933 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.975 232432 INFO nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 10.16 seconds to spawn the instance on the hypervisor.
Nov 29 07:54:14 compute-2 nova_compute[232428]: 2025-11-29 07:54:14.976 232432 DEBUG nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:15 compute-2 nova_compute[232428]: 2025-11-29 07:54:15.034 232432 INFO nova.compute.manager [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 11.10 seconds to build instance.
Nov 29 07:54:15 compute-2 nova_compute[232428]: 2025-11-29 07:54:15.059 232432 DEBUG oslo_concurrency.lockutils [None req-0eb8b227-b02c-4d87-8d76-4a26d6291169 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:15 compute-2 sudo[253165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:15 compute-2 nova_compute[232428]: 2025-11-29 07:54:15.561 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:54:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:15 compute-2 ceph-mon[77138]: pgmap v1513: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 29 07:54:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:17.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194464338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194464338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3194464338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3194464338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:18 compute-2 nova_compute[232428]: 2025-11-29 07:54:18.233 232432 DEBUG nova.compute.manager [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:18 compute-2 nova_compute[232428]: 2025-11-29 07:54:18.270 232432 INFO nova.compute.manager [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] instance snapshotting
Nov 29 07:54:18 compute-2 nova_compute[232428]: 2025-11-29 07:54:18.544 232432 INFO nova.virt.libvirt.driver [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Beginning live snapshot process
Nov 29 07:54:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 07:54:18 compute-2 ceph-mon[77138]: pgmap v1514: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 29 07:54:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:18 compute-2 podman[253225]: 2025-11-29 07:54:18.666233434 +0000 UTC m=+0.065395321 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:54:18 compute-2 nova_compute[232428]: 2025-11-29 07:54:18.755 232432 DEBUG nova.virt.libvirt.imagebackend [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 07:54:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:19 compute-2 nova_compute[232428]: 2025-11-29 07:54:19.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:19.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:19 compute-2 nova_compute[232428]: 2025-11-29 07:54:19.153 232432 DEBUG nova.storage.rbd_utils [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] creating snapshot(2b64f300636b41b9a52ef6208836a5c7) on rbd image(9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:19 compute-2 sudo[253297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:19 compute-2 sudo[253297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:19 compute-2 sudo[253297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:19 compute-2 sudo[253322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:19 compute-2 sudo[253322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:19 compute-2 sudo[253322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 07:54:20 compute-2 ceph-mon[77138]: osdmap e170: 3 total, 3 up, 3 in
Nov 29 07:54:20 compute-2 nova_compute[232428]: 2025-11-29 07:54:20.239 232432 DEBUG nova.storage.rbd_utils [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] cloning vms/9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk@2b64f300636b41b9a52ef6208836a5c7 to images/7125fca0-e10e-42cb-acad-616c180d5096 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 07:54:20 compute-2 nova_compute[232428]: 2025-11-29 07:54:20.486 232432 DEBUG nova.storage.rbd_utils [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] flattening images/7125fca0-e10e-42cb-acad-616c180d5096 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 07:54:20 compute-2 nova_compute[232428]: 2025-11-29 07:54:20.564 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:20 compute-2 nova_compute[232428]: 2025-11-29 07:54:20.838 232432 DEBUG nova.storage.rbd_utils [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] removing snapshot(2b64f300636b41b9a52ef6208836a5c7) on rbd image(9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:54:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:21.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 07:54:21 compute-2 ceph-mon[77138]: pgmap v1516: 305 pgs: 305 active+clean; 175 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 07:54:21 compute-2 ceph-mon[77138]: osdmap e171: 3 total, 3 up, 3 in
Nov 29 07:54:21 compute-2 nova_compute[232428]: 2025-11-29 07:54:21.561 232432 DEBUG nova.storage.rbd_utils [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] creating snapshot(snap) on rbd image(7125fca0-e10e-42cb-acad-616c180d5096) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:22 compute-2 ceph-mon[77138]: osdmap e172: 3 total, 3 up, 3 in
Nov 29 07:54:22 compute-2 ceph-mon[77138]: pgmap v1519: 305 pgs: 305 active+clean; 134 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.5 MiB/s wr, 253 op/s
Nov 29 07:54:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:54:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 07:54:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:22 compute-2 podman[253438]: 2025-11-29 07:54:22.662994604 +0000 UTC m=+0.066996520 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 07:54:22 compute-2 sudo[253458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:22 compute-2 sudo[253458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:22 compute-2 sudo[253458]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:22 compute-2 sudo[253483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:54:22 compute-2 sudo[253483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:22 compute-2 sudo[253483]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:23.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:23 compute-2 ceph-mon[77138]: osdmap e173: 3 total, 3 up, 3 in
Nov 29 07:54:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:54:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/449062815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:24 compute-2 nova_compute[232428]: 2025-11-29 07:54:24.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:24 compute-2 nova_compute[232428]: 2025-11-29 07:54:24.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:24 compute-2 ceph-mon[77138]: pgmap v1521: 305 pgs: 305 active+clean; 134 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 9.1 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 07:54:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:25 compute-2 nova_compute[232428]: 2025-11-29 07:54:25.008 232432 INFO nova.virt.libvirt.driver [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Snapshot image upload complete
Nov 29 07:54:25 compute-2 nova_compute[232428]: 2025-11-29 07:54:25.009 232432 INFO nova.compute.manager [None req-f12c6a4c-10b0-4ad1-b6aa-6208e9e8a24a 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 6.74 seconds to snapshot the instance on the hypervisor.
Nov 29 07:54:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:25 compute-2 nova_compute[232428]: 2025-11-29 07:54:25.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:25 compute-2 ceph-mon[77138]: pgmap v1522: 305 pgs: 305 active+clean; 155 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 5.6 MiB/s wr, 310 op/s
Nov 29 07:54:26 compute-2 nova_compute[232428]: 2025-11-29 07:54:26.219 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:27.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:27 compute-2 nova_compute[232428]: 2025-11-29 07:54:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:27 compute-2 nova_compute[232428]: 2025-11-29 07:54:27.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:54:27 compute-2 nova_compute[232428]: 2025-11-29 07:54:27.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:54:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 07:54:28 compute-2 nova_compute[232428]: 2025-11-29 07:54:28.432 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:54:28 compute-2 nova_compute[232428]: 2025-11-29 07:54:28.432 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:54:28 compute-2 nova_compute[232428]: 2025-11-29 07:54:28.433 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:54:28 compute-2 nova_compute[232428]: 2025-11-29 07:54:28.433 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9554d48d-d298-43f9-a68d-c8f52fe2cc33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:28.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:28 compute-2 ceph-mon[77138]: osdmap e174: 3 total, 3 up, 3 in
Nov 29 07:54:28 compute-2 ceph-mon[77138]: pgmap v1524: 305 pgs: 305 active+clean; 155 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 115 op/s
Nov 29 07:54:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3628032023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2288473742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2288473742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:29 compute-2 nova_compute[232428]: 2025-11-29 07:54:29.047 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:29.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:29 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 07:54:30 compute-2 nova_compute[232428]: 2025-11-29 07:54:30.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:30.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:30 compute-2 podman[253512]: 2025-11-29 07:54:30.711772687 +0000 UTC m=+0.099282522 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:54:30 compute-2 ceph-mon[77138]: pgmap v1525: 305 pgs: 305 active+clean; 155 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 107 op/s
Nov 29 07:54:31 compute-2 sshd-session[253524]: Invalid user sol from 45.148.10.240 port 60988
Nov 29 07:54:31 compute-2 sshd-session[253524]: Connection closed by invalid user sol 45.148.10.240 port 60988 [preauth]
Nov 29 07:54:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:31.588 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:31.590 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.636 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updating instance_info_cache with network_info: [{"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.654 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-9554d48d-d298-43f9-a68d-c8f52fe2cc33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.654 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.654 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.656 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.656 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.656 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:31 compute-2 nova_compute[232428]: 2025-11-29 07:54:31.656 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:54:32 compute-2 nova_compute[232428]: 2025-11-29 07:54:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 07:54:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/507080374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:32 compute-2 ceph-mon[77138]: pgmap v1526: 305 pgs: 305 active+clean; 171 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Nov 29 07:54:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:33.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:34 compute-2 nova_compute[232428]: 2025-11-29 07:54:34.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:34 compute-2 ovn_controller[134375]: 2025-11-29T07:54:34Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:f7:65 10.100.0.6
Nov 29 07:54:34 compute-2 ovn_controller[134375]: 2025-11-29T07:54:34Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f7:65 10.100.0.6
Nov 29 07:54:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:54:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:35.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:35 compute-2 ceph-mon[77138]: osdmap e175: 3 total, 3 up, 3 in
Nov 29 07:54:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3512358792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2528106426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.247 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.248 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.248 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.248 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.249 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1595821932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:35 compute-2 nova_compute[232428]: 2025-11-29 07:54:35.770 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.306 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.307 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.523 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.525 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4505MB free_disk=20.94647979736328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.525 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:36 compute-2 nova_compute[232428]: 2025-11-29 07:54:36.525 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2089465281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:37 compute-2 ceph-mon[77138]: pgmap v1528: 305 pgs: 305 active+clean; 171 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 MiB/s wr, 48 op/s
Nov 29 07:54:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2393006790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:37 compute-2 ceph-mon[77138]: pgmap v1529: 305 pgs: 305 active+clean; 176 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 100 op/s
Nov 29 07:54:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1595821932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:37 compute-2 nova_compute[232428]: 2025-11-29 07:54:37.234 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9554d48d-d298-43f9-a68d-c8f52fe2cc33 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:54:37 compute-2 nova_compute[232428]: 2025-11-29 07:54:37.235 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:54:37 compute-2 nova_compute[232428]: 2025-11-29 07:54:37.236 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:54:37 compute-2 nova_compute[232428]: 2025-11-29 07:54:37.309 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/616846377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:38 compute-2 nova_compute[232428]: 2025-11-29 07:54:38.315 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:38 compute-2 nova_compute[232428]: 2025-11-29 07:54:38.324 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:54:38 compute-2 nova_compute[232428]: 2025-11-29 07:54:38.362 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:54:38 compute-2 nova_compute[232428]: 2025-11-29 07:54:38.386 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:54:38 compute-2 nova_compute[232428]: 2025-11-29 07:54:38.387 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 07:54:38 compute-2 ceph-mon[77138]: pgmap v1530: 305 pgs: 305 active+clean; 176 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 84 op/s
Nov 29 07:54:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:54:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:54:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:39 compute-2 nova_compute[232428]: 2025-11-29 07:54:39.087 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:39.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:39 compute-2 nova_compute[232428]: 2025-11-29 07:54:39.388 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/616846377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:39 compute-2 ceph-mon[77138]: osdmap e176: 3 total, 3 up, 3 in
Nov 29 07:54:40 compute-2 sudo[253590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:40 compute-2 sudo[253590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:40 compute-2 sudo[253590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:40 compute-2 sudo[253615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:40 compute-2 sudo[253615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:40 compute-2 sudo[253615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:40 compute-2 nova_compute[232428]: 2025-11-29 07:54:40.573 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:54:40.592 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:40 compute-2 nova_compute[232428]: 2025-11-29 07:54:40.680 232432 DEBUG nova.compute.manager [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:40 compute-2 ceph-mon[77138]: pgmap v1532: 305 pgs: 305 active+clean; 229 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 112 op/s
Nov 29 07:54:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1007932061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1238365819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/956393232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/87760384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:40 compute-2 nova_compute[232428]: 2025-11-29 07:54:40.841 232432 INFO nova.compute.manager [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] instance snapshotting
Nov 29 07:54:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:41.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:42 compute-2 ceph-mon[77138]: pgmap v1533: 305 pgs: 305 active+clean; 241 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 117 op/s
Nov 29 07:54:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:42 compute-2 nova_compute[232428]: 2025-11-29 07:54:42.984 232432 INFO nova.virt.libvirt.driver [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Beginning live snapshot process
Nov 29 07:54:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:43.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:43 compute-2 nova_compute[232428]: 2025-11-29 07:54:43.179 232432 DEBUG nova.virt.libvirt.imagebackend [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 07:54:43 compute-2 nova_compute[232428]: 2025-11-29 07:54:43.458 232432 DEBUG nova.storage.rbd_utils [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] creating snapshot(9dee3a0d6ca34b5fad2e3329ed13cccd) on rbd image(9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 07:54:43 compute-2 nova_compute[232428]: 2025-11-29 07:54:43.680 232432 DEBUG nova.storage.rbd_utils [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] cloning vms/9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk@9dee3a0d6ca34b5fad2e3329ed13cccd to images/2c100ba6-aad9-4f8a-a5b0-c35f5675fe20 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 07:54:43 compute-2 nova_compute[232428]: 2025-11-29 07:54:43.824 232432 DEBUG nova.storage.rbd_utils [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] flattening images/2c100ba6-aad9-4f8a-a5b0-c35f5675fe20 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 07:54:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:44 compute-2 nova_compute[232428]: 2025-11-29 07:54:44.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:44 compute-2 ceph-mon[77138]: osdmap e177: 3 total, 3 up, 3 in
Nov 29 07:54:44 compute-2 ceph-mon[77138]: pgmap v1535: 305 pgs: 305 active+clean; 241 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 5.4 MiB/s wr, 77 op/s
Nov 29 07:54:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:45.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:46 compute-2 nova_compute[232428]: 2025-11-29 07:54:46.114 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:46 compute-2 ceph-mon[77138]: pgmap v1536: 305 pgs: 305 active+clean; 275 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 204 op/s
Nov 29 07:54:46 compute-2 nova_compute[232428]: 2025-11-29 07:54:46.262 232432 DEBUG nova.storage.rbd_utils [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] removing snapshot(9dee3a0d6ca34b5fad2e3329ed13cccd) on rbd image(9554d48d-d298-43f9-a68d-c8f52fe2cc33_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:54:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:46.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 07:54:47 compute-2 nova_compute[232428]: 2025-11-29 07:54:47.292 232432 DEBUG nova.storage.rbd_utils [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] creating snapshot(snap) on rbd image(2c100ba6-aad9-4f8a-a5b0-c35f5675fe20) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:47 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 07:54:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 07:54:48 compute-2 ceph-mon[77138]: osdmap e178: 3 total, 3 up, 3 in
Nov 29 07:54:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:48.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:49 compute-2 nova_compute[232428]: 2025-11-29 07:54:49.134 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:49.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:49 compute-2 podman[253787]: 2025-11-29 07:54:49.709328957 +0000 UTC m=+0.094142081 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:54:50 compute-2 ceph-mon[77138]: pgmap v1538: 305 pgs: 305 active+clean; 275 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 149 op/s
Nov 29 07:54:50 compute-2 ceph-mon[77138]: osdmap e179: 3 total, 3 up, 3 in
Nov 29 07:54:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:50.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:50 compute-2 nova_compute[232428]: 2025-11-29 07:54:50.953 232432 INFO nova.virt.libvirt.driver [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Snapshot image upload complete
Nov 29 07:54:50 compute-2 nova_compute[232428]: 2025-11-29 07:54:50.954 232432 INFO nova.compute.manager [None req-7f9b0eba-ffd5-475f-97a0-031f55cbc69e 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 10.11 seconds to snapshot the instance on the hypervisor.
Nov 29 07:54:51 compute-2 ceph-mon[77138]: pgmap v1540: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 311 op/s
Nov 29 07:54:51 compute-2 nova_compute[232428]: 2025-11-29 07:54:51.116 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:51.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:52.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:53.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:53 compute-2 podman[253809]: 2025-11-29 07:54:53.714273843 +0000 UTC m=+0.101391207 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:54:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:54 compute-2 nova_compute[232428]: 2025-11-29 07:54:54.138 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:54.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:54 compute-2 ceph-mon[77138]: pgmap v1541: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 359 op/s
Nov 29 07:54:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:55.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 07:54:55 compute-2 ceph-mon[77138]: pgmap v1542: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 232 op/s
Nov 29 07:54:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/59936033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:56 compute-2 nova_compute[232428]: 2025-11-29 07:54:56.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:57.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 07:54:58 compute-2 ceph-mon[77138]: pgmap v1543: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.5 MiB/s wr, 320 op/s
Nov 29 07:54:58 compute-2 ceph-mon[77138]: osdmap e180: 3 total, 3 up, 3 in
Nov 29 07:54:59 compute-2 nova_compute[232428]: 2025-11-29 07:54:59.705 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:54:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 07:54:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:54:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:59.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:54:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:54:59 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:54:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:00 compute-2 ceph-mon[77138]: pgmap v1545: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.3 MiB/s wr, 289 op/s
Nov 29 07:55:00 compute-2 ceph-mon[77138]: osdmap e181: 3 total, 3 up, 3 in
Nov 29 07:55:00 compute-2 sudo[253832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:00 compute-2 sudo[253832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:00 compute-2 sudo[253832]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:00 compute-2 sudo[253857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:00 compute-2 sudo[253857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:00 compute-2 sudo[253857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:01 compute-2 nova_compute[232428]: 2025-11-29 07:55:01.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:01 compute-2 ceph-mon[77138]: pgmap v1547: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 914 KiB/s rd, 6.2 KiB/s wr, 111 op/s
Nov 29 07:55:01 compute-2 podman[253883]: 2025-11-29 07:55:01.70191803 +0000 UTC m=+0.099112566 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 07:55:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 07:55:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:01 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:01 compute-2 anacron[34256]: Job `cron.weekly' started
Nov 29 07:55:01 compute-2 anacron[34256]: Job `cron.weekly' terminated
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.349 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.350 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.350 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.351 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.351 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.352 232432 INFO nova.compute.manager [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Terminating instance
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.353 232432 DEBUG nova.compute.manager [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:55:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 07:55:02 compute-2 kernel: tap02fd52e9-af (unregistering): left promiscuous mode
Nov 29 07:55:02 compute-2 NetworkManager[48993]: <info>  [1764402902.5851] device (tap02fd52e9-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:55:02 compute-2 ovn_controller[134375]: 2025-11-29T07:55:02Z|00167|binding|INFO|Releasing lport 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 from this chassis (sb_readonly=0)
Nov 29 07:55:02 compute-2 ovn_controller[134375]: 2025-11-29T07:55:02Z|00168|binding|INFO|Setting lport 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 down in Southbound
Nov 29 07:55:02 compute-2 ovn_controller[134375]: 2025-11-29T07:55:02Z|00169|binding|INFO|Removing iface tap02fd52e9-af ovn-installed in OVS
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.608 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f7:65 10.100.0.6'], port_security=['fa:16:3e:54:f7:65 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9554d48d-d298-43f9-a68d-c8f52fe2cc33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6ba183d92db4c6795dd0f44dc77fad4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ad50f6f-f9d0-428c-8a64-de2a144e5c7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaae3603-798f-4bda-b3f1-1c10a1927c46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=02fd52e9-af0d-4291-bff2-e68ba5bf7a70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.612 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 02fd52e9-af0d-4291-bff2-e68ba5bf7a70 in datapath 2fb8dc00-ab99-4b85-bc43-2d5f32594f21 unbound from our chassis
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.615 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fb8dc00-ab99-4b85-bc43-2d5f32594f21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.616 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2610be41-5dfb-459f-acb7-24afaa4dbc70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.617 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21 namespace which is not needed anymore
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.618 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 29 07:55:02 compute-2 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000028.scope: Consumed 15.363s CPU time.
Nov 29 07:55:02 compute-2 systemd-machined[194747]: Machine qemu-22-instance-00000028 terminated.
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [NOTICE]   (253078) : haproxy version is 2.8.14-c23fe91
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [NOTICE]   (253078) : path to executable is /usr/sbin/haproxy
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [WARNING]  (253078) : Exiting Master process...
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [WARNING]  (253078) : Exiting Master process...
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [ALERT]    (253078) : Current worker (253080) exited with code 143 (Terminated)
Nov 29 07:55:02 compute-2 neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21[253074]: [WARNING]  (253078) : All workers exited. Exiting... (0)
Nov 29 07:55:02 compute-2 systemd[1]: libpod-1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7.scope: Deactivated successfully.
Nov 29 07:55:02 compute-2 podman[253936]: 2025-11-29 07:55:02.801251057 +0000 UTC m=+0.055401718 container died 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.801 232432 INFO nova.virt.libvirt.driver [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Instance destroyed successfully.
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.802 232432 DEBUG nova.objects.instance [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lazy-loading 'resources' on Instance uuid 9554d48d-d298-43f9-a68d-c8f52fe2cc33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.817 232432 DEBUG nova.virt.libvirt.vif [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1246751040',display_name='tempest-ImagesOneServerTestJSON-server-1246751040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1246751040',id=40,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6ba183d92db4c6795dd0f44dc77fad4',ramdisk_id='',reservation_id='r-40qgay24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-2038682252',owner_user_name='tempest-ImagesOneServerTestJSON-2038682252-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:50Z,user_data=None,user_id='3905209925cd414980eac7c79bf04af2',uuid=9554d48d-d298-43f9-a68d-c8f52fe2cc33,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.819 232432 DEBUG nova.network.os_vif_util [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converting VIF {"id": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "address": "fa:16:3e:54:f7:65", "network": {"id": "2fb8dc00-ab99-4b85-bc43-2d5f32594f21", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1509005361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ba183d92db4c6795dd0f44dc77fad4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02fd52e9-af", "ovs_interfaceid": "02fd52e9-af0d-4291-bff2-e68ba5bf7a70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.821 232432 DEBUG nova.network.os_vif_util [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.821 232432 DEBUG os_vif [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.824 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.824 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02fd52e9-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.826 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.835 232432 INFO os_vif [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f7:65,bridge_name='br-int',has_traffic_filtering=True,id=02fd52e9-af0d-4291-bff2-e68ba5bf7a70,network=Network(2fb8dc00-ab99-4b85-bc43-2d5f32594f21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02fd52e9-af')
Nov 29 07:55:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7-userdata-shm.mount: Deactivated successfully.
Nov 29 07:55:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-04374a2c3275098374e86e2240d66da98f0657cdb29af16f0978de9e9a097777-merged.mount: Deactivated successfully.
Nov 29 07:55:02 compute-2 podman[253936]: 2025-11-29 07:55:02.855726743 +0000 UTC m=+0.109877404 container cleanup 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:02 compute-2 systemd[1]: libpod-conmon-1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7.scope: Deactivated successfully.
Nov 29 07:55:02 compute-2 podman[253991]: 2025-11-29 07:55:02.930378512 +0000 UTC m=+0.043104371 container remove 1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.937 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[302a196c-e925-41c8-b43e-6feb19b48806]: (4, ('Sat Nov 29 07:55:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21 (1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7)\n1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7\nSat Nov 29 07:55:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21 (1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7)\n1e3f2bfb428d7f27039c1f6f5311b3bbf888e7e38ecd5c63309659ca5c6fece7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.940 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e437c9a5-e544-4b34-bde8-954562ef9a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.942 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb8dc00-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:02 compute-2 kernel: tap2fb8dc00-a0: left promiscuous mode
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.947 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 nova_compute[232428]: 2025-11-29 07:55:02.959 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.962 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[40d852fc-0834-463b-94d6-9f6a5843555b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.977 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d80de1-75b1-43b7-a9cb-86ac0f2573fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.978 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aff85ad3-0067-4065-a33b-8f332b7d34b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.993 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[166e4ade-defb-4a4d-b5cf-60bfddacd813]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581814, 'reachable_time': 24056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254007, 'error': None, 'target': 'ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.996 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fb8dc00-ab99-4b85-bc43-2d5f32594f21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:55:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:02.996 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[5368b173-4574-4d5e-a526-a959517885a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:02 compute-2 systemd[1]: run-netns-ovnmeta\x2d2fb8dc00\x2dab99\x2d4b85\x2dbc43\x2d2d5f32594f21.mount: Deactivated successfully.
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.034 232432 DEBUG nova.compute.manager [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-unplugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.035 232432 DEBUG oslo_concurrency.lockutils [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.035 232432 DEBUG oslo_concurrency.lockutils [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.035 232432 DEBUG oslo_concurrency.lockutils [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.036 232432 DEBUG nova.compute.manager [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] No waiting events found dispatching network-vif-unplugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:55:03 compute-2 nova_compute[232428]: 2025-11-29 07:55:03.036 232432 DEBUG nova.compute.manager [req-69daa49b-460f-4a0f-ba25-0ae4449a25c3 req-88a45160-9868-438c-bded-096eb57b58d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-unplugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:03.301 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:03.301 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:03.301 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 07:55:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:03.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:03 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:03 compute-2 ceph-mon[77138]: pgmap v1548: 305 pgs: 305 active+clean; 245 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 948 KiB/s rd, 220 KiB/s wr, 134 op/s
Nov 29 07:55:03 compute-2 ceph-mon[77138]: osdmap e182: 3 total, 3 up, 3 in
Nov 29 07:55:04 compute-2 nova_compute[232428]: 2025-11-29 07:55:04.142 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.138 232432 DEBUG nova.compute.manager [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.139 232432 DEBUG oslo_concurrency.lockutils [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.139 232432 DEBUG oslo_concurrency.lockutils [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.140 232432 DEBUG oslo_concurrency.lockutils [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.140 232432 DEBUG nova.compute.manager [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] No waiting events found dispatching network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.141 232432 WARNING nova.compute.manager [req-864205c8-3a3b-42ed-bea1-f5ea008f5014 req-e0b84bfc-e547-4f45-852b-54d11a043870 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received unexpected event network-vif-plugged-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 for instance with vm_state active and task_state deleting.
Nov 29 07:55:05 compute-2 ceph-mon[77138]: pgmap v1550: 305 pgs: 305 active+clean; 245 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 219 KiB/s wr, 29 op/s
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.334 232432 INFO nova.virt.libvirt.driver [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Deleting instance files /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33_del
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.336 232432 INFO nova.virt.libvirt.driver [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Deletion of /var/lib/nova/instances/9554d48d-d298-43f9-a68d-c8f52fe2cc33_del complete
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.448 232432 INFO nova.compute.manager [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 3.09 seconds to destroy the instance on the hypervisor.
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.449 232432 DEBUG oslo.service.loopingcall [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.449 232432 DEBUG nova.compute.manager [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:55:05 compute-2 nova_compute[232428]: 2025-11-29 07:55:05.449 232432 DEBUG nova.network.neutron [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:55:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 07:55:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:05 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.328 232432 DEBUG nova.network.neutron [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.371 232432 INFO nova.compute.manager [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Took 1.92 seconds to deallocate network for instance.
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.454 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.455 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.484 232432 DEBUG nova.compute.manager [req-450f3762-2c18-4231-b69b-b99d055fd54e req-778c32f2-80e3-4821-a494-e9021ed30bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Received event network-vif-deleted-02fd52e9-af0d-4291-bff2-e68ba5bf7a70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.515 232432 DEBUG oslo_concurrency.processutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 07:55:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:07.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.829 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1292799651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.969 232432 DEBUG oslo_concurrency.processutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:07 compute-2 nova_compute[232428]: 2025-11-29 07:55:07.979 232432 DEBUG nova.compute.provider_tree [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:08 compute-2 nova_compute[232428]: 2025-11-29 07:55:08.001 232432 DEBUG nova.scheduler.client.report [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:08 compute-2 nova_compute[232428]: 2025-11-29 07:55:08.026 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:08 compute-2 nova_compute[232428]: 2025-11-29 07:55:08.050 232432 INFO nova.scheduler.client.report [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Deleted allocations for instance 9554d48d-d298-43f9-a68d-c8f52fe2cc33
Nov 29 07:55:08 compute-2 nova_compute[232428]: 2025-11-29 07:55:08.125 232432 DEBUG oslo_concurrency.lockutils [None req-a24aa130-0930-4f4d-8f20-270516ae833d 3905209925cd414980eac7c79bf04af2 e6ba183d92db4c6795dd0f44dc77fad4 - - default default] Lock "9554d48d-d298-43f9-a68d-c8f52fe2cc33" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:08 compute-2 ceph-mon[77138]: pgmap v1551: 305 pgs: 305 active+clean; 139 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 3.1 MiB/s wr, 169 op/s
Nov 29 07:55:08 compute-2 ceph-mon[77138]: osdmap e183: 3 total, 3 up, 3 in
Nov 29 07:55:09 compute-2 nova_compute[232428]: 2025-11-29 07:55:09.144 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:09.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:10 compute-2 ceph-mon[77138]: pgmap v1553: 305 pgs: 305 active+clean; 139 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 596 KiB/s rd, 3.1 MiB/s wr, 163 op/s
Nov 29 07:55:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1292799651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2007226899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:11 compute-2 ceph-mon[77138]: pgmap v1554: 305 pgs: 305 active+clean; 138 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 3.0 MiB/s wr, 165 op/s
Nov 29 07:55:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:11.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:12 compute-2 nova_compute[232428]: 2025-11-29 07:55:12.834 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:14 compute-2 nova_compute[232428]: 2025-11-29 07:55:14.147 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1870973595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:15 compute-2 nova_compute[232428]: 2025-11-29 07:55:15.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:15.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:16 compute-2 ceph-mon[77138]: pgmap v1555: 305 pgs: 305 active+clean; 149 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 581 KiB/s rd, 3.0 MiB/s wr, 153 op/s
Nov 29 07:55:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3910136033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:16 compute-2 ceph-mon[77138]: pgmap v1556: 305 pgs: 305 active+clean; 149 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 2.7 MiB/s wr, 140 op/s
Nov 29 07:55:17 compute-2 ceph-mon[77138]: pgmap v1557: 305 pgs: 305 active+clean; 187 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 29 07:55:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:55:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:17.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:55:17 compute-2 nova_compute[232428]: 2025-11-29 07:55:17.799 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402902.7985463, 9554d48d-d298-43f9-a68d-c8f52fe2cc33 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:17 compute-2 nova_compute[232428]: 2025-11-29 07:55:17.800 232432 INFO nova.compute.manager [-] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] VM Stopped (Lifecycle Event)
Nov 29 07:55:17 compute-2 nova_compute[232428]: 2025-11-29 07:55:17.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:17 compute-2 nova_compute[232428]: 2025-11-29 07:55:17.847 232432 DEBUG nova.compute.manager [None req-941634ba-73a8-422a-957d-0edac684513c - - - - - -] [instance: 9554d48d-d298-43f9-a68d-c8f52fe2cc33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:19 compute-2 nova_compute[232428]: 2025-11-29 07:55:19.150 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:20.221 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:20.222 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:55:20 compute-2 nova_compute[232428]: 2025-11-29 07:55:20.232 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:20 compute-2 sudo[254040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:20 compute-2 sudo[254040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:20 compute-2 sudo[254040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:20 compute-2 sudo[254068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:20 compute-2 sudo[254068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:20 compute-2 sudo[254068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:20 compute-2 podman[254064]: 2025-11-29 07:55:20.492379721 +0000 UTC m=+0.094156031 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 07:55:20 compute-2 ceph-mon[77138]: pgmap v1558: 305 pgs: 305 active+clean; 187 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Nov 29 07:55:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:55:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:21.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:55:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:22 compute-2 ceph-mon[77138]: pgmap v1559: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 1.9 MiB/s wr, 53 op/s
Nov 29 07:55:22 compute-2 nova_compute[232428]: 2025-11-29 07:55:22.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:23 compute-2 sudo[254112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:23 compute-2 sudo[254112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.050 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "d2a81205-724b-4164-8572-beb503ccc1bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.051 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.075 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:55:23 compute-2 sudo[254137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:23 compute-2 sudo[254137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254137]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.157 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.157 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.174 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.175 232432 INFO nova.compute.claims [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:55:23 compute-2 sudo[254162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:23 compute-2 sudo[254162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254162]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 sudo[254188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.292 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:23 compute-2 sudo[254188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 ceph-mon[77138]: pgmap v1560: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 07:55:23 compute-2 sudo[254188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:23.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1506096212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:23 compute-2 sudo[254253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:23 compute-2 sudo[254253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254253]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.761 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.768 232432 DEBUG nova.compute.provider_tree [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:23 compute-2 podman[254277]: 2025-11-29 07:55:23.826855219 +0000 UTC m=+0.064853483 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:23 compute-2 sudo[254295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:23 compute-2 sudo[254295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254295]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.908 232432 DEBUG nova.scheduler.client.report [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:23 compute-2 sudo[254325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:23 compute-2 sudo[254325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 sudo[254325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.942 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.943 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:55:23 compute-2 sudo[254350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:55:23 compute-2 sudo[254350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.991 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:55:23 compute-2 nova_compute[232428]: 2025-11-29 07:55:23.992 232432 DEBUG nova.network.neutron [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.013 232432 INFO nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.033 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.123 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.125 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.125 232432 INFO nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Creating image(s)
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.149 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.171 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.193 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.197 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:24.225 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.276 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.278 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.278 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.278 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.300 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.304 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d2a81205-724b-4164-8572-beb503ccc1bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:24 compute-2 sudo[254350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.435 232432 DEBUG nova.network.neutron [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:55:24 compute-2 nova_compute[232428]: 2025-11-29 07:55:24.436 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:55:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.066 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d2a81205-724b-4164-8572-beb503ccc1bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.762s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:55:25 compute-2 ceph-mon[77138]: pgmap v1561: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1506096212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.139 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] resizing rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.246 232432 DEBUG nova.objects.instance [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'migration_context' on Instance uuid d2a81205-724b-4164-8572-beb503ccc1bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.265 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.266 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Ensure instance console log exists: /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.267 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.267 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.268 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.271 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.278 232432 WARNING nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.284 232432 DEBUG nova.virt.libvirt.host [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.285 232432 DEBUG nova.virt.libvirt.host [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.288 232432 DEBUG nova.virt.libvirt.host [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.289 232432 DEBUG nova.virt.libvirt.host [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.291 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.292 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.293 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.293 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.294 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.294 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.295 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.295 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.296 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.298 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.298 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.299 232432 DEBUG nova.virt.hardware [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.303 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.379 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.380 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.431 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.541 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.542 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.547 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.547 232432 INFO nova.compute.claims [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.672 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:25.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:25.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1245324304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.778 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.815 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:25 compute-2 nova_compute[232428]: 2025-11-29 07:55:25.822 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3063409247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.134 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.142 232432 DEBUG nova.compute.provider_tree [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.172 232432 DEBUG nova.scheduler.client.report [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.197 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.199 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.246 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.247 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.270 232432 INFO nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.291 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:55:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3301477670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.347 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.349 232432 DEBUG nova.objects.instance [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'pci_devices' on Instance uuid d2a81205-724b-4164-8572-beb503ccc1bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.388 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <uuid>d2a81205-724b-4164-8572-beb503ccc1bb</uuid>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <name>instance-0000002c</name>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1476557971</nova:name>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:55:25</nova:creationTime>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:user uuid="db06e8f865ef4c7fbacd588b0c473e37">tempest-ServersAdminNegativeTestJSON-1455232210-project-member</nova:user>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <nova:project uuid="4a3681cf294441768c28547476705844">tempest-ServersAdminNegativeTestJSON-1455232210</nova:project>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <system>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="serial">d2a81205-724b-4164-8572-beb503ccc1bb</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="uuid">d2a81205-724b-4164-8572-beb503ccc1bb</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </system>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <os>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </os>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <features>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </features>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d2a81205-724b-4164-8572-beb503ccc1bb_disk">
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </source>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d2a81205-724b-4164-8572-beb503ccc1bb_disk.config">
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </source>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:55:26 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/console.log" append="off"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <video>
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </video>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:55:26 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:55:26 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:55:26 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:55:26 compute-2 nova_compute[232428]: </domain>
Nov 29 07:55:26 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.400 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.401 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.402 232432 INFO nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Creating image(s)
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.428 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.452 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.479 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.482 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.522 232432 DEBUG nova.policy [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f4bf9b09ffd4b6ebd538fb75ec8125b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e666e8b3cc744c97a39b55c135d7769f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.565 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.566 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.567 232432 INFO nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Using config drive
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.599 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.606 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.606 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.607 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.607 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.633 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.637 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a0b17694-217b-4e21-ba29-56995925b299_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.913 232432 INFO nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Creating config drive at /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config
Nov 29 07:55:26 compute-2 nova_compute[232428]: 2025-11-29 07:55:26.921 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9eyq41uj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:55:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:55:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:55:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1245324304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.068 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9eyq41uj" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.102 232432 DEBUG nova.storage.rbd_utils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image d2a81205-724b-4164-8572-beb503ccc1bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.107 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config d2a81205-724b-4164-8572-beb503ccc1bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.233 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.447 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Successfully created port: 770720ed-c9ad-4ee3-be57-1b6fa775a39f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:55:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:27 compute-2 nova_compute[232428]: 2025-11-29 07:55:27.845 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.004 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a0b17694-217b-4e21-ba29-56995925b299_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3698670164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.089 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] resizing rbd image a0b17694-217b-4e21-ba29-56995925b299_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:55:28 compute-2 ceph-mon[77138]: pgmap v1562: 305 pgs: 305 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Nov 29 07:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3063409247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3301477670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4256619282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4256619282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3698670164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.436 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Successfully updated port: 770720ed-c9ad-4ee3-be57-1b6fa775a39f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.449 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.449 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquired lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.449 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.521 232432 DEBUG nova.compute.manager [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.521 232432 DEBUG nova.compute.manager [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing instance network info cache due to event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.521 232432 DEBUG oslo_concurrency.lockutils [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:28 compute-2 nova_compute[232428]: 2025-11-29 07:55:28.587 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.154 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.782 232432 DEBUG nova.network.neutron [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updating instance_info_cache with network_info: [{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.807 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Releasing lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.807 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Instance network_info: |[{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.807 232432 DEBUG oslo_concurrency.lockutils [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:29 compute-2 nova_compute[232428]: 2025-11-29 07:55:29.808 232432 DEBUG nova.network.neutron [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.586 232432 DEBUG nova.objects.instance [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lazy-loading 'migration_context' on Instance uuid a0b17694-217b-4e21-ba29-56995925b299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.602 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.603 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Ensure instance console log exists: /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.603 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.603 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.604 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.605 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Start _get_guest_xml network_info=[{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.610 232432 WARNING nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.613 232432 DEBUG nova.virt.libvirt.host [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.614 232432 DEBUG nova.virt.libvirt.host [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.618 232432 DEBUG nova.virt.libvirt.host [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.618 232432 DEBUG nova.virt.libvirt.host [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.619 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.619 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.620 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.620 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.620 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.620 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.621 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.621 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.621 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.621 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.622 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.622 232432 DEBUG nova.virt.hardware [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:55:30 compute-2 nova_compute[232428]: 2025-11-29 07:55:30.625 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:31 compute-2 nova_compute[232428]: 2025-11-29 07:55:31.017 232432 DEBUG nova.network.neutron [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updated VIF entry in instance network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:31 compute-2 nova_compute[232428]: 2025-11-29 07:55:31.018 232432 DEBUG nova.network.neutron [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updating instance_info_cache with network_info: [{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/698610573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:31 compute-2 nova_compute[232428]: 2025-11-29 07:55:31.053 232432 DEBUG oslo_concurrency.lockutils [req-ec2a3692-eced-4c01-a53f-90257147d4f2 req-6f6c876b-80d0-44ea-91fb-5c887f71adc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:31 compute-2 nova_compute[232428]: 2025-11-29 07:55:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:31.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:31.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:32 compute-2 nova_compute[232428]: 2025-11-29 07:55:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:32 compute-2 ceph-mon[77138]: pgmap v1563: 305 pgs: 305 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 98 op/s
Nov 29 07:55:32 compute-2 podman[254905]: 2025-11-29 07:55:32.737784555 +0000 UTC m=+0.137058735 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 07:55:32 compute-2 nova_compute[232428]: 2025-11-29 07:55:32.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.232 232432 DEBUG oslo_concurrency.processutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config d2a81205-724b-4164-8572-beb503ccc1bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.232 232432 INFO nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Deleting local config drive /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb/disk.config because it was imported into RBD.
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.265 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.303 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:33 compute-2 nova_compute[232428]: 2025-11-29 07:55:33.309 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:33 compute-2 systemd-machined[194747]: New machine qemu-23-instance-0000002c.
Nov 29 07:55:33 compute-2 systemd[1]: Started Virtual Machine qemu-23-instance-0000002c.
Nov 29 07:55:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1653877150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:33.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:34 compute-2 ceph-mon[77138]: pgmap v1564: 305 pgs: 305 active+clean; 247 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 07:55:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/698610573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:34 compute-2 ceph-mon[77138]: pgmap v1565: 305 pgs: 305 active+clean; 280 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 29 07:55:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.742 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.744 232432 DEBUG nova.virt.libvirt.vif [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-188342549',id=45,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e666e8b3cc744c97a39b55c135d7769f',ramdisk_id='',reservation_id='r-atgnm5yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:26Z,user_data=None,user_id='8f4bf9b09ffd4b6ebd538fb75ec8125b',uuid=a0b17694-217b-4e21-ba29-56995925b299,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.745 232432 DEBUG nova.network.os_vif_util [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converting VIF {"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.749 232432 DEBUG nova.network.os_vif_util [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.751 232432 DEBUG nova.objects.instance [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b17694-217b-4e21-ba29-56995925b299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.771 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <uuid>a0b17694-217b-4e21-ba29-56995925b299</uuid>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <name>instance-0000002d</name>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491</nova:name>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:55:30</nova:creationTime>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:user uuid="8f4bf9b09ffd4b6ebd538fb75ec8125b">tempest-FloatingIPsAssociationNegativeTestJSON-1266738983-project-member</nova:user>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:project uuid="e666e8b3cc744c97a39b55c135d7769f">tempest-FloatingIPsAssociationNegativeTestJSON-1266738983</nova:project>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <nova:port uuid="770720ed-c9ad-4ee3-be57-1b6fa775a39f">
Nov 29 07:55:34 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <system>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="serial">a0b17694-217b-4e21-ba29-56995925b299</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="uuid">a0b17694-217b-4e21-ba29-56995925b299</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </system>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <os>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </os>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <features>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </features>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a0b17694-217b-4e21-ba29-56995925b299_disk">
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </source>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a0b17694-217b-4e21-ba29-56995925b299_disk.config">
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </source>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:55:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:0b:87:cb"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <target dev="tap770720ed-c9"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/console.log" append="off"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <video>
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </video>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:55:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:55:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:55:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:55:34 compute-2 nova_compute[232428]: </domain>
Nov 29 07:55:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.773 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Preparing to wait for external event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.774 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.774 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.774 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.775 232432 DEBUG nova.virt.libvirt.vif [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-188342549',id=45,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e666e8b3cc744c97a39b55c135d7769f',ramdisk_id='',reservation_id='r-atgnm5yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:26Z,user_data=None,user_id='8f4bf9b09ffd4b6ebd538fb75ec8125b',uuid=a0b17694-217b-4e21-ba29-56995925b299,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.776 232432 DEBUG nova.network.os_vif_util [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converting VIF {"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.776 232432 DEBUG nova.network.os_vif_util [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.777 232432 DEBUG os_vif [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.778 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.778 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.783 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap770720ed-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.784 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap770720ed-c9, col_values=(('external_ids', {'iface-id': '770720ed-c9ad-4ee3-be57-1b6fa775a39f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:87:cb', 'vm-uuid': 'a0b17694-217b-4e21-ba29-56995925b299'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 NetworkManager[48993]: <info>  [1764402934.7904] manager: (tap770720ed-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.805 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.806 232432 INFO os_vif [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9')
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.873 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.874 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.874 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] No VIF found with MAC fa:16:3e:0b:87:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.874 232432 INFO nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Using config drive
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.899 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.905 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402934.8750076, d2a81205-724b-4164-8572-beb503ccc1bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.906 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] VM Resumed (Lifecycle Event)
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.907 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.907 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.914 232432 INFO nova.virt.libvirt.driver [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance spawned successfully.
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.914 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.943 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.949 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.950 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.950 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.951 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.951 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.952 232432 DEBUG nova.virt.libvirt.driver [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.956 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.995 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.995 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402934.8771124, d2a81205-724b-4164-8572-beb503ccc1bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:34 compute-2 nova_compute[232428]: 2025-11-29 07:55:34.996 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] VM Started (Lifecycle Event)
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.093 232432 INFO nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Took 10.97 seconds to spawn the instance on the hypervisor.
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.093 232432 DEBUG nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.101 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.108 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.138 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.269 232432 INFO nova.compute.manager [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Took 12.15 seconds to build instance.
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.289 232432 INFO nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Creating config drive at /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.294 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdc96rarq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.344 232432 DEBUG oslo_concurrency.lockutils [None req-eafe1aa3-ef2d-495c-ac3c-51063dc351b9 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.454 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdc96rarq" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.486 232432 DEBUG nova.storage.rbd_utils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] rbd image a0b17694-217b-4e21-ba29-56995925b299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:35 compute-2 nova_compute[232428]: 2025-11-29 07:55:35.490 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config a0b17694-217b-4e21-ba29-56995925b299_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:35.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:35.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2649230686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:35 compute-2 ceph-mon[77138]: pgmap v1566: 305 pgs: 305 active+clean; 280 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 123 op/s
Nov 29 07:55:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1653877150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1978654288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1266717774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.247 232432 DEBUG nova.objects.instance [None req-86bdb19d-dbc9-4375-8511-10813dcf50fd dbf4589425d2433e8f05169c124fc996 ad7fad31d5864eb0887205db329f479a - - default default] Lazy-loading 'pci_devices' on Instance uuid d2a81205-724b-4164-8572-beb503ccc1bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.272 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402936.27196, d2a81205-724b-4164-8572-beb503ccc1bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.272 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] VM Paused (Lifecycle Event)
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.310 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.315 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:36 compute-2 nova_compute[232428]: 2025-11-29 07:55:36.346 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.282 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.283 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.283 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.283 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.284 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1822659885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.698 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/48570772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:37.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:37.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:37 compute-2 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 29 07:55:37 compute-2 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002c.scope: Consumed 2.003s CPU time.
Nov 29 07:55:37 compute-2 systemd-machined[194747]: Machine qemu-23-instance-0000002c terminated.
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.827 232432 DEBUG nova.compute.manager [None req-86bdb19d-dbc9-4375-8511-10813dcf50fd dbf4589425d2433e8f05169c124fc996 ad7fad31d5864eb0887205db329f479a - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.967 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.968 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.974 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:37 compute-2 nova_compute[232428]: 2025-11-29 07:55:37.974 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.153 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.154 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4611MB free_disk=20.880298614501953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.155 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.155 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.273 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance d2a81205-724b-4164-8572-beb503ccc1bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.273 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance a0b17694-217b-4e21-ba29-56995925b299 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.274 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.274 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.345 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696250781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.804 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.811 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.828 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.875 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:55:38 compute-2 nova_compute[232428]: 2025-11-29 07:55:38.875 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:39 compute-2 nova_compute[232428]: 2025-11-29 07:55:39.186 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:39 compute-2 nova_compute[232428]: 2025-11-29 07:55:39.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:40 compute-2 sudo[255144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:40 compute-2 sudo[255144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:40 compute-2 sudo[255144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:40 compute-2 sudo[255169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:40 compute-2 sudo[255169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:40 compute-2 sudo[255169]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.402 232432 DEBUG oslo_concurrency.processutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config a0b17694-217b-4e21-ba29-56995925b299_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.913s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.404 232432 INFO nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Deleting local config drive /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299/disk.config because it was imported into RBD.
Nov 29 07:55:41 compute-2 kernel: tap770720ed-c9: entered promiscuous mode
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.4904] manager: (tap770720ed-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 07:55:41 compute-2 ovn_controller[134375]: 2025-11-29T07:55:41Z|00170|binding|INFO|Claiming lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f for this chassis.
Nov 29 07:55:41 compute-2 ovn_controller[134375]: 2025-11-29T07:55:41Z|00171|binding|INFO|770720ed-c9ad-4ee3-be57-1b6fa775a39f: Claiming fa:16:3e:0b:87:cb 10.100.0.14
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.518 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:87:cb 10.100.0.14'], port_security=['fa:16:3e:0b:87:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0b17694-217b-4e21-ba29-56995925b299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6bac91f-f859-4ea6-a636-ede05a76826c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e666e8b3cc744c97a39b55c135d7769f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9877e00-3b5a-4d8a-80db-c26b2680f1ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6f0ab5-e647-4517-afc3-f74f1cb434b1, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=770720ed-c9ad-4ee3-be57-1b6fa775a39f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.521 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 770720ed-c9ad-4ee3-be57-1b6fa775a39f in datapath b6bac91f-f859-4ea6-a636-ede05a76826c bound to our chassis
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.525 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6bac91f-f859-4ea6-a636-ede05a76826c
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.523 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "d2a81205-724b-4164-8572-beb503ccc1bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.524 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.524 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "d2a81205-724b-4164-8572-beb503ccc1bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.524 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.525 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.530 232432 INFO nova.compute.manager [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Terminating instance
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.532 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "refresh_cache-d2a81205-724b-4164-8572-beb503ccc1bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.532 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquired lock "refresh_cache-d2a81205-724b-4164-8572-beb503ccc1bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.532 232432 DEBUG nova.network.neutron [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:55:41 compute-2 systemd-machined[194747]: New machine qemu-24-instance-0000002d.
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.547 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43b2a97b-9e50-469b-8ce1-9f51be1c737c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.548 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6bac91f-f1 in ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.551 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6bac91f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.551 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0212f6a4-449a-4963-b88f-811b120a20e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.553 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5377486e-9c4a-4f66-a0fe-62b7f261877b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 systemd[1]: Started Virtual Machine qemu-24-instance-0000002d.
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.574 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e25073-4dfb-4d9c-85df-793a5cb397bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_controller[134375]: 2025-11-29T07:55:41Z|00172|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f ovn-installed in OVS
Nov 29 07:55:41 compute-2 ovn_controller[134375]: 2025-11-29T07:55:41Z|00173|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f up in Southbound
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 systemd-udevd[255210]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.599 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1751c980-c7aa-4d88-950a-43fd8d325ba7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.6131] device (tap770720ed-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.6143] device (tap770720ed-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.638 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e221b4-9388-4c7e-b184-5836ca2bb190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.6455] manager: (tapb6bac91f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 07:55:41 compute-2 systemd-udevd[255216]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.644 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8658256d-3223-4f22-b29a-f77ff0dab523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.679 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fdeca4d5-4ab6-4633-86db-f69fa86553ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.682 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[38e4d795-6b3a-4cc8-912e-fd6bebfea719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.7115] device (tapb6bac91f-f0): carrier: link connected
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.718 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c3aa49-e077-48fd-859b-89db92c73c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.744 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0b090d34-52d5-4de9-8305-f1a4aa154789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6bac91f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590797, 'reachable_time': 44089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255241, 'error': None, 'target': 'ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:41.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.764 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1bec135b-683b-4182-9930-9391765f24a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:b66e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590797, 'tstamp': 590797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255242, 'error': None, 'target': 'ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:55:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:41.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.789 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[99d21d16-e122-4deb-8ddd-a854bfecd60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6bac91f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590797, 'reachable_time': 44089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255243, 'error': None, 'target': 'ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.832 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa253de8-ae09-485d-867a-d08b6f65d31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.906 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5efecc-5c47-47c1-8c4c-5c841d4f9c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.907 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6bac91f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.908 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.908 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6bac91f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 NetworkManager[48993]: <info>  [1764402941.9119] manager: (tapb6bac91f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 07:55:41 compute-2 kernel: tapb6bac91f-f0: entered promiscuous mode
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.914 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.916 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6bac91f-f0, col_values=(('external_ids', {'iface-id': '7308c084-59ae-4fee-8ae3-c004448ab3d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 ovn_controller[134375]: 2025-11-29T07:55:41Z|00174|binding|INFO|Releasing lport 7308c084-59ae-4fee-8ae3-c004448ab3d1 from this chassis (sb_readonly=0)
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.920 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.921 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6bac91f-f859-4ea6-a636-ede05a76826c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6bac91f-f859-4ea6-a636-ede05a76826c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.922 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7b353a81-5c70-4038-8a35-9d5daf45831f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.924 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-b6bac91f-f859-4ea6-a636-ede05a76826c
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/b6bac91f-f859-4ea6-a636-ede05a76826c.pid.haproxy
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID b6bac91f-f859-4ea6-a636-ede05a76826c
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:55:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:55:41.925 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c', 'env', 'PROCESS_TAG=haproxy-b6bac91f-f859-4ea6-a636-ede05a76826c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6bac91f-f859-4ea6-a636-ede05a76826c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:55:41 compute-2 nova_compute[232428]: 2025-11-29 07:55:41.939 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.025 232432 DEBUG nova.network.neutron [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.199 232432 DEBUG nova.compute.manager [req-4357ab0b-c135-4a4a-aa13-91cba87abd44 req-964d03ab-4b17-4cd8-a78b-adc11479b5ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.200 232432 DEBUG oslo_concurrency.lockutils [req-4357ab0b-c135-4a4a-aa13-91cba87abd44 req-964d03ab-4b17-4cd8-a78b-adc11479b5ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.200 232432 DEBUG oslo_concurrency.lockutils [req-4357ab0b-c135-4a4a-aa13-91cba87abd44 req-964d03ab-4b17-4cd8-a78b-adc11479b5ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.200 232432 DEBUG oslo_concurrency.lockutils [req-4357ab0b-c135-4a4a-aa13-91cba87abd44 req-964d03ab-4b17-4cd8-a78b-adc11479b5ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.201 232432 DEBUG nova.compute.manager [req-4357ab0b-c135-4a4a-aa13-91cba87abd44 req-964d03ab-4b17-4cd8-a78b-adc11479b5ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Processing event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.315 232432 DEBUG nova.network.neutron [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.350 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Releasing lock "refresh_cache-d2a81205-724b-4164-8572-beb503ccc1bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.351 232432 DEBUG nova.compute.manager [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.359 232432 INFO nova.virt.libvirt.driver [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance destroyed successfully.
Nov 29 07:55:42 compute-2 nova_compute[232428]: 2025-11-29 07:55:42.360 232432 DEBUG nova.objects.instance [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'resources' on Instance uuid d2a81205-724b-4164-8572-beb503ccc1bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:42 compute-2 podman[255293]: 2025-11-29 07:55:42.364173328 +0000 UTC m=+0.064760780 container create d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:55:42 compute-2 systemd[1]: Started libpod-conmon-d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0.scope.
Nov 29 07:55:42 compute-2 podman[255293]: 2025-11-29 07:55:42.332364651 +0000 UTC m=+0.032952113 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:55:42 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:55:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcac42ea9ab8fcbd15e6aa8438f8e24325daa9341ee638a7a2fbc5a60da62dc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:42 compute-2 podman[255293]: 2025-11-29 07:55:42.457696348 +0000 UTC m=+0.158283800 container init d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:55:42 compute-2 podman[255293]: 2025-11-29 07:55:42.469952772 +0000 UTC m=+0.170540204 container start d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:55:42 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [NOTICE]   (255330) : New worker (255332) forked
Nov 29 07:55:42 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [NOTICE]   (255330) : Loading success.
Nov 29 07:55:42 compute-2 ceph-mon[77138]: pgmap v1567: 305 pgs: 305 active+clean; 296 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 150 op/s
Nov 29 07:55:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1822659885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.634 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.635 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402943.633545, a0b17694-217b-4e21-ba29-56995925b299 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.636 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] VM Started (Lifecycle Event)
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.640 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.645 232432 INFO nova.virt.libvirt.driver [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] Instance spawned successfully.
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.646 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.661 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.667 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.672 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.672 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.673 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.673 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.674 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.674 232432 DEBUG nova.virt.libvirt.driver [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.702 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.702 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402943.6338303, a0b17694-217b-4e21-ba29-56995925b299 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.702 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] VM Paused (Lifecycle Event)
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.742 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.745 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764402943.638419, a0b17694-217b-4e21-ba29-56995925b299 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.745 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] VM Resumed (Lifecycle Event)
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.754 232432 INFO nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Took 17.35 seconds to spawn the instance on the hypervisor.
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.755 232432 DEBUG nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.763 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:43.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.766 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:43.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.788 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.849 232432 INFO nova.compute.manager [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Took 18.33 seconds to build instance.
Nov 29 07:55:43 compute-2 nova_compute[232428]: 2025-11-29 07:55:43.991 232432 DEBUG oslo_concurrency.lockutils [None req-a68071b4-ef88-46ac-9652-ea21ad347629 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.196 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:44 compute-2 ceph-mon[77138]: pgmap v1568: 305 pgs: 305 active+clean; 296 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 3.1 MiB/s wr, 61 op/s
Nov 29 07:55:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2696250781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:44 compute-2 ceph-mon[77138]: pgmap v1569: 305 pgs: 305 active+clean; 320 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 4.6 MiB/s wr, 88 op/s
Nov 29 07:55:44 compute-2 ceph-mon[77138]: pgmap v1570: 305 pgs: 305 active+clean; 348 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 158 op/s
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.519 232432 DEBUG nova.compute.manager [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.520 232432 DEBUG oslo_concurrency.lockutils [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.521 232432 DEBUG oslo_concurrency.lockutils [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.521 232432 DEBUG oslo_concurrency.lockutils [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.522 232432 DEBUG nova.compute.manager [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.522 232432 WARNING nova.compute.manager [req-d70aaec1-dd16-4395-94eb-3a4ff222af42 req-5d5ba9a4-a559-44cd-bda6-6f024656eef5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state active and task_state None.
Nov 29 07:55:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:44 compute-2 nova_compute[232428]: 2025-11-29 07:55:44.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:45 compute-2 ceph-mon[77138]: pgmap v1571: 305 pgs: 305 active+clean; 348 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Nov 29 07:55:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2395800221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2440601012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.404 232432 INFO nova.virt.libvirt.driver [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Deleting instance files /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb_del
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.405 232432 INFO nova.virt.libvirt.driver [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Deletion of /var/lib/nova/instances/d2a81205-724b-4164-8572-beb503ccc1bb_del complete
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.523 232432 INFO nova.compute.manager [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Took 3.17 seconds to destroy the instance on the hypervisor.
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.524 232432 DEBUG oslo.service.loopingcall [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.524 232432 DEBUG nova.compute.manager [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.525 232432 DEBUG nova.network.neutron [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.689 232432 DEBUG nova.network.neutron [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.706 232432 DEBUG nova.network.neutron [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.730 232432 INFO nova.compute.manager [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Took 0.21 seconds to deallocate network for instance.
Nov 29 07:55:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:45.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:45.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.776 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.779 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:45 compute-2 nova_compute[232428]: 2025-11-29 07:55:45.867 232432 DEBUG oslo_concurrency.processutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:46 compute-2 sudo[255388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:46 compute-2 sudo[255388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:46 compute-2 sudo[255388]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:46 compute-2 sudo[255413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:55:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1167763174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:55:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:55:46 compute-2 sudo[255413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:46 compute-2 sudo[255413]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1114347489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.328 232432 DEBUG oslo_concurrency.processutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.339 232432 DEBUG nova.compute.provider_tree [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.361 232432 DEBUG nova.scheduler.client.report [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.395 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.432 232432 INFO nova.scheduler.client.report [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Deleted allocations for instance d2a81205-724b-4164-8572-beb503ccc1bb
Nov 29 07:55:46 compute-2 nova_compute[232428]: 2025-11-29 07:55:46.515 232432 DEBUG oslo_concurrency.lockutils [None req-b6ee662e-3c08-4018-bf5a-d7dc8da56d54 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "d2a81205-724b-4164-8572-beb503ccc1bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:47.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:47.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:47 compute-2 ceph-mon[77138]: pgmap v1572: 305 pgs: 305 active+clean; 276 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 242 op/s
Nov 29 07:55:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1114347489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:48 compute-2 ceph-mon[77138]: pgmap v1573: 305 pgs: 305 active+clean; 276 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Nov 29 07:55:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1755307174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:49 compute-2 nova_compute[232428]: 2025-11-29 07:55:49.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:49 compute-2 nova_compute[232428]: 2025-11-29 07:55:49.419 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:49 compute-2 NetworkManager[48993]: <info>  [1764402949.4205] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 07:55:49 compute-2 NetworkManager[48993]: <info>  [1764402949.4228] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 07:55:49 compute-2 nova_compute[232428]: 2025-11-29 07:55:49.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:49 compute-2 ovn_controller[134375]: 2025-11-29T07:55:49Z|00175|binding|INFO|Releasing lport 7308c084-59ae-4fee-8ae3-c004448ab3d1 from this chassis (sb_readonly=0)
Nov 29 07:55:49 compute-2 nova_compute[232428]: 2025-11-29 07:55:49.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:49 compute-2 nova_compute[232428]: 2025-11-29 07:55:49.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:50 compute-2 podman[255443]: 2025-11-29 07:55:50.681809874 +0000 UTC m=+0.073513564 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 07:55:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 07:55:50 compute-2 ceph-mon[77138]: pgmap v1574: 305 pgs: 305 active+clean; 205 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 287 op/s
Nov 29 07:55:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3146170637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:51 compute-2 nova_compute[232428]: 2025-11-29 07:55:51.189 232432 DEBUG nova.compute.manager [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:51 compute-2 nova_compute[232428]: 2025-11-29 07:55:51.190 232432 DEBUG nova.compute.manager [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing instance network info cache due to event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:51 compute-2 nova_compute[232428]: 2025-11-29 07:55:51.191 232432 DEBUG oslo_concurrency.lockutils [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:51 compute-2 nova_compute[232428]: 2025-11-29 07:55:51.191 232432 DEBUG oslo_concurrency.lockutils [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:51 compute-2 nova_compute[232428]: 2025-11-29 07:55:51.191 232432 DEBUG nova.network.neutron [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:55:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:51.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:51 compute-2 ceph-mon[77138]: osdmap e184: 3 total, 3 up, 3 in
Nov 29 07:55:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532480653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532480653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.770 232432 DEBUG nova.network.neutron [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updated VIF entry in instance network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.770 232432 DEBUG nova.network.neutron [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updating instance_info_cache with network_info: [{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.798 232432 DEBUG oslo_concurrency.lockutils [req-ce1a52d9-f5ed-47b3-bf51-d80651f7467f req-14c2902c-0fda-4aed-be68-c43a7453792c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.829 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402937.8287196, d2a81205-724b-4164-8572-beb503ccc1bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.830 232432 INFO nova.compute.manager [-] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] VM Stopped (Lifecycle Event)
Nov 29 07:55:52 compute-2 nova_compute[232428]: 2025-11-29 07:55:52.848 232432 DEBUG nova.compute.manager [None req-233b1ef4-e145-4d37-ad10-878959284097 - - - - - -] [instance: d2a81205-724b-4164-8572-beb503ccc1bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:53 compute-2 ceph-mon[77138]: pgmap v1576: 305 pgs: 305 active+clean; 155 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 128 KiB/s wr, 289 op/s
Nov 29 07:55:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/532480653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/532480653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:53.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:53.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:54 compute-2 nova_compute[232428]: 2025-11-29 07:55:54.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:54 compute-2 podman[255466]: 2025-11-29 07:55:54.685380648 +0000 UTC m=+0.081212555 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 07:55:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:54 compute-2 nova_compute[232428]: 2025-11-29 07:55:54.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:55 compute-2 ceph-mon[77138]: pgmap v1577: 305 pgs: 305 active+clean; 147 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 129 KiB/s wr, 290 op/s
Nov 29 07:55:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:55.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:57 compute-2 ovn_controller[134375]: 2025-11-29T07:55:57Z|00176|binding|INFO|Releasing lport 7308c084-59ae-4fee-8ae3-c004448ab3d1 from this chassis (sb_readonly=0)
Nov 29 07:55:57 compute-2 nova_compute[232428]: 2025-11-29 07:55:57.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:57 compute-2 ceph-mon[77138]: pgmap v1578: 305 pgs: 305 active+clean; 134 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 61 KiB/s wr, 222 op/s
Nov 29 07:55:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:55:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:57.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:55:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:58 compute-2 ovn_controller[134375]: 2025-11-29T07:55:58Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:87:cb 10.100.0.14
Nov 29 07:55:58 compute-2 ovn_controller[134375]: 2025-11-29T07:55:58Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:87:cb 10.100.0.14
Nov 29 07:55:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 07:55:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:59 compute-2 ceph-mon[77138]: pgmap v1579: 305 pgs: 305 active+clean; 134 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 61 KiB/s wr, 222 op/s
Nov 29 07:55:59 compute-2 ceph-mon[77138]: osdmap e185: 3 total, 3 up, 3 in
Nov 29 07:55:59 compute-2 ceph-mon[77138]: osdmap e186: 3 total, 3 up, 3 in
Nov 29 07:55:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1535849423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:55:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:59.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.795 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:55:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:55:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:59.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.935 232432 DEBUG nova.compute.manager [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.936 232432 DEBUG nova.compute.manager [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing instance network info cache due to event network-changed-770720ed-c9ad-4ee3-be57-1b6fa775a39f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.937 232432 DEBUG oslo_concurrency.lockutils [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.937 232432 DEBUG oslo_concurrency.lockutils [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:59 compute-2 nova_compute[232428]: 2025-11-29 07:55:59.938 232432 DEBUG nova.network.neutron [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Refreshing network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:00 compute-2 sudo[255489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:00 compute-2 sudo[255489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:00 compute-2 sudo[255489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:00 compute-2 sudo[255514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:00 compute-2 sudo[255514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:00 compute-2 sudo[255514]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.252 232432 DEBUG nova.network.neutron [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updated VIF entry in instance network info cache for port 770720ed-c9ad-4ee3-be57-1b6fa775a39f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.253 232432 DEBUG nova.network.neutron [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updating instance_info_cache with network_info: [{"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.270 232432 DEBUG oslo_concurrency.lockutils [req-ee809128-7b20-4846-81b1-1ce2e45d9c36 req-6583149b-680d-49a6-8a76-029bd7d2a64e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a0b17694-217b-4e21-ba29-56995925b299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:01.429 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.430 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:01.431 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:56:01 compute-2 ovn_controller[134375]: 2025-11-29T07:56:01Z|00177|binding|INFO|Releasing lport 7308c084-59ae-4fee-8ae3-c004448ab3d1 from this chassis (sb_readonly=0)
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:01 compute-2 ovn_controller[134375]: 2025-11-29T07:56:01Z|00178|binding|INFO|Releasing lport 7308c084-59ae-4fee-8ae3-c004448ab3d1 from this chassis (sb_readonly=0)
Nov 29 07:56:01 compute-2 nova_compute[232428]: 2025-11-29 07:56:01.721 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 07:56:03 compute-2 ceph-mon[77138]: pgmap v1582: 305 pgs: 305 active+clean; 147 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 1.3 MiB/s wr, 126 op/s
Nov 29 07:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:03.302 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:03.302 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:03.303 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:03.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:03 compute-2 podman[255542]: 2025-11-29 07:56:03.796766233 +0000 UTC m=+0.188636671 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:56:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:03.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:04 compute-2 nova_compute[232428]: 2025-11-29 07:56:04.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:04 compute-2 ceph-mon[77138]: pgmap v1583: 305 pgs: 305 active+clean; 174 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Nov 29 07:56:04 compute-2 ceph-mon[77138]: osdmap e187: 3 total, 3 up, 3 in
Nov 29 07:56:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 07:56:04 compute-2 nova_compute[232428]: 2025-11-29 07:56:04.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:05 compute-2 ceph-mon[77138]: pgmap v1585: 305 pgs: 305 active+clean; 188 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 199 op/s
Nov 29 07:56:05 compute-2 ceph-mon[77138]: osdmap e188: 3 total, 3 up, 3 in
Nov 29 07:56:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/232760435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3015177557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 07:56:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:07 compute-2 ceph-mon[77138]: pgmap v1587: 305 pgs: 305 active+clean; 273 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 11 MiB/s wr, 251 op/s
Nov 29 07:56:07 compute-2 ceph-mon[77138]: osdmap e189: 3 total, 3 up, 3 in
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.684 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.685 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.685 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.685 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.686 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.687 232432 INFO nova.compute.manager [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Terminating instance
Nov 29 07:56:07 compute-2 nova_compute[232428]: 2025-11-29 07:56:07.688 232432 DEBUG nova.compute.manager [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:56:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:07.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:08 compute-2 kernel: tap770720ed-c9 (unregistering): left promiscuous mode
Nov 29 07:56:08 compute-2 NetworkManager[48993]: <info>  [1764402968.5490] device (tap770720ed-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.557 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00179|binding|INFO|Releasing lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f from this chassis (sb_readonly=0)
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00180|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f down in Southbound
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00181|binding|INFO|Removing iface tap770720ed-c9 ovn-installed in OVS
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.565 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:87:cb 10.100.0.14'], port_security=['fa:16:3e:0b:87:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0b17694-217b-4e21-ba29-56995925b299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6bac91f-f859-4ea6-a636-ede05a76826c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e666e8b3cc744c97a39b55c135d7769f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9877e00-3b5a-4d8a-80db-c26b2680f1ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6f0ab5-e647-4517-afc3-f74f1cb434b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=770720ed-c9ad-4ee3-be57-1b6fa775a39f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.567 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 770720ed-c9ad-4ee3-be57-1b6fa775a39f in datapath b6bac91f-f859-4ea6-a636-ede05a76826c unbound from our chassis
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.568 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6bac91f-f859-4ea6-a636-ede05a76826c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.569 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aba218d9-524b-4a67-9c57-5d2f886de20c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.570 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c namespace which is not needed anymore
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.583 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 29 07:56:08 compute-2 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Consumed 14.693s CPU time.
Nov 29 07:56:08 compute-2 systemd-machined[194747]: Machine qemu-24-instance-0000002d terminated.
Nov 29 07:56:08 compute-2 kernel: tap770720ed-c9: entered promiscuous mode
Nov 29 07:56:08 compute-2 NetworkManager[48993]: <info>  [1764402968.7084] manager: (tap770720ed-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 07:56:08 compute-2 kernel: tap770720ed-c9 (unregistering): left promiscuous mode
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00182|binding|INFO|Claiming lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f for this chassis.
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.710 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00183|binding|INFO|770720ed-c9ad-4ee3-be57-1b6fa775a39f: Claiming fa:16:3e:0b:87:cb 10.100.0.14
Nov 29 07:56:08 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [NOTICE]   (255330) : haproxy version is 2.8.14-c23fe91
Nov 29 07:56:08 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [NOTICE]   (255330) : path to executable is /usr/sbin/haproxy
Nov 29 07:56:08 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [WARNING]  (255330) : Exiting Master process...
Nov 29 07:56:08 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [ALERT]    (255330) : Current worker (255332) exited with code 143 (Terminated)
Nov 29 07:56:08 compute-2 neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c[255323]: [WARNING]  (255330) : All workers exited. Exiting... (0)
Nov 29 07:56:08 compute-2 systemd[1]: libpod-d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0.scope: Deactivated successfully.
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.734 232432 INFO nova.virt.libvirt.driver [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] Instance destroyed successfully.
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.734 232432 DEBUG nova.objects.instance [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lazy-loading 'resources' on Instance uuid a0b17694-217b-4e21-ba29-56995925b299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:08 compute-2 podman[255594]: 2025-11-29 07:56:08.735958802 +0000 UTC m=+0.054706486 container died d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00184|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f ovn-installed in OVS
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00185|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f up in Southbound
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00186|binding|INFO|Releasing lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f from this chassis (sb_readonly=1)
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.737 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00187|if_status|INFO|Dropped 1 log messages in last 240 seconds (most recently, 240 seconds ago) due to excessive rate
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00188|if_status|INFO|Not setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f down as sb is readonly
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00189|binding|INFO|Removing iface tap770720ed-c9 ovn-installed in OVS
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.739 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:87:cb 10.100.0.14'], port_security=['fa:16:3e:0b:87:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0b17694-217b-4e21-ba29-56995925b299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6bac91f-f859-4ea6-a636-ede05a76826c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e666e8b3cc744c97a39b55c135d7769f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9877e00-3b5a-4d8a-80db-c26b2680f1ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6f0ab5-e647-4517-afc3-f74f1cb434b1, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=770720ed-c9ad-4ee3-be57-1b6fa775a39f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00190|binding|INFO|Releasing lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f from this chassis (sb_readonly=0)
Nov 29 07:56:08 compute-2 ovn_controller[134375]: 2025-11-29T07:56:08Z|00191|binding|INFO|Setting lport 770720ed-c9ad-4ee3-be57-1b6fa775a39f down in Southbound
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.752 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:87:cb 10.100.0.14'], port_security=['fa:16:3e:0b:87:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0b17694-217b-4e21-ba29-56995925b299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6bac91f-f859-4ea6-a636-ede05a76826c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e666e8b3cc744c97a39b55c135d7769f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9877e00-3b5a-4d8a-80db-c26b2680f1ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6f0ab5-e647-4517-afc3-f74f1cb434b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=770720ed-c9ad-4ee3-be57-1b6fa775a39f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:08 compute-2 ceph-mon[77138]: pgmap v1589: 305 pgs: 305 active+clean; 273 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 8.7 MiB/s wr, 176 op/s
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.763 232432 DEBUG nova.virt.libvirt.vif [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1883425491',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-188342549',id=45,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e666e8b3cc744c97a39b55c135d7769f',ramdisk_id='',reservation_id='r-atgnm5yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1266738983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:43Z,user_data=None,user_id='8f4bf9b09ffd4b6ebd538fb75ec8125b',uuid=a0b17694-217b-4e21-ba29-56995925b299,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.765 232432 DEBUG nova.network.os_vif_util [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converting VIF {"id": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "address": "fa:16:3e:0b:87:cb", "network": {"id": "b6bac91f-f859-4ea6-a636-ede05a76826c", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1466834154-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e666e8b3cc744c97a39b55c135d7769f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap770720ed-c9", "ovs_interfaceid": "770720ed-c9ad-4ee3-be57-1b6fa775a39f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:08 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0-userdata-shm.mount: Deactivated successfully.
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.767 232432 DEBUG nova.network.os_vif_util [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.767 232432 DEBUG os_vif [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:56:08 compute-2 systemd[1]: var-lib-containers-storage-overlay-fcac42ea9ab8fcbd15e6aa8438f8e24325daa9341ee638a7a2fbc5a60da62dc3-merged.mount: Deactivated successfully.
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.769 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap770720ed-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.772 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.775 232432 INFO os_vif [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:87:cb,bridge_name='br-int',has_traffic_filtering=True,id=770720ed-c9ad-4ee3-be57-1b6fa775a39f,network=Network(b6bac91f-f859-4ea6-a636-ede05a76826c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap770720ed-c9')
Nov 29 07:56:08 compute-2 podman[255594]: 2025-11-29 07:56:08.77865579 +0000 UTC m=+0.097403484 container cleanup d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:56:08 compute-2 systemd[1]: libpod-conmon-d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0.scope: Deactivated successfully.
Nov 29 07:56:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 07:56:08 compute-2 podman[255641]: 2025-11-29 07:56:08.849620273 +0000 UTC m=+0.045989752 container remove d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.855 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4cabf6-722b-43ff-85cd-d18e891fff36]: (4, ('Sat Nov 29 07:56:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c (d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0)\nd80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0\nSat Nov 29 07:56:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c (d80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0)\nd80f63b39adc42ba65623bac15bb252e5f670fef92e68f838ea6a404ca16c4d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.858 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9bbbfb-b96b-4543-96db-7b8f0a906e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.859 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6bac91f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 kernel: tapb6bac91f-f0: left promiscuous mode
Nov 29 07:56:08 compute-2 nova_compute[232428]: 2025-11-29 07:56:08.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.879 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4879734a-137b-47db-9c5d-cd8dea3f3915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.890 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6c6706-c754-48ef-b53b-777caac4333c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.892 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a16873fc-ad01-4681-a165-3794da27ed46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.916 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf5aa62-38be-43e7-b23c-4d79628cc778]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590789, 'reachable_time': 19526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255658, 'error': None, 'target': 'ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.921 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6bac91f-f859-4ea6-a636-ede05a76826c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.921 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[e90b1469-a523-4002-bea8-c08424390acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.923 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 770720ed-c9ad-4ee3-be57-1b6fa775a39f in datapath b6bac91f-f859-4ea6-a636-ede05a76826c unbound from our chassis
Nov 29 07:56:08 compute-2 systemd[1]: run-netns-ovnmeta\x2db6bac91f\x2df859\x2d4ea6\x2da636\x2dede05a76826c.mount: Deactivated successfully.
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.924 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6bac91f-f859-4ea6-a636-ede05a76826c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.925 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5964252f-bea1-4586-9a37-92ce90b81f86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.926 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 770720ed-c9ad-4ee3-be57-1b6fa775a39f in datapath b6bac91f-f859-4ea6-a636-ede05a76826c unbound from our chassis
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.927 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6bac91f-f859-4ea6-a636-ede05a76826c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:56:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:08.928 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[27edecf3-a15f-404b-ac3d-a6aecd8db267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.206 232432 DEBUG nova.compute.manager [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.207 232432 DEBUG oslo_concurrency.lockutils [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.207 232432 DEBUG oslo_concurrency.lockutils [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.207 232432 DEBUG oslo_concurrency.lockutils [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.207 232432 DEBUG nova.compute.manager [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.207 232432 DEBUG nova.compute.manager [req-325ad801-1c0f-4e27-a57a-3acb2f78abad req-69ccf7e2-a32d-4178-b2e9-a3b42cf1b2b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.212 232432 INFO nova.virt.libvirt.driver [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Deleting instance files /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299_del
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.212 232432 INFO nova.virt.libvirt.driver [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Deletion of /var/lib/nova/instances/a0b17694-217b-4e21-ba29-56995925b299_del complete
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.287 232432 INFO nova.compute.manager [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Took 1.60 seconds to destroy the instance on the hypervisor.
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.288 232432 DEBUG oslo.service.loopingcall [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.288 232432 DEBUG nova.compute.manager [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:56:09 compute-2 nova_compute[232428]: 2025-11-29 07:56:09.289 232432 DEBUG nova.network.neutron [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:56:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:09.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:09 compute-2 ceph-mon[77138]: osdmap e190: 3 total, 3 up, 3 in
Nov 29 07:56:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:10.432 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:10 compute-2 nova_compute[232428]: 2025-11-29 07:56:10.607 232432 DEBUG nova.network.neutron [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:10 compute-2 nova_compute[232428]: 2025-11-29 07:56:10.633 232432 INFO nova.compute.manager [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] Took 1.34 seconds to deallocate network for instance.
Nov 29 07:56:10 compute-2 nova_compute[232428]: 2025-11-29 07:56:10.679 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:10 compute-2 nova_compute[232428]: 2025-11-29 07:56:10.680 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:10 compute-2 nova_compute[232428]: 2025-11-29 07:56:10.737 232432 DEBUG oslo_concurrency.processutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:10 compute-2 ceph-mon[77138]: pgmap v1591: 305 pgs: 305 active+clean; 281 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 9.6 MiB/s wr, 247 op/s
Nov 29 07:56:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1496196141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.180 232432 DEBUG oslo_concurrency.processutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.188 232432 DEBUG nova.compute.provider_tree [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.205 232432 DEBUG nova.scheduler.client.report [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.229 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.260 232432 INFO nova.scheduler.client.report [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Deleted allocations for instance a0b17694-217b-4e21-ba29-56995925b299
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.330 232432 DEBUG oslo_concurrency.lockutils [None req-45058902-4315-4d48-90e2-40318a19eea5 8f4bf9b09ffd4b6ebd538fb75ec8125b e666e8b3cc744c97a39b55c135d7769f - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.520 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.520 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.520 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.521 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.521 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.521 232432 WARNING nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state deleted and task_state None.
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.522 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.522 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.522 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.522 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.523 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.523 232432 WARNING nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state deleted and task_state None.
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.523 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.523 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.524 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.524 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.524 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.524 232432 WARNING nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state deleted and task_state None.
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.525 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.525 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.525 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.525 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.526 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.526 232432 WARNING nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-unplugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state deleted and task_state None.
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.526 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.526 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a0b17694-217b-4e21-ba29-56995925b299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.527 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.527 232432 DEBUG oslo_concurrency.lockutils [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a0b17694-217b-4e21-ba29-56995925b299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.527 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] No waiting events found dispatching network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.527 232432 WARNING nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received unexpected event network-vif-plugged-770720ed-c9ad-4ee3-be57-1b6fa775a39f for instance with vm_state deleted and task_state None.
Nov 29 07:56:11 compute-2 nova_compute[232428]: 2025-11-29 07:56:11.528 232432 DEBUG nova.compute.manager [req-874ee8ec-8e47-4008-8977-8dc879e9a323 req-e4facdc9-8b10-410e-a5dc-992c95af7ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a0b17694-217b-4e21-ba29-56995925b299] Received event network-vif-deleted-770720ed-c9ad-4ee3-be57-1b6fa775a39f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:11.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1496196141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:13 compute-2 ceph-mon[77138]: pgmap v1592: 305 pgs: 305 active+clean; 232 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.5 MiB/s wr, 228 op/s
Nov 29 07:56:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1247941767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:13 compute-2 nova_compute[232428]: 2025-11-29 07:56:13.773 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:13.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:13.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 07:56:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3924029452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:14 compute-2 ceph-mon[77138]: osdmap e191: 3 total, 3 up, 3 in
Nov 29 07:56:14 compute-2 nova_compute[232428]: 2025-11-29 07:56:14.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:15 compute-2 nova_compute[232428]: 2025-11-29 07:56:15.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:15 compute-2 ceph-mon[77138]: pgmap v1593: 305 pgs: 305 active+clean; 201 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 183 op/s
Nov 29 07:56:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:15.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:17 compute-2 ceph-mon[77138]: pgmap v1595: 305 pgs: 305 active+clean; 45 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.3 MiB/s wr, 330 op/s
Nov 29 07:56:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:17.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:18 compute-2 nova_compute[232428]: 2025-11-29 07:56:18.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:19 compute-2 nova_compute[232428]: 2025-11-29 07:56:19.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:19 compute-2 nova_compute[232428]: 2025-11-29 07:56:19.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:19.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:20 compute-2 ceph-mon[77138]: pgmap v1596: 305 pgs: 305 active+clean; 45 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 560 KiB/s wr, 269 op/s
Nov 29 07:56:20 compute-2 sudo[255688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:20 compute-2 sudo[255688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:20 compute-2 sudo[255688]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:20 compute-2 podman[255712]: 2025-11-29 07:56:20.998015077 +0000 UTC m=+0.060974521 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:56:21 compute-2 sudo[255719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:21 compute-2 sudo[255719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:21 compute-2 sudo[255719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:21.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:21.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:22 compute-2 ceph-mon[77138]: pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 67 KiB/s wr, 204 op/s
Nov 29 07:56:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1982467621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:23 compute-2 ceph-mon[77138]: pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 38 KiB/s wr, 141 op/s
Nov 29 07:56:23 compute-2 nova_compute[232428]: 2025-11-29 07:56:23.732 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402968.7311172, a0b17694-217b-4e21-ba29-56995925b299 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:23 compute-2 nova_compute[232428]: 2025-11-29 07:56:23.733 232432 INFO nova.compute.manager [-] [instance: a0b17694-217b-4e21-ba29-56995925b299] VM Stopped (Lifecycle Event)
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.764369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983764448, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2549, "num_deletes": 259, "total_data_size": 5843232, "memory_usage": 5906768, "flush_reason": "Manual Compaction"}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 29 07:56:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:23.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983816598, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3805713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29353, "largest_seqno": 31897, "table_properties": {"data_size": 3795240, "index_size": 6711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22599, "raw_average_key_size": 21, "raw_value_size": 3773958, "raw_average_value_size": 3527, "num_data_blocks": 289, "num_entries": 1070, "num_filter_entries": 1070, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402783, "oldest_key_time": 1764402783, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 52298 microseconds, and 9437 cpu microseconds.
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:56:23 compute-2 nova_compute[232428]: 2025-11-29 07:56:23.817 232432 DEBUG nova.compute.manager [None req-6281c79d-bc5a-4b9d-aa87-669932b55e6e - - - - - -] [instance: a0b17694-217b-4e21-ba29-56995925b299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.816669) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3805713 bytes OK
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.816694) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.818227) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.818245) EVENT_LOG_v1 {"time_micros": 1764402983818240, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.818264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5831898, prev total WAL file size 5831898, number of live WAL files 2.
Nov 29 07:56:23 compute-2 nova_compute[232428]: 2025-11-29 07:56:23.818 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.820481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3716KB)], [57(7551KB)]
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983820557, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11538275, "oldest_snapshot_seqno": -1}
Nov 29 07:56:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6000 keys, 9552241 bytes, temperature: kUnknown
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983879824, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9552241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9512073, "index_size": 24058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 154309, "raw_average_key_size": 25, "raw_value_size": 9404055, "raw_average_value_size": 1567, "num_data_blocks": 966, "num_entries": 6000, "num_filter_entries": 6000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.880141) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9552241 bytes
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.882266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.3 rd, 160.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 6536, records dropped: 536 output_compression: NoCompression
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.882285) EVENT_LOG_v1 {"time_micros": 1764402983882274, "job": 34, "event": "compaction_finished", "compaction_time_micros": 59380, "compaction_time_cpu_micros": 22710, "output_level": 6, "num_output_files": 1, "total_output_size": 9552241, "num_input_records": 6536, "num_output_records": 6000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983883246, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983884627, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.820409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.884762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.884774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.884776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.884779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:23 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:56:23.884781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:56:24 compute-2 nova_compute[232428]: 2025-11-29 07:56:24.207 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:24 compute-2 nova_compute[232428]: 2025-11-29 07:56:24.295 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:24 compute-2 ceph-mon[77138]: pgmap v1599: 305 pgs: 305 active+clean; 45 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 217 KiB/s wr, 132 op/s
Nov 29 07:56:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1250436712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:25 compute-2 podman[255760]: 2025-11-29 07:56:25.679988978 +0000 UTC m=+0.077251782 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 07:56:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:25.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3315717158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1529461897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:26 compute-2 ceph-mon[77138]: pgmap v1600: 305 pgs: 305 active+clean; 88 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 29 07:56:27 compute-2 nova_compute[232428]: 2025-11-29 07:56:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:27.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:27.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3663190925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3663190925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-2 nova_compute[232428]: 2025-11-29 07:56:28.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:28 compute-2 nova_compute[232428]: 2025-11-29 07:56:28.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:29 compute-2 nova_compute[232428]: 2025-11-29 07:56:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:29 compute-2 nova_compute[232428]: 2025-11-29 07:56:29.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:56:29 compute-2 nova_compute[232428]: 2025-11-29 07:56:29.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:56:29 compute-2 ceph-mon[77138]: pgmap v1601: 305 pgs: 305 active+clean; 88 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 07:56:29 compute-2 nova_compute[232428]: 2025-11-29 07:56:29.223 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:56:29 compute-2 nova_compute[232428]: 2025-11-29 07:56:29.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:29.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:29.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3107489570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:31 compute-2 ceph-mon[77138]: pgmap v1602: 305 pgs: 305 active+clean; 111 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.8 MiB/s wr, 62 op/s
Nov 29 07:56:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4230568782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:31.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:32 compute-2 sshd-session[255784]: Invalid user sol from 45.148.10.240 port 40764
Nov 29 07:56:32 compute-2 sshd-session[255784]: Connection closed by invalid user sol 45.148.10.240 port 40764 [preauth]
Nov 29 07:56:32 compute-2 nova_compute[232428]: 2025-11-29 07:56:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:32 compute-2 nova_compute[232428]: 2025-11-29 07:56:32.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:33 compute-2 ceph-mon[77138]: pgmap v1603: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 07:56:33 compute-2 nova_compute[232428]: 2025-11-29 07:56:33.822 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:34 compute-2 nova_compute[232428]: 2025-11-29 07:56:34.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/58463716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:34 compute-2 podman[255787]: 2025-11-29 07:56:34.722295151 +0000 UTC m=+0.113523569 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:56:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:35 compute-2 nova_compute[232428]: 2025-11-29 07:56:35.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:35 compute-2 nova_compute[232428]: 2025-11-29 07:56:35.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:56:35 compute-2 ceph-mon[77138]: pgmap v1604: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 07:56:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2308935255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:35.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:36 compute-2 nova_compute[232428]: 2025-11-29 07:56:36.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 07:56:37 compute-2 ceph-mon[77138]: pgmap v1605: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 111 op/s
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.263 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.264 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.264 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.264 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.264 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2226435116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.704 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.926 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.928 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4653MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.929 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:37 compute-2 nova_compute[232428]: 2025-11-29 07:56:37.929 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 07:56:38 compute-2 ceph-mon[77138]: osdmap e192: 3 total, 3 up, 3 in
Nov 29 07:56:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2226435116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.064 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.064 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.079 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/438076833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.562 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.569 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.592 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.627 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.628 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:38 compute-2 nova_compute[232428]: 2025-11-29 07:56:38.824 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 07:56:39 compute-2 ceph-mon[77138]: pgmap v1607: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Nov 29 07:56:39 compute-2 ceph-mon[77138]: osdmap e193: 3 total, 3 up, 3 in
Nov 29 07:56:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/438076833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:39 compute-2 nova_compute[232428]: 2025-11-29 07:56:39.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:40 compute-2 nova_compute[232428]: 2025-11-29 07:56:40.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:40 compute-2 nova_compute[232428]: 2025-11-29 07:56:40.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:56:40 compute-2 nova_compute[232428]: 2025-11-29 07:56:40.220 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:56:40 compute-2 ceph-mon[77138]: osdmap e194: 3 total, 3 up, 3 in
Nov 29 07:56:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/294212724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:41 compute-2 sudo[255861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:41 compute-2 sudo[255861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:41 compute-2 sudo[255861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:41 compute-2 sudo[255886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:41 compute-2 sudo[255886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:41 compute-2 sudo[255886]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:41 compute-2 nova_compute[232428]: 2025-11-29 07:56:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:41 compute-2 nova_compute[232428]: 2025-11-29 07:56:41.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:56:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:41.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:41 compute-2 ceph-mon[77138]: pgmap v1610: 305 pgs: 305 active+clean; 143 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.1 MiB/s wr, 334 op/s
Nov 29 07:56:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4174121997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2157266730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 07:56:43 compute-2 ceph-mon[77138]: pgmap v1611: 305 pgs: 305 active+clean; 139 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.5 MiB/s wr, 311 op/s
Nov 29 07:56:43 compute-2 ceph-mon[77138]: osdmap e195: 3 total, 3 up, 3 in
Nov 29 07:56:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:56:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:43.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:56:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:43 compute-2 nova_compute[232428]: 2025-11-29 07:56:43.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:44 compute-2 nova_compute[232428]: 2025-11-29 07:56:44.474 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:45 compute-2 ceph-mon[77138]: pgmap v1613: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 130 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.6 MiB/s wr, 321 op/s
Nov 29 07:56:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:45.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:46 compute-2 sudo[255914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:46 compute-2 sudo[255914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:46 compute-2 sudo[255914]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:46 compute-2 sudo[255939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:46 compute-2 sudo[255939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:46 compute-2 sudo[255939]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:46 compute-2 sudo[255964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:46 compute-2 sudo[255964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:46 compute-2 sudo[255964]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:46 compute-2 sudo[255989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:56:46 compute-2 sudo[255989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:46 compute-2 nova_compute[232428]: 2025-11-29 07:56:46.904 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:46.904 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:46.906 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:56:46 compute-2 sudo[255989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:48 compute-2 ceph-mon[77138]: pgmap v1614: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Nov 29 07:56:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:56:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:56:48 compute-2 nova_compute[232428]: 2025-11-29 07:56:48.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 07:56:49 compute-2 nova_compute[232428]: 2025-11-29 07:56:49.476 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/481685325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:49 compute-2 ceph-mon[77138]: pgmap v1615: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 102 op/s
Nov 29 07:56:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:56:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:56:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:56:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:49 compute-2 nova_compute[232428]: 2025-11-29 07:56:49.591 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:49.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:50 compute-2 ovn_controller[134375]: 2025-11-29T07:56:50Z|00192|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 07:56:50 compute-2 ceph-mon[77138]: osdmap e196: 3 total, 3 up, 3 in
Nov 29 07:56:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:56:50.908 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:51 compute-2 podman[256049]: 2025-11-29 07:56:51.706999843 +0000 UTC m=+0.098147116 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:56:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:51.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:52 compute-2 ceph-mon[77138]: pgmap v1617: 305 pgs: 305 active+clean; 66 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 75 op/s
Nov 29 07:56:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 07:56:53 compute-2 ceph-mon[77138]: pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 3.8 KiB/s wr, 69 op/s
Nov 29 07:56:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/782884504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:56:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:53.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:56:53 compute-2 nova_compute[232428]: 2025-11-29 07:56:53.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 07:56:54 compute-2 ceph-mon[77138]: osdmap e197: 3 total, 3 up, 3 in
Nov 29 07:56:54 compute-2 ceph-mon[77138]: osdmap e198: 3 total, 3 up, 3 in
Nov 29 07:56:54 compute-2 nova_compute[232428]: 2025-11-29 07:56:54.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:55 compute-2 ceph-mon[77138]: pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.9 KiB/s wr, 50 op/s
Nov 29 07:56:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:55.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:55.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:56 compute-2 podman[256071]: 2025-11-29 07:56:56.665752415 +0000 UTC m=+0.068965001 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:56:56 compute-2 sudo[256092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:56 compute-2 sudo[256092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:56 compute-2 sudo[256092]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:57 compute-2 sudo[256117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:56:57 compute-2 sudo[256117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:57 compute-2 sudo[256117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:57.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:57.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:57 compute-2 ceph-mon[77138]: pgmap v1622: 305 pgs: 305 active+clean; 75 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Nov 29 07:56:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:56:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:56:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1559886720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:58 compute-2 nova_compute[232428]: 2025-11-29 07:56:58.947 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-2 nova_compute[232428]: 2025-11-29 07:56:59.821 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:59.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:56:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:56:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:56:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:59.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:00 compute-2 ceph-mon[77138]: pgmap v1623: 305 pgs: 305 active+clean; 75 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Nov 29 07:57:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2083138911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:01 compute-2 sudo[256145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:01 compute-2 sudo[256145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:01 compute-2 sudo[256145]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:01 compute-2 sudo[256170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:01 compute-2 sudo[256170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:01 compute-2 sudo[256170]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:01 compute-2 ceph-mon[77138]: pgmap v1624: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 59 op/s
Nov 29 07:57:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:01.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:03.303 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:03.303 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:03.303 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:03 compute-2 nova_compute[232428]: 2025-11-29 07:57:03.949 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:04 compute-2 ceph-mon[77138]: pgmap v1625: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Nov 29 07:57:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:04 compute-2 nova_compute[232428]: 2025-11-29 07:57:04.824 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:05 compute-2 podman[256197]: 2025-11-29 07:57:05.696061701 +0000 UTC m=+0.094105309 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 07:57:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:05.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:06 compute-2 ceph-mon[77138]: pgmap v1626: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 29 07:57:07 compute-2 ceph-mon[77138]: pgmap v1627: 305 pgs: 305 active+clean; 54 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 884 KiB/s rd, 1009 KiB/s wr, 89 op/s
Nov 29 07:57:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2915494390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:07.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:08 compute-2 nova_compute[232428]: 2025-11-29 07:57:08.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 07:57:09 compute-2 ceph-mon[77138]: pgmap v1628: 305 pgs: 305 active+clean; 54 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 843 KiB/s rd, 510 KiB/s wr, 72 op/s
Nov 29 07:57:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:09 compute-2 nova_compute[232428]: 2025-11-29 07:57:09.825 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:11.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:12 compute-2 ceph-mon[77138]: pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 843 KiB/s rd, 510 KiB/s wr, 72 op/s
Nov 29 07:57:13 compute-2 ceph-mon[77138]: pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 836 KiB/s rd, 14 KiB/s wr, 63 op/s
Nov 29 07:57:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 07:57:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:13 compute-2 nova_compute[232428]: 2025-11-29 07:57:13.954 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:14 compute-2 nova_compute[232428]: 2025-11-29 07:57:14.827 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:15 compute-2 ceph-mon[77138]: osdmap e199: 3 total, 3 up, 3 in
Nov 29 07:57:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:15.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:16 compute-2 ceph-mon[77138]: pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 906 KiB/s rd, 16 KiB/s wr, 75 op/s
Nov 29 07:57:17 compute-2 ceph-mon[77138]: pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:57:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:17.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:18 compute-2 nova_compute[232428]: 2025-11-29 07:57:18.955 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 07:57:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:19 compute-2 nova_compute[232428]: 2025-11-29 07:57:19.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:19.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:21 compute-2 ceph-mon[77138]: pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:57:21 compute-2 sudo[256231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:21 compute-2 sudo[256231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-2 sudo[256231]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:21 compute-2 sudo[256256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:21 compute-2 sudo[256256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-2 sudo[256256]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:21.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:22 compute-2 podman[256281]: 2025-11-29 07:57:22.660239314 +0000 UTC m=+0.055161729 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:57:22 compute-2 ceph-mon[77138]: osdmap e200: 3 total, 3 up, 3 in
Nov 29 07:57:22 compute-2 ceph-mon[77138]: pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Nov 29 07:57:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:23.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:23 compute-2 nova_compute[232428]: 2025-11-29 07:57:23.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:24 compute-2 nova_compute[232428]: 2025-11-29 07:57:24.795 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:24.795 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:24.799 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:57:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:24 compute-2 nova_compute[232428]: 2025-11-29 07:57:24.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-2 ceph-mon[77138]: pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 28 op/s
Nov 29 07:57:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 07:57:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:57:25.803 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:25.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:26 compute-2 ceph-mon[77138]: pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 23 op/s
Nov 29 07:57:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1380635584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:26 compute-2 ceph-mon[77138]: osdmap e201: 3 total, 3 up, 3 in
Nov 29 07:57:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 07:57:27 compute-2 nova_compute[232428]: 2025-11-29 07:57:27.224 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:27 compute-2 ceph-mon[77138]: pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 43 op/s
Nov 29 07:57:27 compute-2 ceph-mon[77138]: osdmap e202: 3 total, 3 up, 3 in
Nov 29 07:57:27 compute-2 podman[256306]: 2025-11-29 07:57:27.665480992 +0000 UTC m=+0.063228001 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:57:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:27.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:27.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2709245885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2709245885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:28 compute-2 nova_compute[232428]: 2025-11-29 07:57:28.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:29 compute-2 nova_compute[232428]: 2025-11-29 07:57:29.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 07:57:29 compute-2 ceph-mon[77138]: pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.6 KiB/s wr, 17 op/s
Nov 29 07:57:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:29 compute-2 nova_compute[232428]: 2025-11-29 07:57:29.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:30 compute-2 nova_compute[232428]: 2025-11-29 07:57:30.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:30 compute-2 nova_compute[232428]: 2025-11-29 07:57:30.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:57:30 compute-2 nova_compute[232428]: 2025-11-29 07:57:30.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:57:30 compute-2 nova_compute[232428]: 2025-11-29 07:57:30.308 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:57:30 compute-2 ceph-mon[77138]: osdmap e203: 3 total, 3 up, 3 in
Nov 29 07:57:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3804298917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1787394271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:31 compute-2 ceph-mon[77138]: pgmap v1644: 305 pgs: 305 active+clean; 68 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Nov 29 07:57:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:32 compute-2 nova_compute[232428]: 2025-11-29 07:57:32.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:33 compute-2 ceph-mon[77138]: pgmap v1645: 305 pgs: 305 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.4 MiB/s wr, 98 op/s
Nov 29 07:57:33 compute-2 nova_compute[232428]: 2025-11-29 07:57:33.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:33.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:33.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:33 compute-2 nova_compute[232428]: 2025-11-29 07:57:33.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2663283178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:34 compute-2 nova_compute[232428]: 2025-11-29 07:57:34.862 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:35 compute-2 ceph-mon[77138]: pgmap v1646: 305 pgs: 305 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.706 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a0ebd5be-6171-41dc-8014-4a7eee729935" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.706 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.740 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:57:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:35.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.945 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.946 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.955 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:57:35 compute-2 nova_compute[232428]: 2025-11-29 07:57:35.956 232432 INFO nova.compute.claims [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.166 232432 DEBUG nova.scheduler.client.report [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.184 232432 DEBUG nova.scheduler.client.report [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.184 232432 DEBUG nova.compute.provider_tree [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:57:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2192504686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.199 232432 DEBUG nova.scheduler.client.report [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.226 232432 DEBUG nova.scheduler.client.report [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.270 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:36 compute-2 podman[256349]: 2025-11-29 07:57:36.697118169 +0000 UTC m=+0.094769970 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:57:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2040827526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.726 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.734 232432 DEBUG nova.compute.provider_tree [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.827 232432 DEBUG nova.scheduler.client.report [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.849 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.850 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.911 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.911 232432 DEBUG nova.network.neutron [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.933 232432 INFO nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:57:36 compute-2 nova_compute[232428]: 2025-11-29 07:57:36.948 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.039 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.041 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.042 232432 INFO nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Creating image(s)
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.076 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.103 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.130 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.136 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.221 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.222 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.222 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:37 compute-2 ceph-mon[77138]: pgmap v1647: 305 pgs: 305 active+clean; 110 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 155 op/s
Nov 29 07:57:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/188750448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3443759431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2040827526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1285861010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/34004069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.223 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.257 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.262 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a0ebd5be-6171-41dc-8014-4a7eee729935_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.302 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.303 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.303 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.304 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.305 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.400 232432 DEBUG nova.network.neutron [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.401 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.684 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a0ebd5be-6171-41dc-8014-4a7eee729935_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.770 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] resizing rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:57:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/654530713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.816 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:37.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.994 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.995 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4622MB free_disk=20.960533142089844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.995 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:37 compute-2 nova_compute[232428]: 2025-11-29 07:57:37.996 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.130 232432 DEBUG nova.objects.instance [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'migration_context' on Instance uuid a0ebd5be-6171-41dc-8014-4a7eee729935 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.228 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.228 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Ensure instance console log exists: /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.229 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.229 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.229 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.231 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.237 232432 WARNING nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.262 232432 DEBUG nova.virt.libvirt.host [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.263 232432 DEBUG nova.virt.libvirt.host [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.267 232432 DEBUG nova.virt.libvirt.host [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.268 232432 DEBUG nova.virt.libvirt.host [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.269 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.270 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.270 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.270 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.270 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.271 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.272 232432 DEBUG nova.virt.hardware [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.275 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.309 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance a0ebd5be-6171-41dc-8014-4a7eee729935 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.310 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.310 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.369 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/654530713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1639132123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3691503490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1847159811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.816 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.864 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.872 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.912 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.923 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.950 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.980 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.981 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:38 compute-2 nova_compute[232428]: 2025-11-29 07:57:38.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3859746744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.363 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.368 232432 DEBUG nova.objects.instance [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0ebd5be-6171-41dc-8014-4a7eee729935 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.392 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <uuid>a0ebd5be-6171-41dc-8014-4a7eee729935</uuid>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <name>instance-00000035</name>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:name>tempest-ListImageFiltersTestJSON-server-61174115</nova:name>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:57:38</nova:creationTime>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:user uuid="c2aeea466c9049d3a023483ec2e5b4f6">tempest-ListImageFiltersTestJSON-667978844-project-member</nova:user>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <nova:project uuid="30d42ce85b6840c6942b24cf4a7b9d64">tempest-ListImageFiltersTestJSON-667978844</nova:project>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <system>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="serial">a0ebd5be-6171-41dc-8014-4a7eee729935</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="uuid">a0ebd5be-6171-41dc-8014-4a7eee729935</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </system>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <os>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </os>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <features>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </features>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a0ebd5be-6171-41dc-8014-4a7eee729935_disk">
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </source>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config">
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </source>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:57:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/console.log" append="off"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <video>
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </video>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:57:39 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:57:39 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:57:39 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:57:39 compute-2 nova_compute[232428]: </domain>
Nov 29 07:57:39 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.453 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.454 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.455 232432 INFO nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Using config drive
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.498 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.688 232432 INFO nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Creating config drive at /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.699 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0s2g_tf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.839 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0s2g_tf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.877 232432 DEBUG nova.storage.rbd_utils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.882 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:39 compute-2 nova_compute[232428]: 2025-11-29 07:57:39.982 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:40 compute-2 ceph-mon[77138]: pgmap v1648: 305 pgs: 305 active+clean; 110 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Nov 29 07:57:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3691503490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1847159811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3859746744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:40 compute-2 nova_compute[232428]: 2025-11-29 07:57:40.701 232432 DEBUG oslo_concurrency.processutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config a0ebd5be-6171-41dc-8014-4a7eee729935_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:40 compute-2 nova_compute[232428]: 2025-11-29 07:57:40.702 232432 INFO nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Deleting local config drive /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935/disk.config because it was imported into RBD.
Nov 29 07:57:40 compute-2 systemd-machined[194747]: New machine qemu-25-instance-00000035.
Nov 29 07:57:40 compute-2 systemd[1]: Started Virtual Machine qemu-25-instance-00000035.
Nov 29 07:57:41 compute-2 sudo[256725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:41 compute-2 sudo[256725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:41 compute-2 sudo[256725]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:41 compute-2 sudo[256750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:41 compute-2 sudo[256750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:41 compute-2 sudo[256750]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:41.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:42 compute-2 ceph-mon[77138]: pgmap v1649: 305 pgs: 305 active+clean; 169 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 198 op/s
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.242 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.244 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.246 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403062.241551, a0ebd5be-6171-41dc-8014-4a7eee729935 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.246 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] VM Resumed (Lifecycle Event)
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.256 232432 INFO nova.virt.libvirt.driver [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance spawned successfully.
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.256 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.274 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.283 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.286 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.286 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.287 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.287 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.287 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.288 232432 DEBUG nova.virt.libvirt.driver [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.326 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.327 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403062.2426798, a0ebd5be-6171-41dc-8014-4a7eee729935 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.327 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] VM Started (Lifecycle Event)
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.406 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.409 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.436 232432 INFO nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Took 5.40 seconds to spawn the instance on the hypervisor.
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.436 232432 DEBUG nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.438 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.611 232432 INFO nova.compute.manager [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Took 6.82 seconds to build instance.
Nov 29 07:57:42 compute-2 nova_compute[232428]: 2025-11-29 07:57:42.638 232432 DEBUG oslo_concurrency.lockutils [None req-523aecff-b961-4810-b44c-d0300e9a5bc7 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:43 compute-2 ceph-mon[77138]: pgmap v1650: 305 pgs: 305 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 205 op/s
Nov 29 07:57:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:43 compute-2 nova_compute[232428]: 2025-11-29 07:57:43.987 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:44 compute-2 ovn_controller[134375]: 2025-11-29T07:57:44Z|00193|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 07:57:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:44 compute-2 nova_compute[232428]: 2025-11-29 07:57:44.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:44 compute-2 ceph-mon[77138]: pgmap v1651: 305 pgs: 305 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 197 op/s
Nov 29 07:57:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 07:57:47 compute-2 ceph-mon[77138]: pgmap v1652: 305 pgs: 305 active+clean; 190 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 259 op/s
Nov 29 07:57:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:48 compute-2 ceph-mon[77138]: osdmap e204: 3 total, 3 up, 3 in
Nov 29 07:57:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1035584264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 07:57:49 compute-2 nova_compute[232428]: 2025-11-29 07:57:49.215 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:49 compute-2 ceph-mon[77138]: pgmap v1654: 305 pgs: 305 active+clean; 190 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 236 op/s
Nov 29 07:57:49 compute-2 ceph-mon[77138]: osdmap e205: 3 total, 3 up, 3 in
Nov 29 07:57:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 07:57:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:49 compute-2 nova_compute[232428]: 2025-11-29 07:57:49.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:49.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:49.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:50 compute-2 ceph-mon[77138]: osdmap e206: 3 total, 3 up, 3 in
Nov 29 07:57:51 compute-2 ceph-mon[77138]: pgmap v1657: 305 pgs: 305 active+clean; 259 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.9 MiB/s wr, 301 op/s
Nov 29 07:57:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3223520210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:53 compute-2 ceph-mon[77138]: pgmap v1658: 305 pgs: 305 active+clean; 302 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 9.1 MiB/s wr, 308 op/s
Nov 29 07:57:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4012813973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:53 compute-2 nova_compute[232428]: 2025-11-29 07:57:53.381 232432 DEBUG nova.compute.manager [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:53 compute-2 nova_compute[232428]: 2025-11-29 07:57:53.559 232432 INFO nova.compute.manager [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] instance snapshotting
Nov 29 07:57:53 compute-2 podman[256823]: 2025-11-29 07:57:53.667912386 +0000 UTC m=+0.058760683 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 07:57:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:53.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:57:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:53.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:57:54 compute-2 nova_compute[232428]: 2025-11-29 07:57:54.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:54 compute-2 nova_compute[232428]: 2025-11-29 07:57:54.592 232432 INFO nova.virt.libvirt.driver [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Beginning live snapshot process
Nov 29 07:57:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:54 compute-2 nova_compute[232428]: 2025-11-29 07:57:54.881 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:54 compute-2 nova_compute[232428]: 2025-11-29 07:57:54.887 232432 DEBUG nova.virt.libvirt.imagebackend [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 07:57:55 compute-2 ceph-mon[77138]: pgmap v1659: 305 pgs: 305 active+clean; 314 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 9.8 MiB/s wr, 338 op/s
Nov 29 07:57:55 compute-2 nova_compute[232428]: 2025-11-29 07:57:55.575 232432 DEBUG nova.storage.rbd_utils [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(a322560aca1c494199ea5cc0b59d1b22) on rbd image(a0ebd5be-6171-41dc-8014-4a7eee729935_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:57:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:55.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:55.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:57:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 07:57:57 compute-2 sudo[256894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:57 compute-2 sudo[256894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:57 compute-2 sudo[256894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:57 compute-2 sudo[256919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:57 compute-2 sudo[256919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:57 compute-2 sudo[256919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:57 compute-2 sudo[256944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:57 compute-2 sudo[256944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:57 compute-2 sudo[256944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:57 compute-2 sudo[256970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:57:57 compute-2 sudo[256970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:57 compute-2 ceph-mon[77138]: pgmap v1660: 305 pgs: 305 active+clean; 347 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 372 op/s
Nov 29 07:57:57 compute-2 ceph-mon[77138]: osdmap e207: 3 total, 3 up, 3 in
Nov 29 07:57:57 compute-2 nova_compute[232428]: 2025-11-29 07:57:57.597 232432 DEBUG nova.storage.rbd_utils [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] cloning vms/a0ebd5be-6171-41dc-8014-4a7eee729935_disk@a322560aca1c494199ea5cc0b59d1b22 to images/df78a6c7-19bb-4390-bc30-7251b87a9d47 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 07:57:57 compute-2 sudo[256970]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 07:57:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:57.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 07:57:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:57:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:57:58 compute-2 nova_compute[232428]: 2025-11-29 07:57:58.231 232432 DEBUG nova.storage.rbd_utils [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] flattening images/df78a6c7-19bb-4390-bc30-7251b87a9d47 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 07:57:58 compute-2 podman[257080]: 2025-11-29 07:57:58.709157904 +0000 UTC m=+0.088503215 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:57:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:59 compute-2 nova_compute[232428]: 2025-11-29 07:57:59.232 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:59 compute-2 nova_compute[232428]: 2025-11-29 07:57:59.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:57:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:57:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 07:58:00 compute-2 ceph-mon[77138]: pgmap v1662: 305 pgs: 305 active+clean; 347 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Nov 29 07:58:00 compute-2 nova_compute[232428]: 2025-11-29 07:58:00.431 232432 DEBUG nova.storage.rbd_utils [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] removing snapshot(a322560aca1c494199ea5cc0b59d1b22) on rbd image(a0ebd5be-6171-41dc-8014-4a7eee729935_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:58:01 compute-2 ceph-mon[77138]: pgmap v1663: 305 pgs: 305 active+clean; 350 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 241 op/s
Nov 29 07:58:01 compute-2 ceph-mon[77138]: osdmap e208: 3 total, 3 up, 3 in
Nov 29 07:58:01 compute-2 sudo[257122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:01 compute-2 sudo[257122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:01 compute-2 sudo[257122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:01.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:01 compute-2 sudo[257147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:01 compute-2 sudo[257147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:01 compute-2 sudo[257147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:02.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 07:58:02 compute-2 nova_compute[232428]: 2025-11-29 07:58:02.881 232432 DEBUG nova.storage.rbd_utils [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(snap) on rbd image(df78a6c7-19bb-4390-bc30-7251b87a9d47) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:03.303 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:03.305 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:03.305 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:03 compute-2 ceph-mon[77138]: pgmap v1665: 305 pgs: 305 active+clean; 388 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 7.7 MiB/s wr, 301 op/s
Nov 29 07:58:03 compute-2 ceph-mon[77138]: osdmap e209: 3 total, 3 up, 3 in
Nov 29 07:58:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 07:58:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:03.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:04.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:04 compute-2 nova_compute[232428]: 2025-11-29 07:58:04.259 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:04 compute-2 ceph-mon[77138]: osdmap e210: 3 total, 3 up, 3 in
Nov 29 07:58:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:04 compute-2 nova_compute[232428]: 2025-11-29 07:58:04.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:05.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:06.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:06 compute-2 nova_compute[232428]: 2025-11-29 07:58:06.184 232432 INFO nova.virt.libvirt.driver [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Snapshot image upload complete
Nov 29 07:58:06 compute-2 nova_compute[232428]: 2025-11-29 07:58:06.185 232432 INFO nova.compute.manager [None req-c515a89d-5d34-4af8-a53f-d39d561214bf c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Took 12.62 seconds to snapshot the instance on the hypervisor.
Nov 29 07:58:06 compute-2 ceph-mon[77138]: pgmap v1668: 305 pgs: 305 active+clean; 399 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.8 MiB/s wr, 338 op/s
Nov 29 07:58:07 compute-2 podman[257193]: 2025-11-29 07:58:07.715941744 +0000 UTC m=+0.109469881 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:58:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:07.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:09 compute-2 nova_compute[232428]: 2025-11-29 07:58:09.262 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:09 compute-2 ceph-mon[77138]: pgmap v1669: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 9.8 MiB/s wr, 325 op/s
Nov 29 07:58:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:09 compute-2 nova_compute[232428]: 2025-11-29 07:58:09.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:09.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:10.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:10 compute-2 sudo[257221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:10 compute-2 sudo[257221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-2 sudo[257221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:10 compute-2 sudo[257246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:58:10 compute-2 sudo[257246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-2 sudo[257246]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:10 compute-2 ceph-mon[77138]: pgmap v1670: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 165 op/s
Nov 29 07:58:10 compute-2 ceph-mon[77138]: osdmap e211: 3 total, 3 up, 3 in
Nov 29 07:58:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:58:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:58:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 07:58:11 compute-2 ceph-mon[77138]: pgmap v1672: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 111 op/s
Nov 29 07:58:11 compute-2 ceph-mon[77138]: osdmap e212: 3 total, 3 up, 3 in
Nov 29 07:58:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:11.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:12.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-2 ceph-mon[77138]: pgmap v1674: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 725 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Nov 29 07:58:13 compute-2 ceph-mon[77138]: osdmap e213: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:13.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 07:58:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:14.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:14 compute-2 nova_compute[232428]: 2025-11-29 07:58:14.264 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:14 compute-2 nova_compute[232428]: 2025-11-29 07:58:14.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 07:58:14 compute-2 ceph-mon[77138]: pgmap v1676: 305 pgs: 305 active+clean; 469 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 102 op/s
Nov 29 07:58:14 compute-2 ceph-mon[77138]: osdmap e214: 3 total, 3 up, 3 in
Nov 29 07:58:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 07:58:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:15 compute-2 ceph-mon[77138]: osdmap e215: 3 total, 3 up, 3 in
Nov 29 07:58:15 compute-2 ceph-mon[77138]: osdmap e216: 3 total, 3 up, 3 in
Nov 29 07:58:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:17 compute-2 ceph-mon[77138]: pgmap v1680: 305 pgs: 305 active+clean; 559 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 17 MiB/s rd, 16 MiB/s wr, 333 op/s
Nov 29 07:58:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:17.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:18 compute-2 nova_compute[232428]: 2025-11-29 07:58:18.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:18.918 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:18.920 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:58:19 compute-2 nova_compute[232428]: 2025-11-29 07:58:19.267 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:19 compute-2 ceph-mon[77138]: pgmap v1681: 305 pgs: 305 active+clean; 559 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 252 op/s
Nov 29 07:58:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:19 compute-2 nova_compute[232428]: 2025-11-29 07:58:19.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:19.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:21 compute-2 ceph-mon[77138]: pgmap v1682: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 8.1 MiB/s wr, 201 op/s
Nov 29 07:58:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:21.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:22 compute-2 sudo[257277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:22 compute-2 sudo[257277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:22 compute-2 sudo[257277]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:22 compute-2 sudo[257302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:22 compute-2 sudo[257302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:22 compute-2 sudo[257302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:22 compute-2 ceph-mon[77138]: pgmap v1683: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 171 op/s
Nov 29 07:58:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 07:58:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:23.923 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:23 compute-2 ceph-mon[77138]: osdmap e217: 3 total, 3 up, 3 in
Nov 29 07:58:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:23.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:24 compute-2 nova_compute[232428]: 2025-11-29 07:58:24.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:24 compute-2 podman[257328]: 2025-11-29 07:58:24.674871668 +0000 UTC m=+0.073423068 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 07:58:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:24 compute-2 nova_compute[232428]: 2025-11-29 07:58:24.881 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:25 compute-2 ceph-mon[77138]: pgmap v1685: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 88 op/s
Nov 29 07:58:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 07:58:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:25.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:26 compute-2 ceph-mon[77138]: osdmap e218: 3 total, 3 up, 3 in
Nov 29 07:58:27 compute-2 nova_compute[232428]: 2025-11-29 07:58:27.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:27 compute-2 nova_compute[232428]: 2025-11-29 07:58:27.232 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802580737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802580737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:29 compute-2 nova_compute[232428]: 2025-11-29 07:58:29.270 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:29 compute-2 ceph-mon[77138]: pgmap v1687: 305 pgs: 305 active+clean; 531 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 437 KiB/s wr, 65 op/s
Nov 29 07:58:29 compute-2 podman[257352]: 2025-11-29 07:58:29.674281793 +0000 UTC m=+0.074080047 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:58:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:29 compute-2 nova_compute[232428]: 2025-11-29 07:58:29.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:29.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:30.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.540 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.540 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.541 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.541 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a0ebd5be-6171-41dc-8014-4a7eee729935 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:31.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:31 compute-2 nova_compute[232428]: 2025-11-29 07:58:31.974 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:58:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:32.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:32 compute-2 nova_compute[232428]: 2025-11-29 07:58:32.580 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:32 compute-2 nova_compute[232428]: 2025-11-29 07:58:32.601 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:58:32 compute-2 nova_compute[232428]: 2025-11-29 07:58:32.602 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:58:32 compute-2 nova_compute[232428]: 2025-11-29 07:58:32.603 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:33 compute-2 nova_compute[232428]: 2025-11-29 07:58:33.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:33.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:34 compute-2 ceph-mon[77138]: pgmap v1688: 305 pgs: 305 active+clean; 531 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 39 op/s
Nov 29 07:58:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1802580737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1802580737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2195951477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1424215325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:34 compute-2 nova_compute[232428]: 2025-11-29 07:58:34.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:34 compute-2 nova_compute[232428]: 2025-11-29 07:58:34.885 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:35.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:36.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:37 compute-2 nova_compute[232428]: 2025-11-29 07:58:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:37 compute-2 nova_compute[232428]: 2025-11-29 07:58:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:37 compute-2 nova_compute[232428]: 2025-11-29 07:58:37.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:58:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:37.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:38 compute-2 ceph-mon[77138]: pgmap v1689: 305 pgs: 305 active+clean; 484 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 16 KiB/s wr, 62 op/s
Nov 29 07:58:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1351949704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:38 compute-2 ceph-mon[77138]: pgmap v1690: 305 pgs: 305 active+clean; 393 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 18 KiB/s wr, 105 op/s
Nov 29 07:58:38 compute-2 ceph-mon[77138]: pgmap v1691: 305 pgs: 305 active+clean; 393 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 14 KiB/s wr, 86 op/s
Nov 29 07:58:38 compute-2 sshd-session[257376]: Invalid user funded from 45.148.10.240 port 39588
Nov 29 07:58:38 compute-2 sshd-session[257376]: Connection closed by invalid user funded 45.148.10.240 port 39588 [preauth]
Nov 29 07:58:38 compute-2 podman[257378]: 2025-11-29 07:58:38.376536323 +0000 UTC m=+0.123394531 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 07:58:39 compute-2 ceph-mon[77138]: pgmap v1692: 305 pgs: 305 active+clean; 393 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 15 KiB/s wr, 80 op/s
Nov 29 07:58:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2247265864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3457876871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/955524022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.243 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.244 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.244 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.245 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.245 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1920553967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.809 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.887 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.914 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:58:39 compute-2 nova_compute[232428]: 2025-11-29 07:58:39.915 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:58:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:39.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.129 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.131 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4450MB free_disk=20.897319793701172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.131 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.132 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 07:58:40 compute-2 ceph-mon[77138]: pgmap v1693: 305 pgs: 305 active+clean; 393 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.9 KiB/s wr, 67 op/s
Nov 29 07:58:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3329070535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1920553967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:40 compute-2 ceph-mon[77138]: osdmap e219: 3 total, 3 up, 3 in
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.604 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance a0ebd5be-6171-41dc-8014-4a7eee729935 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.604 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.604 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:58:40 compute-2 nova_compute[232428]: 2025-11-29 07:58:40.698 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4037855702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:41 compute-2 nova_compute[232428]: 2025-11-29 07:58:41.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:41 compute-2 nova_compute[232428]: 2025-11-29 07:58:41.245 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:41 compute-2 nova_compute[232428]: 2025-11-29 07:58:41.266 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:58:41 compute-2 nova_compute[232428]: 2025-11-29 07:58:41.290 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:58:41 compute-2 nova_compute[232428]: 2025-11-29 07:58:41.291 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 07:58:41 compute-2 ceph-mon[77138]: pgmap v1694: 305 pgs: 305 active+clean; 395 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 107 KiB/s wr, 78 op/s
Nov 29 07:58:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:41.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:42 compute-2 sudo[257451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:42 compute-2 sudo[257451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:42 compute-2 sudo[257451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:42 compute-2 nova_compute[232428]: 2025-11-29 07:58:42.291 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:42 compute-2 sudo[257476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:42 compute-2 sudo[257476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:42 compute-2 sudo[257476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4037855702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:42 compute-2 ceph-mon[77138]: osdmap e220: 3 total, 3 up, 3 in
Nov 29 07:58:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/665550622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3803348600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:43 compute-2 ceph-mon[77138]: pgmap v1697: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 424 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 51 op/s
Nov 29 07:58:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:43.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:44.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:44 compute-2 nova_compute[232428]: 2025-11-29 07:58:44.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2982941923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 07:58:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:44 compute-2 nova_compute[232428]: 2025-11-29 07:58:44.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:45 compute-2 ceph-mon[77138]: pgmap v1698: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 416 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 49 op/s
Nov 29 07:58:45 compute-2 ceph-mon[77138]: osdmap e221: 3 total, 3 up, 3 in
Nov 29 07:58:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:45.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:46.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 07:58:47 compute-2 ceph-mon[77138]: pgmap v1700: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 309 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 3.4 MiB/s wr, 125 op/s
Nov 29 07:58:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:47.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:48.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:48 compute-2 ceph-mon[77138]: osdmap e222: 3 total, 3 up, 3 in
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.931 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a0ebd5be-6171-41dc-8014-4a7eee729935" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.931 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.931 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a0ebd5be-6171-41dc-8014-4a7eee729935-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.932 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.932 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.934 232432 INFO nova.compute.manager [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Terminating instance
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.935 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.935 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquired lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:58:48 compute-2 nova_compute[232428]: 2025-11-29 07:58:48.935 232432 DEBUG nova.network.neutron [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:58:49 compute-2 ceph-mon[77138]: pgmap v1702: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 309 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 948 KiB/s wr, 83 op/s
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.427 232432 DEBUG nova.network.neutron [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:58:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.858 232432 DEBUG nova.network.neutron [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.873 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Releasing lock "refresh_cache-a0ebd5be-6171-41dc-8014-4a7eee729935" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.874 232432 DEBUG nova.compute.manager [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:58:49 compute-2 nova_compute[232428]: 2025-11-29 07:58:49.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:49.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:50 compute-2 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 29 07:58:50 compute-2 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000035.scope: Consumed 17.094s CPU time.
Nov 29 07:58:50 compute-2 systemd-machined[194747]: Machine qemu-25-instance-00000035 terminated.
Nov 29 07:58:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:50.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:50 compute-2 nova_compute[232428]: 2025-11-29 07:58:50.106 232432 INFO nova.virt.libvirt.driver [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance destroyed successfully.
Nov 29 07:58:50 compute-2 nova_compute[232428]: 2025-11-29 07:58:50.107 232432 DEBUG nova.objects.instance [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'resources' on Instance uuid a0ebd5be-6171-41dc-8014-4a7eee729935 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 07:58:50 compute-2 nova_compute[232428]: 2025-11-29 07:58:50.949 232432 INFO nova.virt.libvirt.driver [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Deleting instance files /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935_del
Nov 29 07:58:50 compute-2 nova_compute[232428]: 2025-11-29 07:58:50.950 232432 INFO nova.virt.libvirt.driver [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Deletion of /var/lib/nova/instances/a0ebd5be-6171-41dc-8014-4a7eee729935_del complete
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.016 232432 INFO nova.compute.manager [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.016 232432 DEBUG oslo.service.loopingcall [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.017 232432 DEBUG nova.compute.manager [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.017 232432 DEBUG nova.network.neutron [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.303 232432 DEBUG nova.network.neutron [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.321 232432 DEBUG nova.network.neutron [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.336 232432 INFO nova.compute.manager [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Took 0.32 seconds to deallocate network for instance.
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.402 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.402 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.492 232432 DEBUG oslo_concurrency.processutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3170569126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.944 232432 DEBUG oslo_concurrency.processutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.952 232432 DEBUG nova.compute.provider_tree [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:51 compute-2 nova_compute[232428]: 2025-11-29 07:58:51.983 232432 DEBUG nova.scheduler.client.report [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:58:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:58:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:58:52 compute-2 nova_compute[232428]: 2025-11-29 07:58:52.054 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:52.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:52 compute-2 nova_compute[232428]: 2025-11-29 07:58:52.112 232432 INFO nova.scheduler.client.report [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Deleted allocations for instance a0ebd5be-6171-41dc-8014-4a7eee729935
Nov 29 07:58:52 compute-2 nova_compute[232428]: 2025-11-29 07:58:52.190 232432 DEBUG oslo_concurrency.lockutils [None req-f2fe8735-ec8d-49ca-b044-747c1acb2996 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a0ebd5be-6171-41dc-8014-4a7eee729935" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:53 compute-2 ceph-mon[77138]: pgmap v1703: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 310 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 923 KiB/s rd, 1.4 MiB/s wr, 106 op/s
Nov 29 07:58:53 compute-2 ceph-mon[77138]: osdmap e223: 3 total, 3 up, 3 in
Nov 29 07:58:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:54.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:54 compute-2 ceph-mon[77138]: pgmap v1705: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 304 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 268 op/s
Nov 29 07:58:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3170569126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/212407046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2854085679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:54 compute-2 nova_compute[232428]: 2025-11-29 07:58:54.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:54 compute-2 nova_compute[232428]: 2025-11-29 07:58:54.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-2 podman[257553]: 2025-11-29 07:58:55.647574127 +0000 UTC m=+0.055769425 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 07:58:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:56.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:56 compute-2 nova_compute[232428]: 2025-11-29 07:58:56.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:56.877 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:58:56.878 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:58:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:58:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:58:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:58:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:58:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:58.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:58:58 compute-2 ceph-mon[77138]: pgmap v1706: 305 pgs: 305 active+clean; 274 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Nov 29 07:58:59 compute-2 nova_compute[232428]: 2025-11-29 07:58:59.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:59 compute-2 nova_compute[232428]: 2025-11-29 07:58:59.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:00 compute-2 podman[257574]: 2025-11-29 07:59:00.707271436 +0000 UTC m=+0.104123538 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:59:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:02 compute-2 sudo[257595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:02 compute-2 sudo[257595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:02 compute-2 sudo[257595]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:02 compute-2 sudo[257620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:02 compute-2 sudo[257620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:02 compute-2 sudo[257620]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 07:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:03.305 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:03.305 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:03.305 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:03.881 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:04 compute-2 nova_compute[232428]: 2025-11-29 07:59:04.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:04 compute-2 nova_compute[232428]: 2025-11-29 07:59:04.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:05 compute-2 nova_compute[232428]: 2025-11-29 07:59:05.106 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403130.1043055, a0ebd5be-6171-41dc-8014-4a7eee729935 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:05 compute-2 nova_compute[232428]: 2025-11-29 07:59:05.106 232432 INFO nova.compute.manager [-] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] VM Stopped (Lifecycle Event)
Nov 29 07:59:05 compute-2 nova_compute[232428]: 2025-11-29 07:59:05.152 232432 DEBUG nova.compute.manager [None req-bead876d-20b3-48e1-b89f-c1feaefad107 - - - - - -] [instance: a0ebd5be-6171-41dc-8014-4a7eee729935] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:05 compute-2 ceph-mon[77138]: pgmap v1707: 305 pgs: 305 active+clean; 157 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 29 07:59:05 compute-2 ceph-mon[77138]: pgmap v1708: 305 pgs: 305 active+clean; 157 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 29 07:59:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:06.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:08.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:08 compute-2 ceph-mon[77138]: pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Nov 29 07:59:08 compute-2 ceph-mon[77138]: pgmap v1710: 305 pgs: 305 active+clean; 135 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 496 KiB/s wr, 74 op/s
Nov 29 07:59:08 compute-2 ceph-mon[77138]: pgmap v1711: 305 pgs: 305 active+clean; 136 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 836 KiB/s wr, 74 op/s
Nov 29 07:59:08 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:59:08 compute-2 ceph-mon[77138]: osdmap e224: 3 total, 3 up, 3 in
Nov 29 07:59:08 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:59:08 compute-2 podman[257649]: 2025-11-29 07:59:08.706275333 +0000 UTC m=+0.108127832 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:59:09 compute-2 nova_compute[232428]: 2025-11-29 07:59:09.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:09 compute-2 ceph-mon[77138]: pgmap v1713: 305 pgs: 305 active+clean; 147 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 07:59:09 compute-2 ceph-mon[77138]: pgmap v1714: 305 pgs: 305 active+clean; 147 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 07:59:09 compute-2 nova_compute[232428]: 2025-11-29 07:59:09.900 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:10 compute-2 sudo[257676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:10 compute-2 sudo[257676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:10 compute-2 sudo[257676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:10 compute-2 sudo[257701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:10 compute-2 sudo[257701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:10 compute-2 sudo[257701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:10 compute-2 sudo[257726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:10 compute-2 sudo[257726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:10 compute-2 sudo[257726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:10 compute-2 sudo[257751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:59:10 compute-2 sudo[257751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:11 compute-2 sudo[257751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:12.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:13 compute-2 ceph-mon[77138]: pgmap v1715: 305 pgs: 305 active+clean; 150 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Nov 29 07:59:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:14.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:14 compute-2 nova_compute[232428]: 2025-11-29 07:59:14.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:14 compute-2 ceph-mon[77138]: pgmap v1716: 305 pgs: 305 active+clean; 155 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Nov 29 07:59:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3976739805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:14 compute-2 nova_compute[232428]: 2025-11-29 07:59:14.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 07:59:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:16 compute-2 ceph-mon[77138]: pgmap v1717: 305 pgs: 305 active+clean; 155 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 29 07:59:17 compute-2 ceph-mon[77138]: pgmap v1718: 305 pgs: 305 active+clean; 160 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 85 op/s
Nov 29 07:59:17 compute-2 ceph-mon[77138]: osdmap e225: 3 total, 3 up, 3 in
Nov 29 07:59:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:59:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:59:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:59:19 compute-2 nova_compute[232428]: 2025-11-29 07:59:19.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:19 compute-2 nova_compute[232428]: 2025-11-29 07:59:19.904 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:20 compute-2 ceph-mon[77138]: pgmap v1720: 305 pgs: 305 active+clean; 160 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 639 KiB/s wr, 97 op/s
Nov 29 07:59:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:59:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:59:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:59:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 07:59:21 compute-2 ceph-mon[77138]: pgmap v1721: 305 pgs: 305 active+clean; 161 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 477 KiB/s wr, 134 op/s
Nov 29 07:59:21 compute-2 ceph-mon[77138]: osdmap e226: 3 total, 3 up, 3 in
Nov 29 07:59:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:22 compute-2 sudo[257813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:22 compute-2 sudo[257813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:22 compute-2 sudo[257813]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:22 compute-2 sudo[257838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:22 compute-2 sudo[257838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:22 compute-2 sudo[257838]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:24 compute-2 nova_compute[232428]: 2025-11-29 07:59:24.317 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:24 compute-2 ceph-mon[77138]: pgmap v1723: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 230 KiB/s wr, 199 op/s
Nov 29 07:59:24 compute-2 nova_compute[232428]: 2025-11-29 07:59:24.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:26 compute-2 ceph-mon[77138]: pgmap v1724: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 794 KiB/s rd, 157 KiB/s wr, 93 op/s
Nov 29 07:59:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:26 compute-2 podman[257865]: 2025-11-29 07:59:26.679553753 +0000 UTC m=+0.080824869 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Nov 29 07:59:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 07:59:27 compute-2 ceph-mon[77138]: pgmap v1725: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 142 KiB/s wr, 105 op/s
Nov 29 07:59:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2970944801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2970944801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:28 compute-2 nova_compute[232428]: 2025-11-29 07:59:28.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:28 compute-2 ceph-mon[77138]: osdmap e227: 3 total, 3 up, 3 in
Nov 29 07:59:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2970944801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2970944801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:29 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 07:59:29 compute-2 ceph-mon[77138]: pgmap v1727: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 94 KiB/s wr, 76 op/s
Nov 29 07:59:29 compute-2 nova_compute[232428]: 2025-11-29 07:59:29.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-2 nova_compute[232428]: 2025-11-29 07:59:29.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:31 compute-2 nova_compute[232428]: 2025-11-29 07:59:31.190 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:31 compute-2 podman[257887]: 2025-11-29 07:59:31.665228059 +0000 UTC m=+0.064401095 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:59:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:32 compute-2 nova_compute[232428]: 2025-11-29 07:59:32.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:33 compute-2 nova_compute[232428]: 2025-11-29 07:59:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:33 compute-2 nova_compute[232428]: 2025-11-29 07:59:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:59:33 compute-2 nova_compute[232428]: 2025-11-29 07:59:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:59:33 compute-2 ceph-mon[77138]: pgmap v1728: 305 pgs: 305 active+clean; 169 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 151 KiB/s wr, 45 op/s
Nov 29 07:59:33 compute-2 nova_compute[232428]: 2025-11-29 07:59:33.220 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:59:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 07:59:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:34 compute-2 ceph-mon[77138]: pgmap v1729: 305 pgs: 305 active+clean; 198 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 115 op/s
Nov 29 07:59:34 compute-2 ceph-mon[77138]: osdmap e228: 3 total, 3 up, 3 in
Nov 29 07:59:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 07:59:34 compute-2 nova_compute[232428]: 2025-11-29 07:59:34.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-2 nova_compute[232428]: 2025-11-29 07:59:34.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:35 compute-2 sudo[257908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:35 compute-2 sudo[257908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:35 compute-2 sudo[257908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:35 compute-2 sudo[257933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:59:35 compute-2 sudo[257933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:35 compute-2 sudo[257933]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:35 compute-2 nova_compute[232428]: 2025-11-29 07:59:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:37 compute-2 ceph-mon[77138]: pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 131 op/s
Nov 29 07:59:37 compute-2 ceph-mon[77138]: osdmap e229: 3 total, 3 up, 3 in
Nov 29 07:59:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:59:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 07:59:37 compute-2 nova_compute[232428]: 2025-11-29 07:59:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:38.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:38 compute-2 nova_compute[232428]: 2025-11-29 07:59:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:38 compute-2 nova_compute[232428]: 2025-11-29 07:59:38.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.330 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.412 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.412 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.427 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.511 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.511 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.521 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.522 232432 INFO nova.compute.claims [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Claim successful on node compute-2.ctlplane.example.com
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.638 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:39 compute-2 podman[257961]: 2025-11-29 07:59:39.696925826 +0000 UTC m=+0.100655159 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:59:39 compute-2 nova_compute[232428]: 2025-11-29 07:59:39.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3293050972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.118 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.129 232432 DEBUG nova.compute.provider_tree [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.159 232432 DEBUG nova.scheduler.client.report [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:59:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:40.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.186 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.187 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.237 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.286 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.287 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.312 232432 INFO nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.343 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:59:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4073074558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.726 232432 INFO nova.virt.block_device [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Booting with volume 46608b98-4ab9-406c-9f99-bd236172e09a at /dev/vda
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.730 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 07:59:40 compute-2 ceph-mon[77138]: pgmap v1733: 305 pgs: 305 active+clean; 264 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.3 MiB/s wr, 248 op/s
Nov 29 07:59:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1064891767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1545253227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.986 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.989 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4664MB free_disk=20.897350311279297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.989 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:40 compute-2 nova_compute[232428]: 2025-11-29 07:59:40.990 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.060 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 311dcf4c-2f7d-4167-89b8-38b38bc69a97 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.061 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.061 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.138 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.183 232432 DEBUG os_brick.utils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.186 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.210 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.211 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[21b1992c-f286-4981-aa89-eb3ed551a029]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.214 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.233 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.234 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[3d01870c-8081-42cb-b264-ca35d6bf1057]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.238 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.256 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.257 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe8f3c3-6330-4435-aec3-6894507469d1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.260 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5c8c63-111c-4ef4-a7c7-ac30f420b124]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.261 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.313 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "nvme version" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.318 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.319 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.319 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.320 232432 DEBUG os_brick.utils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] <== get_connector_properties: return (135ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.320 232432 DEBUG nova.virt.block_device [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updating existing volume attachment record: 344269a2-a2ff-4225-a493-789452434f2f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.330 232432 DEBUG nova.policy [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cbc1fad0f0b481797ed6cafcd266d99', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '19b3dc1655684be39c6d284805874456', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:59:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2492698352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.642 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.652 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.671 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.699 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:59:41 compute-2 nova_compute[232428]: 2025-11-29 07:59:41.700 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1608916804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.649 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.651 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.652 232432 INFO nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Creating image(s)
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.653 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.654 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Ensure instance console log exists: /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.654 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.655 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.655 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.700 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:42 compute-2 sudo[258062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:42 compute-2 sudo[258062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:42 compute-2 sudo[258062]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:42 compute-2 nova_compute[232428]: 2025-11-29 07:59:42.803 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Successfully created port: 8a3d21ee-d888-4edc-8f03-46b1a30ea49d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:59:42 compute-2 sudo[258087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:42 compute-2 sudo[258087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:42 compute-2 sudo[258087]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:42 compute-2 ceph-mon[77138]: pgmap v1734: 305 pgs: 305 active+clean; 264 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 234 op/s
Nov 29 07:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1208097719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:42 compute-2 ceph-mon[77138]: pgmap v1735: 305 pgs: 305 active+clean; 325 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 8.5 MiB/s wr, 178 op/s
Nov 29 07:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3293050972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1698824759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4073074558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:42 compute-2 ceph-mon[77138]: osdmap e230: 3 total, 3 up, 3 in
Nov 29 07:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2492698352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 07:59:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:44.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:44 compute-2 ceph-mon[77138]: pgmap v1737: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 152 op/s
Nov 29 07:59:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1608916804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:44 compute-2 ceph-mon[77138]: osdmap e231: 3 total, 3 up, 3 in
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.383 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.662 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Successfully updated port: 8a3d21ee-d888-4edc-8f03-46b1a30ea49d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.678 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.678 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquired lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.678 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.907 232432 DEBUG nova.compute.manager [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-changed-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.908 232432 DEBUG nova.compute.manager [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Refreshing instance network info cache due to event network-changed-8a3d21ee-d888-4edc-8f03-46b1a30ea49d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.908 232432 DEBUG oslo_concurrency.lockutils [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:44 compute-2 nova_compute[232428]: 2025-11-29 07:59:44.916 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:45 compute-2 nova_compute[232428]: 2025-11-29 07:59:45.299 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:59:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 07:59:46 compute-2 ceph-mon[77138]: pgmap v1739: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 36 op/s
Nov 29 07:59:46 compute-2 nova_compute[232428]: 2025-11-29 07:59:46.986 232432 DEBUG nova.network.neutron [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updating instance_info_cache with network_info: [{"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.013 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Releasing lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.013 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Instance network_info: |[{"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.013 232432 DEBUG oslo_concurrency.lockutils [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.014 232432 DEBUG nova.network.neutron [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Refreshing network info cache for port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.018 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Start _get_guest_xml network_info=[{"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-46608b98-4ab9-406c-9f99-bd236172e09a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '46608b98-4ab9-406c-9f99-bd236172e09a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '311dcf4c-2f7d-4167-89b8-38b38bc69a97', 'attached_at': '', 'detached_at': '', 'volume_id': '46608b98-4ab9-406c-9f99-bd236172e09a', 'serial': '46608b98-4ab9-406c-9f99-bd236172e09a'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'attachment_id': '344269a2-a2ff-4225-a493-789452434f2f', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.024 232432 WARNING nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.029 232432 DEBUG nova.virt.libvirt.host [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.030 232432 DEBUG nova.virt.libvirt.host [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.038 232432 DEBUG nova.virt.libvirt.host [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.039 232432 DEBUG nova.virt.libvirt.host [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.041 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.042 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.042 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.043 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.043 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.043 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.044 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.044 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.045 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.046 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.046 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.047 232432 DEBUG nova.virt.hardware [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.089 232432 DEBUG nova.storage.rbd_utils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] rbd image 311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.094 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/697158351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.553 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.579 232432 DEBUG nova.virt.libvirt.vif [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-15491888',display_name='tempest-ServersTestBootFromVolume-server-15491888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-15491888',id=57,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJjs6htVYe+wZ7NYjwEtT/P0WTPUh39fnxoi14b9V7CcHZwiJnJb0C3gZN9ek1lJ0UhRiNmmWDpPCnEL07S/4mdyLcXCKKcUCpP+fR4Uo57apDGHTIQYcgntkomgGw3QA==',key_name='tempest-keypair-696215134',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19b3dc1655684be39c6d284805874456',ramdisk_id='',reservation_id='r-2f9ofgt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1981313472',owner_user_name='tempest-ServersTestBootFromVolume-1981313472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0cbc1fad0f0b481797ed6cafcd266d99',uuid=311dcf4c-2f7d-4167-89b8-38b38bc69a97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.580 232432 DEBUG nova.network.os_vif_util [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converting VIF {"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.581 232432 DEBUG nova.network.os_vif_util [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.583 232432 DEBUG nova.objects.instance [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lazy-loading 'pci_devices' on Instance uuid 311dcf4c-2f7d-4167-89b8-38b38bc69a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.601 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <uuid>311dcf4c-2f7d-4167-89b8-38b38bc69a97</uuid>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <name>instance-00000039</name>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <metadata>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestBootFromVolume-server-15491888</nova:name>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 07:59:47</nova:creationTime>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:user uuid="0cbc1fad0f0b481797ed6cafcd266d99">tempest-ServersTestBootFromVolume-1981313472-project-member</nova:user>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:project uuid="19b3dc1655684be39c6d284805874456">tempest-ServersTestBootFromVolume-1981313472</nova:project>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <nova:port uuid="8a3d21ee-d888-4edc-8f03-46b1a30ea49d">
Nov 29 07:59:47 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </metadata>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <system>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="serial">311dcf4c-2f7d-4167-89b8-38b38bc69a97</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="uuid">311dcf4c-2f7d-4167-89b8-38b38bc69a97</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </system>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <os>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </os>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <features>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <apic/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </features>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </clock>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </cpu>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   <devices>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config">
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </source>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-46608b98-4ab9-406c-9f99-bd236172e09a">
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </source>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 07:59:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <serial>46608b98-4ab9-406c-9f99-bd236172e09a</serial>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:05:66:b2"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <target dev="tap8a3d21ee-d8"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </interface>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/console.log" append="off"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </serial>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <video>
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </video>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </rng>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 07:59:47 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 07:59:47 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 07:59:47 compute-2 nova_compute[232428]:   </devices>
Nov 29 07:59:47 compute-2 nova_compute[232428]: </domain>
Nov 29 07:59:47 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.604 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Preparing to wait for external event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.604 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.605 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.605 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.606 232432 DEBUG nova.virt.libvirt.vif [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-15491888',display_name='tempest-ServersTestBootFromVolume-server-15491888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-15491888',id=57,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJjs6htVYe+wZ7NYjwEtT/P0WTPUh39fnxoi14b9V7CcHZwiJnJb0C3gZN9ek1lJ0UhRiNmmWDpPCnEL07S/4mdyLcXCKKcUCpP+fR4Uo57apDGHTIQYcgntkomgGw3QA==',key_name='tempest-keypair-696215134',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19b3dc1655684be39c6d284805874456',ramdisk_id='',reservation_id='r-2f9ofgt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1981313472',owner_user_name='tempest-ServersTestBootFromVolume-1981313472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0cbc1fad0f0b481797ed6cafcd266d99',uuid=311dcf4c-2f7d-4167-89b8-38b38bc69a97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.607 232432 DEBUG nova.network.os_vif_util [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converting VIF {"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.608 232432 DEBUG nova.network.os_vif_util [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.609 232432 DEBUG os_vif [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.610 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.611 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.612 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.616 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.617 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a3d21ee-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.617 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a3d21ee-d8, col_values=(('external_ids', {'iface-id': '8a3d21ee-d888-4edc-8f03-46b1a30ea49d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:66:b2', 'vm-uuid': '311dcf4c-2f7d-4167-89b8-38b38bc69a97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:47 compute-2 NetworkManager[48993]: <info>  [1764403187.6214] manager: (tap8a3d21ee-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.628 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.629 232432 INFO os_vif [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8')
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.685 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.686 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.686 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] No VIF found with MAC fa:16:3e:05:66:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.687 232432 INFO nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Using config drive
Nov 29 07:59:47 compute-2 nova_compute[232428]: 2025-11-29 07:59:47.723 232432 DEBUG nova.storage.rbd_utils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] rbd image 311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:48.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:48 compute-2 ceph-mon[77138]: pgmap v1740: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 66 op/s
Nov 29 07:59:48 compute-2 ceph-mon[77138]: osdmap e232: 3 total, 3 up, 3 in
Nov 29 07:59:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/697158351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.376 232432 INFO nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Creating config drive at /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.385 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu1jea1mv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.538 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu1jea1mv" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.573 232432 DEBUG nova.storage.rbd_utils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] rbd image 311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.578 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config 311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.777 232432 DEBUG oslo_concurrency.processutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config 311dcf4c-2f7d-4167-89b8-38b38bc69a97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.779 232432 INFO nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Deleting local config drive /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97/disk.config because it was imported into RBD.
Nov 29 07:59:49 compute-2 kernel: tap8a3d21ee-d8: entered promiscuous mode
Nov 29 07:59:49 compute-2 NetworkManager[48993]: <info>  [1764403189.8447] manager: (tap8a3d21ee-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 07:59:49 compute-2 ovn_controller[134375]: 2025-11-29T07:59:49Z|00194|binding|INFO|Claiming lport 8a3d21ee-d888-4edc-8f03-46b1a30ea49d for this chassis.
Nov 29 07:59:49 compute-2 ovn_controller[134375]: 2025-11-29T07:59:49Z|00195|binding|INFO|8a3d21ee-d888-4edc-8f03-46b1a30ea49d: Claiming fa:16:3e:05:66:b2 10.100.0.4
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.847 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.871 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:66:b2 10.100.0.4'], port_security=['fa:16:3e:05:66:b2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '311dcf4c-2f7d-4167-89b8-38b38bc69a97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-102b9a2a-4498-4e54-b68f-b012204f34c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19b3dc1655684be39c6d284805874456', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c9c3baea-0b58-4170-9fab-9a72227c3fb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe36a77-2f90-42a7-af4d-2970ecd2e39c, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=8a3d21ee-d888-4edc-8f03-46b1a30ea49d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.872 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d in datapath 102b9a2a-4498-4e54-b68f-b012204f34c0 bound to our chassis
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.873 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 102b9a2a-4498-4e54-b68f-b012204f34c0
Nov 29 07:59:49 compute-2 systemd-machined[194747]: New machine qemu-26-instance-00000039.
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.890 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[da32f9ad-d456-4bfe-ae90-472df8bc2395]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.891 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap102b9a2a-41 in ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.894 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap102b9a2a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9acee0-20ad-4b30-b983-4c2af7be8a97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.895 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[034220a7-147f-41b7-b1a1-a8f6c90f013a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 systemd[1]: Started Virtual Machine qemu-26-instance-00000039.
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.909 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[31c742c0-e927-4592-a414-08e1c6212eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 systemd-udevd[258232]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:59:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.939 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0337b0af-6f4e-4621-99c6-9b27d0849cc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 NetworkManager[48993]: <info>  [1764403189.9451] device (tap8a3d21ee-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:59:49 compute-2 NetworkManager[48993]: <info>  [1764403189.9462] device (tap8a3d21ee-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:59:49 compute-2 ovn_controller[134375]: 2025-11-29T07:59:49Z|00196|binding|INFO|Setting lport 8a3d21ee-d888-4edc-8f03-46b1a30ea49d ovn-installed in OVS
Nov 29 07:59:49 compute-2 ovn_controller[134375]: 2025-11-29T07:59:49Z|00197|binding|INFO|Setting lport 8a3d21ee-d888-4edc-8f03-46b1a30ea49d up in Southbound
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:49 compute-2 nova_compute[232428]: 2025-11-29 07:59:49.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.988 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d73127-c1cc-48ab-8ca7-1d16c9d92009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:49.995 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2044fa81-a23d-4176-8e46-f00b722101ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:49 compute-2 NetworkManager[48993]: <info>  [1764403189.9965] manager: (tap102b9a2a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 07:59:49 compute-2 systemd-udevd[258239]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.042 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[10df068c-283e-4a2d-9310-b417cf50b21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.047 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[652bbca4-4b0e-4e0c-a935-251e9da5f0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:50 compute-2 NetworkManager[48993]: <info>  [1764403190.0816] device (tap102b9a2a-40): carrier: link connected
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.095 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4e22fc66-8f2f-4179-9886-01f1752e0c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.124 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c94687-7c7a-4a7e-85db-1708f160ca26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap102b9a2a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:f7:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615634, 'reachable_time': 35078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258262, 'error': None, 'target': 'ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.142 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7541a595-daa1-4ca3-b0e8-fd0861d19e24]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:f7a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615634, 'tstamp': 615634}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258263, 'error': None, 'target': 'ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.170 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6f99c76b-1760-49d9-b321-14f661795005]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap102b9a2a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:f7:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615634, 'reachable_time': 35078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258264, 'error': None, 'target': 'ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:50.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.209 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[38d27439-baf4-4364-b026-1dd5673352ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.313 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7928126d-4c78-4dae-a0ae-f6858116d6f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.315 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap102b9a2a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.315 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.316 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap102b9a2a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:50 compute-2 kernel: tap102b9a2a-40: entered promiscuous mode
Nov 29 07:59:50 compute-2 NetworkManager[48993]: <info>  [1764403190.3202] manager: (tap102b9a2a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.322 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap102b9a2a-40, col_values=(('external_ids', {'iface-id': 'cfbfae18-c6d1-4a8a-b44a-ba56ce0c6028'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.324 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-2 ovn_controller[134375]: 2025-11-29T07:59:50Z|00198|binding|INFO|Releasing lport cfbfae18-c6d1-4a8a-b44a-ba56ce0c6028 from this chassis (sb_readonly=0)
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.345 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/102b9a2a-4498-4e54-b68f-b012204f34c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/102b9a2a-4498-4e54-b68f-b012204f34c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.347 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[73ebf870-e947-49a6-8e02-e306e7159fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.347 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: global
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-102b9a2a-4498-4e54-b68f-b012204f34c0
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/102b9a2a-4498-4e54-b68f-b012204f34c0.pid.haproxy
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 102b9a2a-4498-4e54-b68f-b012204f34c0
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:59:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:50.348 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0', 'env', 'PROCESS_TAG=haproxy-102b9a2a-4498-4e54-b68f-b012204f34c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/102b9a2a-4498-4e54-b68f-b012204f34c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.703 232432 DEBUG nova.compute.manager [req-d64deafd-14f2-4df1-893d-bd65d0470641 req-5c871583-6442-45ed-9dab-5bc6d8bae77e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.704 232432 DEBUG oslo_concurrency.lockutils [req-d64deafd-14f2-4df1-893d-bd65d0470641 req-5c871583-6442-45ed-9dab-5bc6d8bae77e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.704 232432 DEBUG oslo_concurrency.lockutils [req-d64deafd-14f2-4df1-893d-bd65d0470641 req-5c871583-6442-45ed-9dab-5bc6d8bae77e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.704 232432 DEBUG oslo_concurrency.lockutils [req-d64deafd-14f2-4df1-893d-bd65d0470641 req-5c871583-6442-45ed-9dab-5bc6d8bae77e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.704 232432 DEBUG nova.compute.manager [req-d64deafd-14f2-4df1-893d-bd65d0470641 req-5c871583-6442-45ed-9dab-5bc6d8bae77e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Processing event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:59:50 compute-2 podman[258311]: 2025-11-29 07:59:50.788219315 +0000 UTC m=+0.079016462 container create 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:59:50 compute-2 ceph-mon[77138]: pgmap v1742: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 4.4 KiB/s wr, 35 op/s
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.827100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190827219, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2660, "num_deletes": 267, "total_data_size": 6129528, "memory_usage": 6216984, "flush_reason": "Manual Compaction"}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 29 07:59:50 compute-2 podman[258311]: 2025-11-29 07:59:50.74457466 +0000 UTC m=+0.035371917 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:59:50 compute-2 systemd[1]: Started libpod-conmon-969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821.scope.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190860825, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4002433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31902, "largest_seqno": 34557, "table_properties": {"data_size": 3991328, "index_size": 7215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23577, "raw_average_key_size": 21, "raw_value_size": 3969064, "raw_average_value_size": 3598, "num_data_blocks": 310, "num_entries": 1103, "num_filter_entries": 1103, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402984, "oldest_key_time": 1764402984, "file_creation_time": 1764403190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 33774 microseconds, and 15137 cpu microseconds.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.860885) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4002433 bytes OK
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.860919) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.863616) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.863649) EVENT_LOG_v1 {"time_micros": 1764403190863640, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.863673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6117673, prev total WAL file size 6117673, number of live WAL files 2.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.865873) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3908KB)], [60(9328KB)]
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190865986, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13554674, "oldest_snapshot_seqno": -1}
Nov 29 07:59:50 compute-2 systemd[1]: Started libcrun container.
Nov 29 07:59:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312d4d4628eb8d931ffcfdbde3a8bcad0c5ec694b03e6d3570543888b42c394b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-2 podman[258311]: 2025-11-29 07:59:50.89005659 +0000 UTC m=+0.180853767 container init 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:59:50 compute-2 podman[258311]: 2025-11-29 07:59:50.896681507 +0000 UTC m=+0.187478654 container start 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.907 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403190.9066598, 311dcf4c-2f7d-4167-89b8-38b38bc69a97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.908 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] VM Started (Lifecycle Event)
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.910 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.915 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.919 232432 INFO nova.virt.libvirt.driver [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Instance spawned successfully.
Nov 29 07:59:50 compute-2 nova_compute[232428]: 2025-11-29 07:59:50.919 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:59:50 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [NOTICE]   (258354) : New worker (258356) forked
Nov 29 07:59:50 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [NOTICE]   (258354) : Loading success.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6566 keys, 11703741 bytes, temperature: kUnknown
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190968545, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11703741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11657538, "index_size": 28683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 167470, "raw_average_key_size": 25, "raw_value_size": 11537405, "raw_average_value_size": 1757, "num_data_blocks": 1154, "num_entries": 6566, "num_filter_entries": 6566, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.968837) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11703741 bytes
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.973849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.0 rd, 114.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 7103, records dropped: 537 output_compression: NoCompression
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.973873) EVENT_LOG_v1 {"time_micros": 1764403190973860, "job": 36, "event": "compaction_finished", "compaction_time_micros": 102649, "compaction_time_cpu_micros": 29778, "output_level": 6, "num_output_files": 1, "total_output_size": 11703741, "num_input_records": 7103, "num_output_records": 6566, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190974737, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190977514, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.865754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.977694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.977710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.977712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.977713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:50 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-07:59:50.977715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.025 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.033 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.036 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.037 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.037 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.038 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.038 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.039 232432 DEBUG nova.virt.libvirt.driver [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.080 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.081 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403190.9068859, 311dcf4c-2f7d-4167-89b8-38b38bc69a97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.082 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] VM Paused (Lifecycle Event)
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.113 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.116 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403190.9142468, 311dcf4c-2f7d-4167-89b8-38b38bc69a97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.116 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] VM Resumed (Lifecycle Event)
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.255 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.264 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.284 232432 INFO nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Took 8.63 seconds to spawn the instance on the hypervisor.
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.285 232432 DEBUG nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.300 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.358 232432 INFO nova.compute.manager [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Took 11.88 seconds to build instance.
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.438 232432 DEBUG oslo_concurrency.lockutils [None req-0960cb1a-1ae9-49db-9734-bba2a48fba76 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.671 232432 DEBUG nova.network.neutron [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updated VIF entry in instance network info cache for port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.672 232432 DEBUG nova.network.neutron [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updating instance_info_cache with network_info: [{"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:51 compute-2 nova_compute[232428]: 2025-11-29 07:59:51.717 232432 DEBUG oslo_concurrency.lockutils [req-0fe25a3a-b87f-438f-8a85-4fc44ac982ca req-4704d3eb-76be-444d-be01-f9d457958840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:52 compute-2 ceph-mon[77138]: pgmap v1743: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 3.9 KiB/s wr, 31 op/s
Nov 29 07:59:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:52.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.621 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.852 232432 DEBUG nova.compute.manager [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.853 232432 DEBUG oslo_concurrency.lockutils [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.853 232432 DEBUG oslo_concurrency.lockutils [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.854 232432 DEBUG oslo_concurrency.lockutils [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.854 232432 DEBUG nova.compute.manager [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] No waiting events found dispatching network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:59:52 compute-2 nova_compute[232428]: 2025-11-29 07:59:52.854 232432 WARNING nova.compute.manager [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received unexpected event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d for instance with vm_state active and task_state None.
Nov 29 07:59:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:54.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:54 compute-2 NetworkManager[48993]: <info>  [1764403194.5142] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 07:59:54 compute-2 NetworkManager[48993]: <info>  [1764403194.5150] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 07:59:54 compute-2 nova_compute[232428]: 2025-11-29 07:59:54.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:54 compute-2 nova_compute[232428]: 2025-11-29 07:59:54.646 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:54 compute-2 ovn_controller[134375]: 2025-11-29T07:59:54Z|00199|binding|INFO|Releasing lport cfbfae18-c6d1-4a8a-b44a-ba56ce0c6028 from this chassis (sb_readonly=0)
Nov 29 07:59:54 compute-2 nova_compute[232428]: 2025-11-29 07:59:54.663 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:54 compute-2 nova_compute[232428]: 2025-11-29 07:59:54.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:55 compute-2 ceph-mon[77138]: pgmap v1744: 305 pgs: 305 active+clean; 280 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 25 KiB/s wr, 53 op/s
Nov 29 07:59:55 compute-2 nova_compute[232428]: 2025-11-29 07:59:55.548 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 07:59:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:56.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 07:59:56 compute-2 nova_compute[232428]: 2025-11-29 07:59:56.388 232432 DEBUG nova.compute.manager [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-changed-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:56 compute-2 nova_compute[232428]: 2025-11-29 07:59:56.389 232432 DEBUG nova.compute.manager [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Refreshing instance network info cache due to event network-changed-8a3d21ee-d888-4edc-8f03-46b1a30ea49d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:59:56 compute-2 nova_compute[232428]: 2025-11-29 07:59:56.389 232432 DEBUG oslo_concurrency.lockutils [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:56 compute-2 nova_compute[232428]: 2025-11-29 07:59:56.389 232432 DEBUG oslo_concurrency.lockutils [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:56 compute-2 nova_compute[232428]: 2025-11-29 07:59:56.389 232432 DEBUG nova.network.neutron [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Refreshing network info cache for port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:59:56 compute-2 ceph-mon[77138]: pgmap v1745: 305 pgs: 305 active+clean; 224 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 23 KiB/s wr, 98 op/s
Nov 29 07:59:57 compute-2 nova_compute[232428]: 2025-11-29 07:59:57.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:57 compute-2 podman[258370]: 2025-11-29 07:59:57.67665817 +0000 UTC m=+0.076003068 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 07:59:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:57.743 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:57 compute-2 nova_compute[232428]: 2025-11-29 07:59:57.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 07:59:57.746 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:59:57 compute-2 ceph-mon[77138]: pgmap v1746: 305 pgs: 305 active+clean; 167 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 129 op/s
Nov 29 07:59:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4006629938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 07:59:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 07:59:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 07:59:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 07:59:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 07:59:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 07:59:59 compute-2 nova_compute[232428]: 2025-11-29 07:59:59.392 232432 DEBUG nova.network.neutron [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updated VIF entry in instance network info cache for port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:59:59 compute-2 nova_compute[232428]: 2025-11-29 07:59:59.393 232432 DEBUG nova.network.neutron [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updating instance_info_cache with network_info: [{"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:59 compute-2 ceph-mon[77138]: pgmap v1747: 305 pgs: 305 active+clean; 167 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 117 op/s
Nov 29 07:59:59 compute-2 ceph-mon[77138]: osdmap e233: 3 total, 3 up, 3 in
Nov 29 07:59:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/616399820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:59 compute-2 nova_compute[232428]: 2025-11-29 07:59:59.415 232432 DEBUG oslo_concurrency.lockutils [req-3b0ec3de-b109-4281-927c-2a251cc60e49 req-b67e1a7c-3a51-4c33-956c-acff2e462ff2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-311dcf4c-2f7d-4167-89b8-38b38bc69a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:59 compute-2 nova_compute[232428]: 2025-11-29 07:59:59.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:00.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2478403709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:00:01 compute-2 ceph-mon[77138]: pgmap v1749: 305 pgs: 305 active+clean; 157 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 76 KiB/s wr, 171 op/s
Nov 29 08:00:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:02 compute-2 nova_compute[232428]: 2025-11-29 08:00:02.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:02 compute-2 nova_compute[232428]: 2025-11-29 08:00:02.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:02 compute-2 podman[258392]: 2025-11-29 08:00:02.693388317 +0000 UTC m=+0.087244319 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:00:02 compute-2 sudo[258412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:02 compute-2 sudo[258412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:02 compute-2 sudo[258412]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:03 compute-2 sudo[258437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:03 compute-2 sudo[258437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:03 compute-2 sudo[258437]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:03.306 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:03.306 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:03.307 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:03 compute-2 ceph-mon[77138]: pgmap v1750: 305 pgs: 305 active+clean; 134 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 183 op/s
Nov 29 08:00:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1426721546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/911836961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:04 compute-2 nova_compute[232428]: 2025-11-29 08:00:04.964 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:05 compute-2 ovn_controller[134375]: 2025-11-29T08:00:05Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:66:b2 10.100.0.4
Nov 29 08:00:05 compute-2 ovn_controller[134375]: 2025-11-29T08:00:05Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:66:b2 10.100.0.4
Nov 29 08:00:05 compute-2 ceph-mon[77138]: pgmap v1751: 305 pgs: 305 active+clean; 134 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 29 08:00:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:06.749 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:07 compute-2 ceph-mon[77138]: pgmap v1752: 305 pgs: 305 active+clean; 146 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 3.5 MiB/s wr, 133 op/s
Nov 29 08:00:07 compute-2 nova_compute[232428]: 2025-11-29 08:00:07.628 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:09 compute-2 nova_compute[232428]: 2025-11-29 08:00:09.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:10 compute-2 ceph-mon[77138]: pgmap v1753: 305 pgs: 305 active+clean; 146 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 3.5 MiB/s wr, 133 op/s
Nov 29 08:00:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:10 compute-2 podman[258466]: 2025-11-29 08:00:10.716514981 +0000 UTC m=+0.104711016 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:00:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1837080993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:11 compute-2 ceph-mon[77138]: pgmap v1754: 305 pgs: 305 active+clean; 165 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 4.0 MiB/s wr, 95 op/s
Nov 29 08:00:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/815686413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:12.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:12 compute-2 nova_compute[232428]: 2025-11-29 08:00:12.631 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 ceph-mon[77138]: pgmap v1755: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Nov 29 08:00:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.419 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.420 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.420 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.420 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.420 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.422 232432 INFO nova.compute.manager [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Terminating instance
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.422 232432 DEBUG nova.compute.manager [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:00:13 compute-2 kernel: tap8a3d21ee-d8 (unregistering): left promiscuous mode
Nov 29 08:00:13 compute-2 NetworkManager[48993]: <info>  [1764403213.5285] device (tap8a3d21ee-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.542 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 ovn_controller[134375]: 2025-11-29T08:00:13Z|00200|binding|INFO|Releasing lport 8a3d21ee-d888-4edc-8f03-46b1a30ea49d from this chassis (sb_readonly=0)
Nov 29 08:00:13 compute-2 ovn_controller[134375]: 2025-11-29T08:00:13Z|00201|binding|INFO|Setting lport 8a3d21ee-d888-4edc-8f03-46b1a30ea49d down in Southbound
Nov 29 08:00:13 compute-2 ovn_controller[134375]: 2025-11-29T08:00:13Z|00202|binding|INFO|Removing iface tap8a3d21ee-d8 ovn-installed in OVS
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.547 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.587 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:66:b2 10.100.0.4'], port_security=['fa:16:3e:05:66:b2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '311dcf4c-2f7d-4167-89b8-38b38bc69a97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-102b9a2a-4498-4e54-b68f-b012204f34c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19b3dc1655684be39c6d284805874456', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c9c3baea-0b58-4170-9fab-9a72227c3fb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe36a77-2f90-42a7-af4d-2970ecd2e39c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=8a3d21ee-d888-4edc-8f03-46b1a30ea49d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.589 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 8a3d21ee-d888-4edc-8f03-46b1a30ea49d in datapath 102b9a2a-4498-4e54-b68f-b012204f34c0 unbound from our chassis
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.592 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 102b9a2a-4498-4e54-b68f-b012204f34c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.595 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2dd250c-ffe7-4a04-828d-85cbe52fcca7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.596 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0 namespace which is not needed anymore
Nov 29 08:00:13 compute-2 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 29 08:00:13 compute-2 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000039.scope: Consumed 15.392s CPU time.
Nov 29 08:00:13 compute-2 systemd-machined[194747]: Machine qemu-26-instance-00000039 terminated.
Nov 29 08:00:13 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [NOTICE]   (258354) : haproxy version is 2.8.14-c23fe91
Nov 29 08:00:13 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [NOTICE]   (258354) : path to executable is /usr/sbin/haproxy
Nov 29 08:00:13 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [WARNING]  (258354) : Exiting Master process...
Nov 29 08:00:13 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [ALERT]    (258354) : Current worker (258356) exited with code 143 (Terminated)
Nov 29 08:00:13 compute-2 neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0[258349]: [WARNING]  (258354) : All workers exited. Exiting... (0)
Nov 29 08:00:13 compute-2 systemd[1]: libpod-969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821.scope: Deactivated successfully.
Nov 29 08:00:13 compute-2 podman[258521]: 2025-11-29 08:00:13.788622412 +0000 UTC m=+0.061887667 container died 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:00:13 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821-userdata-shm.mount: Deactivated successfully.
Nov 29 08:00:13 compute-2 systemd[1]: var-lib-containers-storage-overlay-312d4d4628eb8d931ffcfdbde3a8bcad0c5ec694b03e6d3570543888b42c394b-merged.mount: Deactivated successfully.
Nov 29 08:00:13 compute-2 podman[258521]: 2025-11-29 08:00:13.851620902 +0000 UTC m=+0.124886127 container cleanup 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:00:13 compute-2 NetworkManager[48993]: <info>  [1764403213.8517] manager: (tap8a3d21ee-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 08:00:13 compute-2 systemd[1]: libpod-conmon-969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821.scope: Deactivated successfully.
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.873 232432 INFO nova.virt.libvirt.driver [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Instance destroyed successfully.
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.874 232432 DEBUG nova.objects.instance [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lazy-loading 'resources' on Instance uuid 311dcf4c-2f7d-4167-89b8-38b38bc69a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.888 232432 DEBUG nova.virt.libvirt.vif [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-15491888',display_name='tempest-ServersTestBootFromVolume-server-15491888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-15491888',id=57,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJjs6htVYe+wZ7NYjwEtT/P0WTPUh39fnxoi14b9V7CcHZwiJnJb0C3gZN9ek1lJ0UhRiNmmWDpPCnEL07S/4mdyLcXCKKcUCpP+fR4Uo57apDGHTIQYcgntkomgGw3QA==',key_name='tempest-keypair-696215134',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19b3dc1655684be39c6d284805874456',ramdisk_id='',reservation_id='r-2f9ofgt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1981313472',owner_user_name='tempest-ServersTestBootFromVolume-1981313472-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0cbc1fad0f0b481797ed6cafcd266d99',uuid=311dcf4c-2f7d-4167-89b8-38b38bc69a97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.889 232432 DEBUG nova.network.os_vif_util [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converting VIF {"id": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "address": "fa:16:3e:05:66:b2", "network": {"id": "102b9a2a-4498-4e54-b68f-b012204f34c0", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-1492673654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19b3dc1655684be39c6d284805874456", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3d21ee-d8", "ovs_interfaceid": "8a3d21ee-d888-4edc-8f03-46b1a30ea49d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.890 232432 DEBUG nova.network.os_vif_util [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.890 232432 DEBUG os_vif [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.894 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.894 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a3d21ee-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.900 232432 INFO os_vif [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:66:b2,bridge_name='br-int',has_traffic_filtering=True,id=8a3d21ee-d888-4edc-8f03-46b1a30ea49d,network=Network(102b9a2a-4498-4e54-b68f-b012204f34c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3d21ee-d8')
Nov 29 08:00:13 compute-2 podman[258558]: 2025-11-29 08:00:13.928471295 +0000 UTC m=+0.050191890 container remove 969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.936 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c129ce61-5c6b-4239-a84e-a54a4855e652]: (4, ('Sat Nov 29 08:00:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0 (969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821)\n969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821\nSat Nov 29 08:00:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0 (969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821)\n969547360d810cf8685c814433571bcb9815a99bd95fd4f1c4e424efb319f821\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.939 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[abe0f78a-bf8b-4b5f-aa9f-a2c76ee8bcdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.941 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap102b9a2a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.944 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 kernel: tap102b9a2a-40: left promiscuous mode
Nov 29 08:00:13 compute-2 nova_compute[232428]: 2025-11-29 08:00:13.961 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4b152e-74ab-46e4-b576-2c3bdcf95fa8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.981 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[11191fc2-ea14-46d6-9437-e294d6d87552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:13.984 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[daa3b212-6d16-4e50-9843-be3308ddb725]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:14.005 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[79b88832-e2dc-4d93-a924-4db54e979dbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615624, 'reachable_time': 40762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258594, 'error': None, 'target': 'ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:14 compute-2 systemd[1]: run-netns-ovnmeta\x2d102b9a2a\x2d4498\x2d4e54\x2db68f\x2db012204f34c0.mount: Deactivated successfully.
Nov 29 08:00:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:14.010 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-102b9a2a-4498-4e54-b68f-b012204f34c0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:00:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:14.011 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2fe7e4-4c43-4fc4-b953-73bd27999bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:14 compute-2 ceph-mon[77138]: osdmap e234: 3 total, 3 up, 3 in
Nov 29 08:00:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:14 compute-2 nova_compute[232428]: 2025-11-29 08:00:14.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.076 232432 INFO nova.virt.libvirt.driver [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Deleting instance files /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97_del
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.077 232432 INFO nova.virt.libvirt.driver [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Deletion of /var/lib/nova/instances/311dcf4c-2f7d-4167-89b8-38b38bc69a97_del complete
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.140 232432 INFO nova.compute.manager [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Took 1.72 seconds to destroy the instance on the hypervisor.
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.141 232432 DEBUG oslo.service.loopingcall [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.141 232432 DEBUG nova.compute.manager [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.141 232432 DEBUG nova.network.neutron [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:00:15 compute-2 ceph-mon[77138]: pgmap v1757: 305 pgs: 305 active+clean; 197 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 173 op/s
Nov 29 08:00:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3148484917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2823277088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.869 232432 DEBUG nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-unplugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.870 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.871 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.872 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.872 232432 DEBUG nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] No waiting events found dispatching network-vif-unplugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.872 232432 DEBUG nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-unplugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.873 232432 DEBUG nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.873 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.874 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.874 232432 DEBUG oslo_concurrency.lockutils [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.875 232432 DEBUG nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] No waiting events found dispatching network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:00:15 compute-2 nova_compute[232428]: 2025-11-29 08:00:15.875 232432 WARNING nova.compute.manager [req-e71f4706-0308-4c90-88d9-7fe4cc07dec7 req-803019b1-2413-4ac6-a060-d241868481ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received unexpected event network-vif-plugged-8a3d21ee-d888-4edc-8f03-46b1a30ea49d for instance with vm_state active and task_state deleting.
Nov 29 08:00:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.398 232432 DEBUG nova.network.neutron [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.427 232432 INFO nova.compute.manager [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Took 1.29 seconds to deallocate network for instance.
Nov 29 08:00:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/329891320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:16 compute-2 ceph-mon[77138]: osdmap e235: 3 total, 3 up, 3 in
Nov 29 08:00:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2551938721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.549 232432 DEBUG nova.compute.manager [req-a7ee42c5-ec5d-416d-978c-81fcbd8e7fa2 req-9cb906be-2720-44ee-b29a-f2825ca35516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Received event network-vif-deleted-8a3d21ee-d888-4edc-8f03-46b1a30ea49d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.650 232432 INFO nova.compute.manager [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Took 0.22 seconds to detach 1 volumes for instance.
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.652 232432 DEBUG nova.compute.manager [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Deleting volume: 46608b98-4ab9-406c-9f99-bd236172e09a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.907 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.908 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:16 compute-2 nova_compute[232428]: 2025-11-29 08:00:16.963 232432 DEBUG oslo_concurrency.processutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2268793441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.476 232432 DEBUG oslo_concurrency.processutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.483 232432 DEBUG nova.compute.provider_tree [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.503 232432 DEBUG nova.scheduler.client.report [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.528 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:17 compute-2 ceph-mon[77138]: pgmap v1759: 305 pgs: 305 active+clean; 298 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 9.0 MiB/s wr, 281 op/s
Nov 29 08:00:17 compute-2 ceph-mon[77138]: osdmap e236: 3 total, 3 up, 3 in
Nov 29 08:00:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2268793441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1966415464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1966415464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.578 232432 INFO nova.scheduler.client.report [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Deleted allocations for instance 311dcf4c-2f7d-4167-89b8-38b38bc69a97
Nov 29 08:00:17 compute-2 nova_compute[232428]: 2025-11-29 08:00:17.645 232432 DEBUG oslo_concurrency.lockutils [None req-9da04463-8ff1-4672-82de-0ba0032f92cd 0cbc1fad0f0b481797ed6cafcd266d99 19b3dc1655684be39c6d284805874456 - - default default] Lock "311dcf4c-2f7d-4167-89b8-38b38bc69a97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:18 compute-2 nova_compute[232428]: 2025-11-29 08:00:18.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:19 compute-2 ceph-mon[77138]: pgmap v1761: 305 pgs: 305 active+clean; 298 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 10 MiB/s wr, 194 op/s
Nov 29 08:00:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:19 compute-2 nova_compute[232428]: 2025-11-29 08:00:19.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 08:00:21 compute-2 ceph-mon[77138]: pgmap v1762: 305 pgs: 305 active+clean; 260 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.4 MiB/s wr, 253 op/s
Nov 29 08:00:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:22.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:22 compute-2 nova_compute[232428]: 2025-11-29 08:00:22.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:22 compute-2 nova_compute[232428]: 2025-11-29 08:00:22.683 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:22 compute-2 ceph-mon[77138]: osdmap e237: 3 total, 3 up, 3 in
Nov 29 08:00:23 compute-2 sudo[258623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:23 compute-2 sudo[258623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:23 compute-2 sudo[258623]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:23 compute-2 sudo[258648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:23 compute-2 sudo[258648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:23 compute-2 sudo[258648]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 08:00:23 compute-2 nova_compute[232428]: 2025-11-29 08:00:23.899 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:24 compute-2 ceph-mon[77138]: pgmap v1764: 305 pgs: 305 active+clean; 227 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 583 KiB/s wr, 333 op/s
Nov 29 08:00:24 compute-2 ceph-mon[77138]: osdmap e238: 3 total, 3 up, 3 in
Nov 29 08:00:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:24 compute-2 nova_compute[232428]: 2025-11-29 08:00:24.974 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:25 compute-2 ceph-mon[77138]: pgmap v1766: 305 pgs: 305 active+clean; 219 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 511 KiB/s wr, 346 op/s
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.700 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.700 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.731 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.918 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.919 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.928 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:00:25 compute-2 nova_compute[232428]: 2025-11-29 08:00:25.928 232432 INFO nova.compute.claims [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.030 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3233852602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.506 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.514 232432 DEBUG nova.compute.provider_tree [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.538 232432 DEBUG nova.scheduler.client.report [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.576 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.577 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.699 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.700 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.753 232432 INFO nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.783 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:00:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3233852602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/522656446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.869 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.870 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.870 232432 INFO nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Creating image(s)
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.897 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.927 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.955 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:26 compute-2 nova_compute[232428]: 2025-11-29 08:00:26.960 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.001 232432 DEBUG nova.policy [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.040 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.041 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.041 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.042 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.068 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.073 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5f638465-c65b-4824-bedc-60f4b695402a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.390 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5f638465-c65b-4824-bedc-60f4b695402a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.461 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] resizing rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.590 232432 DEBUG nova.objects.instance [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f638465-c65b-4824-bedc-60f4b695402a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.617 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.618 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Ensure instance console log exists: /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.618 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.619 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:27 compute-2 nova_compute[232428]: 2025-11-29 08:00:27.619 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:27 compute-2 ceph-mon[77138]: pgmap v1767: 305 pgs: 305 active+clean; 138 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 467 KiB/s wr, 364 op/s
Nov 29 08:00:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3383625548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3383625548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-2 nova_compute[232428]: 2025-11-29 08:00:28.646 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Successfully created port: 9b28bb10-34a8-47b1-823d-fd7d39b929ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:00:28 compute-2 podman[258864]: 2025-11-29 08:00:28.65014643 +0000 UTC m=+0.053157404 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:00:28 compute-2 ceph-mon[77138]: pgmap v1768: 305 pgs: 305 active+clean; 138 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 24 KiB/s wr, 286 op/s
Nov 29 08:00:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3383625548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3383625548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-2 ceph-mon[77138]: osdmap e239: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-2 nova_compute[232428]: 2025-11-29 08:00:28.872 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403213.870557, 311dcf4c-2f7d-4167-89b8-38b38bc69a97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:00:28 compute-2 nova_compute[232428]: 2025-11-29 08:00:28.873 232432 INFO nova.compute.manager [-] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] VM Stopped (Lifecycle Event)
Nov 29 08:00:28 compute-2 nova_compute[232428]: 2025-11-29 08:00:28.892 232432 DEBUG nova.compute.manager [None req-d686c24d-072e-400e-8354-0f0e684636e0 - - - - - -] [instance: 311dcf4c-2f7d-4167-89b8-38b38bc69a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:28 compute-2 nova_compute[232428]: 2025-11-29 08:00:28.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:29 compute-2 nova_compute[232428]: 2025-11-29 08:00:29.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:30 compute-2 nova_compute[232428]: 2025-11-29 08:00:30.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:30.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:30 compute-2 nova_compute[232428]: 2025-11-29 08:00:30.490 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.179 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Successfully updated port: 9b28bb10-34a8-47b1-823d-fd7d39b929ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.253 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.253 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.253 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.393 232432 DEBUG nova.compute.manager [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-changed-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.393 232432 DEBUG nova.compute.manager [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Refreshing instance network info cache due to event network-changed-9b28bb10-34a8-47b1-823d-fd7d39b929ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.394 232432 DEBUG oslo_concurrency.lockutils [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:00:31 compute-2 nova_compute[232428]: 2025-11-29 08:00:31.556 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:00:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:32 compute-2 ceph-mon[77138]: pgmap v1770: 305 pgs: 305 active+clean; 157 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 995 KiB/s wr, 113 op/s
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.904 232432 DEBUG nova.network.neutron [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Updating instance_info_cache with network_info: [{"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.947 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.948 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Instance network_info: |[{"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.948 232432 DEBUG oslo_concurrency.lockutils [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.948 232432 DEBUG nova.network.neutron [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Refreshing network info cache for port 9b28bb10-34a8-47b1-823d-fd7d39b929ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.952 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Start _get_guest_xml network_info=[{"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.957 232432 WARNING nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.965 232432 DEBUG nova.virt.libvirt.host [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.966 232432 DEBUG nova.virt.libvirt.host [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.974 232432 DEBUG nova.virt.libvirt.host [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.974 232432 DEBUG nova.virt.libvirt.host [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.975 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.976 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.976 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.976 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.976 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.977 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.977 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.977 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.977 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.977 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.978 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.978 232432 DEBUG nova.virt.hardware [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:00:32 compute-2 nova_compute[232428]: 2025-11-29 08:00:32.980 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.236 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.236 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:00:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1349017655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:33 compute-2 ceph-mon[77138]: pgmap v1771: 305 pgs: 305 active+clean; 197 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.452 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.478 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.486 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:33 compute-2 podman[258928]: 2025-11-29 08:00:33.672247866 +0000 UTC m=+0.068610626 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1359111750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.996 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:33 compute-2 nova_compute[232428]: 2025-11-29 08:00:33.999 232432 DEBUG nova.virt.libvirt.vif [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-624518862',display_name='tempest-ImagesTestJSON-server-624518862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-624518862',id=61,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-ph7wosta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:26Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=5f638465-c65b-4824-bedc-60f4b695402a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.000 232432 DEBUG nova.network.os_vif_util [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.002 232432 DEBUG nova.network.os_vif_util [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.004 232432 DEBUG nova.objects.instance [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f638465-c65b-4824-bedc-60f4b695402a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.022 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <uuid>5f638465-c65b-4824-bedc-60f4b695402a</uuid>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <name>instance-0000003d</name>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:name>tempest-ImagesTestJSON-server-624518862</nova:name>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:00:32</nova:creationTime>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <nova:port uuid="9b28bb10-34a8-47b1-823d-fd7d39b929ac">
Nov 29 08:00:34 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <system>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="serial">5f638465-c65b-4824-bedc-60f4b695402a</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="uuid">5f638465-c65b-4824-bedc-60f4b695402a</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </system>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <os>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </os>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <features>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </features>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5f638465-c65b-4824-bedc-60f4b695402a_disk">
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5f638465-c65b-4824-bedc-60f4b695402a_disk.config">
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:00:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:0f:ea:c0"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <target dev="tap9b28bb10-34"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/console.log" append="off"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <video>
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </video>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:00:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:00:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:00:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:00:34 compute-2 nova_compute[232428]: </domain>
Nov 29 08:00:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.024 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Preparing to wait for external event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.025 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.025 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.025 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.026 232432 DEBUG nova.virt.libvirt.vif [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-624518862',display_name='tempest-ImagesTestJSON-server-624518862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-624518862',id=61,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-ph7wosta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:26Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=5f638465-c65b-4824-bedc-60f4b695402a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.027 232432 DEBUG nova.network.os_vif_util [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.027 232432 DEBUG nova.network.os_vif_util [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.028 232432 DEBUG os_vif [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.029 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.029 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.030 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.035 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.036 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b28bb10-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.037 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b28bb10-34, col_values=(('external_ids', {'iface-id': '9b28bb10-34a8-47b1-823d-fd7d39b929ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:ea:c0', 'vm-uuid': '5f638465-c65b-4824-bedc-60f4b695402a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:34 compute-2 NetworkManager[48993]: <info>  [1764403234.0413] manager: (tap9b28bb10-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.041 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.051 232432 INFO os_vif [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34')
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.122 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.123 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.123 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:0f:ea:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.124 232432 INFO nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Using config drive
Nov 29 08:00:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.155 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1349017655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1359111750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.631 232432 INFO nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Creating config drive at /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.644 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2olkydrv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.798 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2olkydrv" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.844 232432 DEBUG nova.storage.rbd_utils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 5f638465-c65b-4824-bedc-60f4b695402a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.850 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config 5f638465-c65b-4824-bedc-60f4b695402a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.891 232432 DEBUG nova.network.neutron [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Updated VIF entry in instance network info cache for port 9b28bb10-34a8-47b1-823d-fd7d39b929ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.893 232432 DEBUG nova.network.neutron [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Updating instance_info_cache with network_info: [{"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.921 232432 DEBUG oslo_concurrency.lockutils [req-95300599-bc0c-482c-b274-2aed7318ad86 req-01854bb8-c421-4870-aac3-bde8ba3b11a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5f638465-c65b-4824-bedc-60f4b695402a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:00:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:34 compute-2 nova_compute[232428]: 2025-11-29 08:00:34.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:35 compute-2 sudo[259028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:35 compute-2 sudo[259028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-2 sudo[259028]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-2 sudo[259054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:35 compute-2 sudo[259054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-2 sudo[259054]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-2 sudo[259079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:35 compute-2 sudo[259079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-2 sudo[259079]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-2 ceph-mon[77138]: pgmap v1772: 305 pgs: 305 active+clean; 212 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 448 KiB/s rd, 5.3 MiB/s wr, 146 op/s
Nov 29 08:00:35 compute-2 sudo[259104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:00:35 compute-2 sudo[259104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.665 232432 DEBUG oslo_concurrency.processutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config 5f638465-c65b-4824-bedc-60f4b695402a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.815s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.667 232432 INFO nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Deleting local config drive /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a/disk.config because it was imported into RBD.
Nov 29 08:00:35 compute-2 kernel: tap9b28bb10-34: entered promiscuous mode
Nov 29 08:00:35 compute-2 NetworkManager[48993]: <info>  [1764403235.7499] manager: (tap9b28bb10-34): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-2 ovn_controller[134375]: 2025-11-29T08:00:35Z|00203|binding|INFO|Claiming lport 9b28bb10-34a8-47b1-823d-fd7d39b929ac for this chassis.
Nov 29 08:00:35 compute-2 ovn_controller[134375]: 2025-11-29T08:00:35Z|00204|binding|INFO|9b28bb10-34a8-47b1-823d-fd7d39b929ac: Claiming fa:16:3e:0f:ea:c0 10.100.0.4
Nov 29 08:00:35 compute-2 systemd-udevd[259145]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:00:35 compute-2 systemd-machined[194747]: New machine qemu-27-instance-0000003d.
Nov 29 08:00:35 compute-2 NetworkManager[48993]: <info>  [1764403235.8033] device (tap9b28bb10-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:00:35 compute-2 NetworkManager[48993]: <info>  [1764403235.8045] device (tap9b28bb10-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:00:35 compute-2 systemd[1]: Started Virtual Machine qemu-27-instance-0000003d.
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-2 ovn_controller[134375]: 2025-11-29T08:00:35Z|00205|binding|INFO|Setting lport 9b28bb10-34a8-47b1-823d-fd7d39b929ac ovn-installed in OVS
Nov 29 08:00:35 compute-2 nova_compute[232428]: 2025-11-29 08:00:35.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-2 ovn_controller[134375]: 2025-11-29T08:00:35Z|00206|binding|INFO|Setting lport 9b28bb10-34a8-47b1-823d-fd7d39b929ac up in Southbound
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.853 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:ea:c0 10.100.0.4'], port_security=['fa:16:3e:0f:ea:c0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5f638465-c65b-4824-bedc-60f4b695402a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9b28bb10-34a8-47b1-823d-fd7d39b929ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.854 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9b28bb10-34a8-47b1-823d-fd7d39b929ac in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.855 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.871 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9c500d32-413d-497e-af28-3f4c788353b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.872 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.874 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.874 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7392ddb-df1a-4005-8aa9-84c0c30f08da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.875 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[00c1fafe-0a41-4149-9222-46d1c633e538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.890 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[71334bec-b719-464a-810a-048411d66482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.923 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0ef4ce-b37c-44f9-893d-fd72fb06b26f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.965 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaf3c8e-4c2d-4b00-9305-42ae5a4e3052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:35.971 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1ed0c6-8f08-4cc7-8972-ee66bbe6beb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:35 compute-2 NetworkManager[48993]: <info>  [1764403235.9732] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 08:00:35 compute-2 systemd-udevd[259156]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.015 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[10fe0248-15ba-4c9b-9763-1aad73ef5cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.019 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f51bad32-8fca-4d16-b798-71c2e074f39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 NetworkManager[48993]: <info>  [1764403236.0511] device (tap7471f45a-d0): carrier: link connected
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.062 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[966daed7-137b-4db4-8c2e-3624dbb284c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.082 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c74db6f9-718a-4566-94dc-a3622ab90456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620231, 'reachable_time': 21002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259197, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.102 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c15b7a70-da87-4024-b05f-a51104b68fe5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620231, 'tstamp': 620231}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259198, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.126 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2e212716-7908-4fc9-a186-9456a3d283f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620231, 'reachable_time': 21002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259199, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.162 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bb424dc5-5dc4-4511-97a6-b67223d56ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 sudo[259104]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.232 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[463d0e1c-9cdb-43cb-938f-aac2262b2c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.234 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.234 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.234 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:36 compute-2 NetworkManager[48993]: <info>  [1764403236.2368] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 08:00:36 compute-2 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.240 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:36 compute-2 ovn_controller[134375]: 2025-11-29T08:00:36Z|00207|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.252 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.256 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.257 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f182ef65-95cd-4fab-b43d-83cd9ba21d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.257 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:00:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:36.258 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:00:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.336 232432 DEBUG nova.compute.manager [req-541c4394-f445-4542-8e8f-c0d8053b29ca req-9bc18990-c22a-41c9-b098-5023937ebc9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.337 232432 DEBUG oslo_concurrency.lockutils [req-541c4394-f445-4542-8e8f-c0d8053b29ca req-9bc18990-c22a-41c9-b098-5023937ebc9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.337 232432 DEBUG oslo_concurrency.lockutils [req-541c4394-f445-4542-8e8f-c0d8053b29ca req-9bc18990-c22a-41c9-b098-5023937ebc9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.337 232432 DEBUG oslo_concurrency.lockutils [req-541c4394-f445-4542-8e8f-c0d8053b29ca req-9bc18990-c22a-41c9-b098-5023937ebc9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.337 232432 DEBUG nova.compute.manager [req-541c4394-f445-4542-8e8f-c0d8053b29ca req-9bc18990-c22a-41c9-b098-5023937ebc9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Processing event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.657 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403236.656727, 5f638465-c65b-4824-bedc-60f4b695402a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.657 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] VM Started (Lifecycle Event)
Nov 29 08:00:36 compute-2 podman[259282]: 2025-11-29 08:00:36.65944411 +0000 UTC m=+0.056230190 container create aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.661 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.671 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.675 232432 INFO nova.virt.libvirt.driver [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Instance spawned successfully.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.676 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:00:36 compute-2 systemd[1]: Started libpod-conmon-aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811.scope.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.699 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.705 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.709 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.709 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.710 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.710 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.711 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.711 232432 DEBUG nova.virt.libvirt.driver [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:00:36 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:00:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c735a4e4a94b88c16f0af34ca5e48760fd220f189e9ebc8437c004c215db3d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:36 compute-2 podman[259282]: 2025-11-29 08:00:36.631125574 +0000 UTC m=+0.027911674 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:00:36 compute-2 podman[259282]: 2025-11-29 08:00:36.742822707 +0000 UTC m=+0.139608807 container init aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.745 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.746 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403236.6577783, 5f638465-c65b-4824-bedc-60f4b695402a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.746 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] VM Paused (Lifecycle Event)
Nov 29 08:00:36 compute-2 podman[259282]: 2025-11-29 08:00:36.748942619 +0000 UTC m=+0.145728699 container start aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:00:36 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [NOTICE]   (259302) : New worker (259304) forked
Nov 29 08:00:36 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [NOTICE]   (259302) : Loading success.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.779 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.783 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403236.6703954, 5f638465-c65b-4824-bedc-60f4b695402a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.783 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] VM Resumed (Lifecycle Event)
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.801 232432 INFO nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Took 9.93 seconds to spawn the instance on the hypervisor.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.802 232432 DEBUG nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.803 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.809 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.846 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.874 232432 INFO nova.compute.manager [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Took 11.00 seconds to build instance.
Nov 29 08:00:36 compute-2 nova_compute[232428]: 2025-11-29 08:00:36.891 232432 DEBUG oslo_concurrency.lockutils [None req-6800c939-ad87-47bb-9ca7-2b9928d78817 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:37.021 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:37.022 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:00:37 compute-2 nova_compute[232428]: 2025-11-29 08:00:37.065 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:37 compute-2 ceph-mon[77138]: pgmap v1773: 305 pgs: 305 active+clean; 234 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 7.0 MiB/s wr, 161 op/s
Nov 29 08:00:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3998648191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:00:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:00:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.475 232432 DEBUG nova.compute.manager [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.476 232432 DEBUG oslo_concurrency.lockutils [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.476 232432 DEBUG oslo_concurrency.lockutils [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.476 232432 DEBUG oslo_concurrency.lockutils [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.477 232432 DEBUG nova.compute.manager [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] No waiting events found dispatching network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:00:38 compute-2 nova_compute[232428]: 2025-11-29 08:00:38.478 232432 WARNING nova.compute.manager [req-e3c47b88-e1f3-44cc-95c0-c68082a499e9 req-afb6143a-e0a0-4634-bec7-f789b98e9a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received unexpected event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac for instance with vm_state active and task_state None.
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4066026751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2155591358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:00:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:39 compute-2 nova_compute[232428]: 2025-11-29 08:00:39.040 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:39 compute-2 nova_compute[232428]: 2025-11-29 08:00:39.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:39 compute-2 nova_compute[232428]: 2025-11-29 08:00:39.469 232432 DEBUG nova.compute.manager [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:39 compute-2 nova_compute[232428]: 2025-11-29 08:00:39.522 232432 INFO nova.compute.manager [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] instance snapshotting
Nov 29 08:00:39 compute-2 ceph-mon[77138]: pgmap v1774: 305 pgs: 305 active+clean; 234 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 7.0 MiB/s wr, 161 op/s
Nov 29 08:00:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:39 compute-2 nova_compute[232428]: 2025-11-29 08:00:39.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:40 compute-2 nova_compute[232428]: 2025-11-29 08:00:40.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:40 compute-2 nova_compute[232428]: 2025-11-29 08:00:40.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:00:40 compute-2 nova_compute[232428]: 2025-11-29 08:00:40.243 232432 INFO nova.virt.libvirt.driver [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Beginning live snapshot process
Nov 29 08:00:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:40.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:40 compute-2 nova_compute[232428]: 2025-11-29 08:00:40.382 232432 DEBUG nova.virt.libvirt.imagebackend [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 08:00:40 compute-2 nova_compute[232428]: 2025-11-29 08:00:40.683 232432 DEBUG nova.storage.rbd_utils [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(f9a9719f6a864b579a0d4e13859d8e1c) on rbd image(5f638465-c65b-4824-bedc-60f4b695402a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:41 compute-2 ceph-mon[77138]: pgmap v1775: 305 pgs: 305 active+clean; 241 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 173 op/s
Nov 29 08:00:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2556255479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.237 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.237 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.327 232432 DEBUG nova.storage.rbd_utils [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning vms/5f638465-c65b-4824-bedc-60f4b695402a_disk@f9a9719f6a864b579a0d4e13859d8e1c to images/bb915321-974b-4310-80ce-22fe787becec clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.472 232432 DEBUG nova.storage.rbd_utils [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] flattening images/bb915321-974b-4310-80ce-22fe787becec flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:00:41 compute-2 podman[259441]: 2025-11-29 08:00:41.692297452 +0000 UTC m=+0.088191730 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:00:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/770138657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.725 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.802 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.802 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.878 232432 DEBUG nova.storage.rbd_utils [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(f9a9719f6a864b579a0d4e13859d8e1c) on rbd image(5f638465-c65b-4824-bedc-60f4b695402a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.996 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.998 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4416MB free_disk=20.87720489501953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.998 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:41 compute-2 nova_compute[232428]: 2025-11-29 08:00:41.998 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.077 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5f638465-c65b-4824-bedc-60f4b695402a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.077 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.077 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.115 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 08:00:42 compute-2 ceph-mon[77138]: osdmap e240: 3 total, 3 up, 3 in
Nov 29 08:00:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/770138657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3165533345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:42.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.315 232432 DEBUG nova.storage.rbd_utils [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(snap) on rbd image(bb915321-974b-4310-80ce-22fe787becec) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:00:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1584390984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.601 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.607 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.630 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.649 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:00:42 compute-2 nova_compute[232428]: 2025-11-29 08:00:42.650 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:00:43.025 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:43 compute-2 sudo[259528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:43 compute-2 sudo[259528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:43 compute-2 sudo[259528]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:43 compute-2 sudo[259553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:43 compute-2 sudo[259553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:43 compute-2 sudo[259553]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.528 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.529 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.554 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.622 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.623 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.631 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.632 232432 INFO nova.compute.claims [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:00:43 compute-2 nova_compute[232428]: 2025-11-29 08:00:43.798 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.042 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4087350325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.267 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.276 232432 DEBUG nova.compute.provider_tree [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:00:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:44.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.321 232432 DEBUG nova.scheduler.client.report [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:00:44 compute-2 sshd-session[259598]: Invalid user sol from 45.148.10.240 port 42954
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.406 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.408 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:00:44 compute-2 sshd-session[259598]: Connection closed by invalid user sol 45.148.10.240 port 42954 [preauth]
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.469 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.470 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.499 232432 INFO nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.521 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.625 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.628 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:00:44 compute-2 nova_compute[232428]: 2025-11-29 08:00:44.629 232432 INFO nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Creating image(s)
Nov 29 08:00:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:45 compute-2 ceph-mon[77138]: pgmap v1777: 305 pgs: 305 active+clean; 268 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 213 op/s
Nov 29 08:00:45 compute-2 ceph-mon[77138]: osdmap e241: 3 total, 3 up, 3 in
Nov 29 08:00:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2429097201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1584390984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3539241879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.490 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.529 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.559 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.563 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.600 232432 DEBUG nova.policy [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fccef0dffe5046debab8211997669052', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9cfb4837af3c440e93179ccec8e1811d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.603 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.639 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.640 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.641 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.641 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.669 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:46 compute-2 nova_compute[232428]: 2025-11-29 08:00:46.674 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:47 compute-2 nova_compute[232428]: 2025-11-29 08:00:47.767 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Successfully created port: 778f8d73-c000-4576-9ffa-a7945446e0ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:00:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:48.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.459 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Successfully updated port: 778f8d73-c000-4576-9ffa-a7945446e0ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.474 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.475 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquired lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.475 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.556 232432 DEBUG nova.compute.manager [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-changed-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.556 232432 DEBUG nova.compute.manager [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Refreshing instance network info cache due to event network-changed-778f8d73-c000-4576-9ffa-a7945446e0ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.556 232432 DEBUG oslo_concurrency.lockutils [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.598 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:00:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:49 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 08:00:49 compute-2 nova_compute[232428]: 2025-11-29 08:00:49.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:50.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:50 compute-2 nova_compute[232428]: 2025-11-29 08:00:50.642 232432 DEBUG nova.network.neutron [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updating instance_info_cache with network_info: [{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:50 compute-2 nova_compute[232428]: 2025-11-29 08:00:50.668 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Releasing lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:00:50 compute-2 nova_compute[232428]: 2025-11-29 08:00:50.668 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Instance network_info: |[{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:00:50 compute-2 nova_compute[232428]: 2025-11-29 08:00:50.669 232432 DEBUG oslo_concurrency.lockutils [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:00:50 compute-2 nova_compute[232428]: 2025-11-29 08:00:50.669 232432 DEBUG nova.network.neutron [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Refreshing network info cache for port 778f8d73-c000-4576-9ffa-a7945446e0ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:00:51 compute-2 ceph-mon[77138]: pgmap v1779: 305 pgs: 305 active+clean; 310 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Nov 29 08:00:51 compute-2 ceph-mon[77138]: osdmap e242: 3 total, 3 up, 3 in
Nov 29 08:00:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:52 compute-2 nova_compute[232428]: 2025-11-29 08:00:52.599 232432 DEBUG nova.network.neutron [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updated VIF entry in instance network info cache for port 778f8d73-c000-4576-9ffa-a7945446e0ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:00:52 compute-2 nova_compute[232428]: 2025-11-29 08:00:52.600 232432 DEBUG nova.network.neutron [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updating instance_info_cache with network_info: [{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:52 compute-2 nova_compute[232428]: 2025-11-29 08:00:52.622 232432 DEBUG oslo_concurrency.lockutils [req-1b4f0bac-aef3-468f-bedc-0382d89fa80c req-7ec31d54-fd56-4091-94f9-8cbfea10dd23 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:00:54 compute-2 nova_compute[232428]: 2025-11-29 08:00:54.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4087350325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:54 compute-2 ceph-mon[77138]: pgmap v1781: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.3 MiB/s wr, 286 op/s
Nov 29 08:00:54 compute-2 ceph-mon[77138]: pgmap v1782: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 126 op/s
Nov 29 08:00:54 compute-2 ceph-mon[77138]: pgmap v1783: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 104 op/s
Nov 29 08:00:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:54.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:54 compute-2 nova_compute[232428]: 2025-11-29 08:00:54.671 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.997s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:54 compute-2 nova_compute[232428]: 2025-11-29 08:00:54.771 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] resizing rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:00:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:54 compute-2 nova_compute[232428]: 2025-11-29 08:00:54.987 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.146 232432 DEBUG nova.objects.instance [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lazy-loading 'migration_context' on Instance uuid 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.163 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.163 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Ensure instance console log exists: /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.164 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.164 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.164 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.167 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Start _get_guest_xml network_info=[{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.172 232432 WARNING nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.177 232432 DEBUG nova.virt.libvirt.host [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.178 232432 DEBUG nova.virt.libvirt.host [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.181 232432 DEBUG nova.virt.libvirt.host [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.182 232432 DEBUG nova.virt.libvirt.host [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.184 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.184 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.184 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.185 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.185 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.185 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.185 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.185 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.186 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.186 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.186 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.186 232432 DEBUG nova.virt.hardware [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.189 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:55 compute-2 ovn_controller[134375]: 2025-11-29T08:00:55Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:ea:c0 10.100.0.4
Nov 29 08:00:55 compute-2 ovn_controller[134375]: 2025-11-29T08:00:55Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:ea:c0 10.100.0.4
Nov 29 08:00:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:55 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2129874481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:55 compute-2 ceph-mon[77138]: pgmap v1784: 305 pgs: 305 active+clean; 346 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 104 op/s
Nov 29 08:00:55 compute-2 ceph-mon[77138]: pgmap v1785: 305 pgs: 305 active+clean; 351 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 86 op/s
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.640 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.673 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:55 compute-2 nova_compute[232428]: 2025-11-29 08:00:55.679 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3293358051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:00:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:00:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 08:00:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:00:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:00:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.496 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.817s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.498 232432 DEBUG nova.virt.libvirt.vif [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=63,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERGBfgA9qTqpGYP1UZ461aaKd4Io9zBf7WJ7nmmTBlIyLugi+Y1cFtKGTdfwdX8j1bvOMvNJuL6NYhsRx6mPAL1WoWkednVReV7dvjxu/jg7IvYoHFnzQZoIlNmmLG/1w==',key_name='tempest-keypair-939772258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9cfb4837af3c440e93179ccec8e1811d',ramdisk_id='',reservation_id='r-e000piny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-434830629',owner_user_name='tempest-ServersTestFqdnHostnames-434830629-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fccef0dffe5046debab8211997669052',uuid=08c645cb-08cb-4b4d-b0e8-37da8adc8c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.499 232432 DEBUG nova.network.os_vif_util [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converting VIF {"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.500 232432 DEBUG nova.network.os_vif_util [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.501 232432 DEBUG nova.objects.instance [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lazy-loading 'pci_devices' on Instance uuid 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.592 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <uuid>08c645cb-08cb-4b4d-b0e8-37da8adc8c02</uuid>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <name>instance-0000003f</name>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:name>guest-instance-1.domain.com</nova:name>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:00:55</nova:creationTime>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:user uuid="fccef0dffe5046debab8211997669052">tempest-ServersTestFqdnHostnames-434830629-project-member</nova:user>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:project uuid="9cfb4837af3c440e93179ccec8e1811d">tempest-ServersTestFqdnHostnames-434830629</nova:project>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <nova:port uuid="778f8d73-c000-4576-9ffa-a7945446e0ac">
Nov 29 08:00:58 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <system>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="serial">08c645cb-08cb-4b4d-b0e8-37da8adc8c02</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="uuid">08c645cb-08cb-4b4d-b0e8-37da8adc8c02</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </system>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <os>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </os>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <features>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </features>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk">
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config">
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:00:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:a8:2f:8a"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <target dev="tap778f8d73-c0"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/console.log" append="off"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <video>
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </video>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:00:58 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:00:58 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:00:58 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:00:58 compute-2 nova_compute[232428]: </domain>
Nov 29 08:00:58 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.594 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Preparing to wait for external event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.594 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.594 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.594 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.595 232432 DEBUG nova.virt.libvirt.vif [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=63,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERGBfgA9qTqpGYP1UZ461aaKd4Io9zBf7WJ7nmmTBlIyLugi+Y1cFtKGTdfwdX8j1bvOMvNJuL6NYhsRx6mPAL1WoWkednVReV7dvjxu/jg7IvYoHFnzQZoIlNmmLG/1w==',key_name='tempest-keypair-939772258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9cfb4837af3c440e93179ccec8e1811d',ramdisk_id='',reservation_id='r-e000piny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-434830629',owner_user_name='tempest-ServersTestFqdnHostnames-434830629-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fccef0dffe5046debab8211997669052',uuid=08c645cb-08cb-4b4d-b0e8-37da8adc8c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.595 232432 DEBUG nova.network.os_vif_util [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converting VIF {"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.596 232432 DEBUG nova.network.os_vif_util [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.596 232432 DEBUG os_vif [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.597 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.598 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.601 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.602 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap778f8d73-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.602 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap778f8d73-c0, col_values=(('external_ids', {'iface-id': '778f8d73-c000-4576-9ffa-a7945446e0ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:2f:8a', 'vm-uuid': '08c645cb-08cb-4b4d-b0e8-37da8adc8c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:58 compute-2 NetworkManager[48993]: <info>  [1764403258.6062] manager: (tap778f8d73-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.607 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.619 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:58 compute-2 nova_compute[232428]: 2025-11-29 08:00:58.620 232432 INFO os_vif [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0')
Nov 29 08:00:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2129874481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.127 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.127 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.128 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] No VIF found with MAC fa:16:3e:a8:2f:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.128 232432 INFO nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Using config drive
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.158 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:00:59 compute-2 podman[259858]: 2025-11-29 08:00:59.681078575 +0000 UTC m=+0.077284087 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:00:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:59 compute-2 nova_compute[232428]: 2025-11-29 08:00:59.990 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:00.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:00 compute-2 nova_compute[232428]: 2025-11-29 08:01:00.411 232432 INFO nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Creating config drive at /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config
Nov 29 08:01:00 compute-2 nova_compute[232428]: 2025-11-29 08:01:00.427 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7a0akf6p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:00 compute-2 nova_compute[232428]: 2025-11-29 08:01:00.590 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7a0akf6p" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:00 compute-2 nova_compute[232428]: 2025-11-29 08:01:00.643 232432 DEBUG nova.storage.rbd_utils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] rbd image 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:00 compute-2 nova_compute[232428]: 2025-11-29 08:01:00.649 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:01 compute-2 ceph-mon[77138]: pgmap v1786: 305 pgs: 305 active+clean; 393 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.1 MiB/s wr, 103 op/s
Nov 29 08:01:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3293358051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:01 compute-2 ceph-mon[77138]: pgmap v1787: 305 pgs: 305 active+clean; 393 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 2.9 MiB/s wr, 53 op/s
Nov 29 08:01:01 compute-2 ceph-mon[77138]: osdmap e243: 3 total, 3 up, 3 in
Nov 29 08:01:01 compute-2 nova_compute[232428]: 2025-11-29 08:01:01.968 232432 INFO nova.virt.libvirt.driver [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Snapshot image upload complete
Nov 29 08:01:01 compute-2 nova_compute[232428]: 2025-11-29 08:01:01.969 232432 INFO nova.compute.manager [None req-7a8817cc-a8b7-4bcf-b9cf-e6c40dc4641d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Took 22.44 seconds to snapshot the instance on the hypervisor.
Nov 29 08:01:01 compute-2 CROND[259920]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 08:01:01 compute-2 run-parts[259923]: (/etc/cron.hourly) starting 0anacron
Nov 29 08:01:02 compute-2 run-parts[259930]: (/etc/cron.hourly) finished 0anacron
Nov 29 08:01:02 compute-2 CROND[259919]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 08:01:02 compute-2 sudo[259929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:02 compute-2 sudo[259929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:02 compute-2 sudo[259929]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:02 compute-2 sudo[259955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:01:02 compute-2 sudo[259955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:02 compute-2 sudo[259955]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:02 compute-2 nova_compute[232428]: 2025-11-29 08:01:02.680 232432 DEBUG oslo_concurrency.processutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config 08c645cb-08cb-4b4d-b0e8-37da8adc8c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:02 compute-2 nova_compute[232428]: 2025-11-29 08:01:02.682 232432 INFO nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Deleting local config drive /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02/disk.config because it was imported into RBD.
Nov 29 08:01:02 compute-2 ceph-mon[77138]: pgmap v1789: 305 pgs: 305 active+clean; 407 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 4.6 MiB/s wr, 100 op/s
Nov 29 08:01:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:01:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:01:02 compute-2 kernel: tap778f8d73-c0: entered promiscuous mode
Nov 29 08:01:02 compute-2 NetworkManager[48993]: <info>  [1764403262.7561] manager: (tap778f8d73-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 08:01:02 compute-2 ovn_controller[134375]: 2025-11-29T08:01:02Z|00208|binding|INFO|Claiming lport 778f8d73-c000-4576-9ffa-a7945446e0ac for this chassis.
Nov 29 08:01:02 compute-2 ovn_controller[134375]: 2025-11-29T08:01:02Z|00209|binding|INFO|778f8d73-c000-4576-9ffa-a7945446e0ac: Claiming fa:16:3e:a8:2f:8a 10.100.0.3
Nov 29 08:01:02 compute-2 nova_compute[232428]: 2025-11-29 08:01:02.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.768 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:2f:8a 10.100.0.3'], port_security=['fa:16:3e:a8:2f:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08c645cb-08cb-4b4d-b0e8-37da8adc8c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e18b65-6dc6-41e6-a291-e35356bef842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9cfb4837af3c440e93179ccec8e1811d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37099615-e931-4792-9383-c29f945c5d7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eabb894-c6ca-4097-b275-9b0ea9bf27d9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=778f8d73-c000-4576-9ffa-a7945446e0ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.770 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 778f8d73-c000-4576-9ffa-a7945446e0ac in datapath 40e18b65-6dc6-41e6-a291-e35356bef842 bound to our chassis
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.772 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40e18b65-6dc6-41e6-a291-e35356bef842
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.788 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f03c043c-b0ab-43d9-bdd7-8e16f04138a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.789 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40e18b65-61 in ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.791 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40e18b65-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.792 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c4846ca7-3395-4142-a98d-7f121bf30c4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfbcc9e-fa5b-40c5-ae5e-431cfce06509]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 systemd-udevd[259993]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.808 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[9b15954b-9d8e-4d73-90f6-576de7ec165c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 systemd-machined[194747]: New machine qemu-28-instance-0000003f.
Nov 29 08:01:02 compute-2 NetworkManager[48993]: <info>  [1764403262.8191] device (tap778f8d73-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:01:02 compute-2 NetworkManager[48993]: <info>  [1764403262.8202] device (tap778f8d73-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:01:02 compute-2 systemd[1]: Started Virtual Machine qemu-28-instance-0000003f.
Nov 29 08:01:02 compute-2 nova_compute[232428]: 2025-11-29 08:01:02.825 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.830 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b84ba1f3-f533-4fe4-ba5f-ef9999e148a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_controller[134375]: 2025-11-29T08:01:02Z|00210|binding|INFO|Setting lport 778f8d73-c000-4576-9ffa-a7945446e0ac ovn-installed in OVS
Nov 29 08:01:02 compute-2 ovn_controller[134375]: 2025-11-29T08:01:02Z|00211|binding|INFO|Setting lport 778f8d73-c000-4576-9ffa-a7945446e0ac up in Southbound
Nov 29 08:01:02 compute-2 nova_compute[232428]: 2025-11-29 08:01:02.833 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.864 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d9390675-66a7-4e56-972f-a8038d84118f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.871 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cd46a116-3fa0-4237-8874-5a8620f1ca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 NetworkManager[48993]: <info>  [1764403262.8722] manager: (tap40e18b65-60): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 08:01:02 compute-2 systemd-udevd[259998]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.906 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0d51930d-9869-4105-9eff-d149a6538c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.910 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[652f82d0-1ca4-445b-8864-e932a7752028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 NetworkManager[48993]: <info>  [1764403262.9384] device (tap40e18b65-60): carrier: link connected
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.950 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e213552b-6a17-4ca2-b218-a34018428b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.972 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f27652c8-6f8e-429e-9b95-750d112c641c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e18b65-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:34:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622920, 'reachable_time': 28676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260026, 'error': None, 'target': 'ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:02.993 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3adb9f41-f120-4daa-87b8-54bf1b89e515]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:3483'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622920, 'tstamp': 622920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260027, 'error': None, 'target': 'ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.030 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ca90a0e6-6c48-41a2-b147-b2e627c718d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e18b65-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:34:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622920, 'reachable_time': 28676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260028, 'error': None, 'target': 'ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.080 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae19f5f-48aa-4c84-896a-939afc0b0582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.189 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b20a88-84fa-47fa-9c35-68ac7d1a8257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.191 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e18b65-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.191 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.192 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40e18b65-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.193 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:03 compute-2 kernel: tap40e18b65-60: entered promiscuous mode
Nov 29 08:01:03 compute-2 NetworkManager[48993]: <info>  [1764403263.1943] manager: (tap40e18b65-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.196 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40e18b65-60, col_values=(('external_ids', {'iface-id': '3231bf86-2685-4c72-bec1-e76752b95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.197 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:03 compute-2 ovn_controller[134375]: 2025-11-29T08:01:03Z|00212|binding|INFO|Releasing lport 3231bf86-2685-4c72-bec1-e76752b95f3b from this chassis (sb_readonly=0)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.212 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.213 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40e18b65-6dc6-41e6-a291-e35356bef842.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40e18b65-6dc6-41e6-a291-e35356bef842.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1fd985-2222-4ed6-8adc-f70483892b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.215 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-40e18b65-6dc6-41e6-a291-e35356bef842
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/40e18b65-6dc6-41e6-a291-e35356bef842.pid.haproxy
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 40e18b65-6dc6-41e6-a291-e35356bef842
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.215 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842', 'env', 'PROCESS_TAG=haproxy-40e18b65-6dc6-41e6-a291-e35356bef842', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40e18b65-6dc6-41e6-a291-e35356bef842.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.306 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.307 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:03.308 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.380 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403263.3799753, 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.381 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] VM Started (Lifecycle Event)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.412 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.416 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403263.3829749, 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.417 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] VM Paused (Lifecycle Event)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.435 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.440 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.455 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:03 compute-2 sudo[260081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:03 compute-2 sudo[260081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:03 compute-2 sudo[260081]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:03 compute-2 sudo[260119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:03 compute-2 sudo[260119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:03 compute-2 sudo[260119]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:03 compute-2 podman[260150]: 2025-11-29 08:01:03.597888463 +0000 UTC m=+0.053060450 container create c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:03 compute-2 systemd[1]: Started libpod-conmon-c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d.scope.
Nov 29 08:01:03 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:01:03 compute-2 podman[260150]: 2025-11-29 08:01:03.570426744 +0000 UTC m=+0.025598752 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:01:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf73530930587f719ffe34df07491dea477a81d78ccc8da310d8923c81082175/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:03 compute-2 podman[260150]: 2025-11-29 08:01:03.68539335 +0000 UTC m=+0.140565377 container init c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:01:03 compute-2 podman[260150]: 2025-11-29 08:01:03.691181321 +0000 UTC m=+0.146353318 container start c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:01:03 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [NOTICE]   (260171) : New worker (260173) forked
Nov 29 08:01:03 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [NOTICE]   (260171) : Loading success.
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.778 232432 DEBUG nova.compute.manager [req-aecbbfd0-e13b-4fea-89a7-c6ccd9d62942 req-5ac6c9ee-258c-4b35-b3e7-74d8c58b5282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.778 232432 DEBUG oslo_concurrency.lockutils [req-aecbbfd0-e13b-4fea-89a7-c6ccd9d62942 req-5ac6c9ee-258c-4b35-b3e7-74d8c58b5282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.778 232432 DEBUG oslo_concurrency.lockutils [req-aecbbfd0-e13b-4fea-89a7-c6ccd9d62942 req-5ac6c9ee-258c-4b35-b3e7-74d8c58b5282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.779 232432 DEBUG oslo_concurrency.lockutils [req-aecbbfd0-e13b-4fea-89a7-c6ccd9d62942 req-5ac6c9ee-258c-4b35-b3e7-74d8c58b5282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.779 232432 DEBUG nova.compute.manager [req-aecbbfd0-e13b-4fea-89a7-c6ccd9d62942 req-5ac6c9ee-258c-4b35-b3e7-74d8c58b5282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Processing event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.779 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.783 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403263.7836597, 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.784 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] VM Resumed (Lifecycle Event)
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.785 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.789 232432 INFO nova.virt.libvirt.driver [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Instance spawned successfully.
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.789 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:01:03 compute-2 ceph-mon[77138]: pgmap v1790: 305 pgs: 305 active+clean; 411 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.818 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.824 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.826 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.826 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.827 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.827 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.827 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.828 232432 DEBUG nova.virt.libvirt.driver [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.855 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.891 232432 INFO nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Took 19.26 seconds to spawn the instance on the hypervisor.
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.891 232432 DEBUG nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.955 232432 INFO nova.compute.manager [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Took 20.35 seconds to build instance.
Nov 29 08:01:03 compute-2 nova_compute[232428]: 2025-11-29 08:01:03.972 232432 DEBUG oslo_concurrency.lockutils [None req-66265e33-9752-4473-8d9c-41f7075f2acd fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:04.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:04 compute-2 podman[260182]: 2025-11-29 08:01:04.748118436 +0000 UTC m=+0.130835452 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 08:01:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:04 compute-2 ceph-mon[77138]: pgmap v1791: 305 pgs: 305 active+clean; 411 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 174 op/s
Nov 29 08:01:04 compute-2 nova_compute[232428]: 2025-11-29 08:01:04.992 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.873 232432 DEBUG nova.compute.manager [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.874 232432 DEBUG oslo_concurrency.lockutils [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.874 232432 DEBUG oslo_concurrency.lockutils [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.874 232432 DEBUG oslo_concurrency.lockutils [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.874 232432 DEBUG nova.compute.manager [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] No waiting events found dispatching network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:05 compute-2 nova_compute[232428]: 2025-11-29 08:01:05.874 232432 WARNING nova.compute.manager [req-9866ca40-6af3-48eb-a580-c69eea1144ae req-1f3971cc-6796-48bd-b655-8fc8cf90b6d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received unexpected event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac for instance with vm_state active and task_state None.
Nov 29 08:01:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:07 compute-2 ceph-mon[77138]: pgmap v1792: 305 pgs: 305 active+clean; 417 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 29 08:01:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/708369811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.071 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:08 compute-2 NetworkManager[48993]: <info>  [1764403268.0749] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 08:01:08 compute-2 NetworkManager[48993]: <info>  [1764403268.0759] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 08:01:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.188 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:08 compute-2 ovn_controller[134375]: 2025-11-29T08:01:08Z|00213|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 08:01:08 compute-2 ovn_controller[134375]: 2025-11-29T08:01:08Z|00214|binding|INFO|Releasing lport 3231bf86-2685-4c72-bec1-e76752b95f3b from this chassis (sb_readonly=0)
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.319 232432 DEBUG nova.compute.manager [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-changed-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.319 232432 DEBUG nova.compute.manager [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Refreshing instance network info cache due to event network-changed-778f8d73-c000-4576-9ffa-a7945446e0ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.320 232432 DEBUG oslo_concurrency.lockutils [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.320 232432 DEBUG oslo_concurrency.lockutils [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.320 232432 DEBUG nova.network.neutron [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Refreshing network info cache for port 778f8d73-c000-4576-9ffa-a7945446e0ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:01:08 compute-2 nova_compute[232428]: 2025-11-29 08:01:08.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:09 compute-2 ceph-mon[77138]: pgmap v1793: 305 pgs: 305 active+clean; 417 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 29 08:01:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/299655272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:10 compute-2 nova_compute[232428]: 2025-11-29 08:01:10.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/951801490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:10.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:10 compute-2 nova_compute[232428]: 2025-11-29 08:01:10.779 232432 DEBUG nova.network.neutron [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updated VIF entry in instance network info cache for port 778f8d73-c000-4576-9ffa-a7945446e0ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:01:10 compute-2 nova_compute[232428]: 2025-11-29 08:01:10.779 232432 DEBUG nova.network.neutron [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updating instance_info_cache with network_info: [{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:10 compute-2 nova_compute[232428]: 2025-11-29 08:01:10.806 232432 DEBUG oslo_concurrency.lockutils [req-71cbf85a-05b9-432b-afbd-1f2b0cfc0f77 req-16bb2d44-f7ff-4640-b91f-0ef7eb1d9d68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:11 compute-2 ceph-mon[77138]: pgmap v1794: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 120 KiB/s wr, 171 op/s
Nov 29 08:01:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/826107425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:12 compute-2 podman[260207]: 2025-11-29 08:01:12.732096363 +0000 UTC m=+0.125196326 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 08:01:13 compute-2 ceph-mon[77138]: pgmap v1795: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 118 KiB/s wr, 199 op/s
Nov 29 08:01:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/377233110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:13 compute-2 nova_compute[232428]: 2025-11-29 08:01:13.608 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:14.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:15 compute-2 nova_compute[232428]: 2025-11-29 08:01:15.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:15 compute-2 ceph-mon[77138]: pgmap v1796: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 123 KiB/s wr, 155 op/s
Nov 29 08:01:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:16.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:17 compute-2 ceph-mon[77138]: pgmap v1797: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 65 KiB/s wr, 185 op/s
Nov 29 08:01:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1927726628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:18 compute-2 nova_compute[232428]: 2025-11-29 08:01:18.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:19 compute-2 ovn_controller[134375]: 2025-11-29T08:01:19Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:2f:8a 10.100.0.3
Nov 29 08:01:19 compute-2 ovn_controller[134375]: 2025-11-29T08:01:19Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:2f:8a 10.100.0.3
Nov 29 08:01:19 compute-2 ceph-mon[77138]: pgmap v1798: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 16 KiB/s wr, 143 op/s
Nov 29 08:01:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:20 compute-2 nova_compute[232428]: 2025-11-29 08:01:20.010 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:01:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:01:20 compute-2 nova_compute[232428]: 2025-11-29 08:01:20.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:21 compute-2 ceph-mon[77138]: pgmap v1799: 305 pgs: 305 active+clean; 393 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 17 KiB/s wr, 168 op/s
Nov 29 08:01:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1550873857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:22.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.349 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.349 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.350 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.350 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.350 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.353 232432 INFO nova.compute.manager [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Terminating instance
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.355 232432 DEBUG nova.compute.manager [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:01:23 compute-2 ceph-mon[77138]: pgmap v1800: 305 pgs: 305 active+clean; 396 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 254 op/s
Nov 29 08:01:23 compute-2 ceph-mon[77138]: osdmap e244: 3 total, 3 up, 3 in
Nov 29 08:01:23 compute-2 kernel: tap9b28bb10-34 (unregistering): left promiscuous mode
Nov 29 08:01:23 compute-2 NetworkManager[48993]: <info>  [1764403283.4277] device (tap9b28bb10-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:01:23 compute-2 ovn_controller[134375]: 2025-11-29T08:01:23Z|00215|binding|INFO|Releasing lport 9b28bb10-34a8-47b1-823d-fd7d39b929ac from this chassis (sb_readonly=0)
Nov 29 08:01:23 compute-2 ovn_controller[134375]: 2025-11-29T08:01:23Z|00216|binding|INFO|Setting lport 9b28bb10-34a8-47b1-823d-fd7d39b929ac down in Southbound
Nov 29 08:01:23 compute-2 ovn_controller[134375]: 2025-11-29T08:01:23Z|00217|binding|INFO|Removing iface tap9b28bb10-34 ovn-installed in OVS
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.494 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.497 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.518 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:ea:c0 10.100.0.4'], port_security=['fa:16:3e:0f:ea:c0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5f638465-c65b-4824-bedc-60f4b695402a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9b28bb10-34a8-47b1-823d-fd7d39b929ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.520 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9b28bb10-34a8-47b1-823d-fd7d39b929ac in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.522 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.524 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7f1e2d-8568-4881-949f-5fd09b41b6bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.525 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 29 08:01:23 compute-2 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003d.scope: Consumed 16.473s CPU time.
Nov 29 08:01:23 compute-2 systemd-machined[194747]: Machine qemu-27-instance-0000003d terminated.
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.605 232432 INFO nova.virt.libvirt.driver [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Instance destroyed successfully.
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.605 232432 DEBUG nova.objects.instance [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid 5f638465-c65b-4824-bedc-60f4b695402a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.615 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.621 232432 DEBUG nova.virt.libvirt.vif [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-624518862',display_name='tempest-ImagesTestJSON-server-624518862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-624518862',id=61,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-ph7wosta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:02Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=5f638465-c65b-4824-bedc-60f4b695402a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.621 232432 DEBUG nova.network.os_vif_util [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "address": "fa:16:3e:0f:ea:c0", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b28bb10-34", "ovs_interfaceid": "9b28bb10-34a8-47b1-823d-fd7d39b929ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.622 232432 DEBUG nova.network.os_vif_util [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.623 232432 DEBUG os_vif [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.625 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.626 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b28bb10-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.628 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.633 232432 INFO os_vif [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:ea:c0,bridge_name='br-int',has_traffic_filtering=True,id=9b28bb10-34a8-47b1-823d-fd7d39b929ac,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b28bb10-34')
Nov 29 08:01:23 compute-2 sudo[260253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:23 compute-2 sudo[260253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:23 compute-2 sudo[260253]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.700 232432 DEBUG nova.compute.manager [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-unplugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.700 232432 DEBUG oslo_concurrency.lockutils [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.701 232432 DEBUG oslo_concurrency.lockutils [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.701 232432 DEBUG oslo_concurrency.lockutils [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.701 232432 DEBUG nova.compute.manager [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] No waiting events found dispatching network-vif-unplugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.701 232432 DEBUG nova.compute.manager [req-6fbfa298-d90d-4e7d-bc4a-964880c6ef27 req-b5127426-0816-4a52-be02-5a375faa366e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-unplugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:01:23 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [NOTICE]   (259302) : haproxy version is 2.8.14-c23fe91
Nov 29 08:01:23 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [NOTICE]   (259302) : path to executable is /usr/sbin/haproxy
Nov 29 08:01:23 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [WARNING]  (259302) : Exiting Master process...
Nov 29 08:01:23 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [ALERT]    (259302) : Current worker (259304) exited with code 143 (Terminated)
Nov 29 08:01:23 compute-2 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[259298]: [WARNING]  (259302) : All workers exited. Exiting... (0)
Nov 29 08:01:23 compute-2 systemd[1]: libpod-aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811.scope: Deactivated successfully.
Nov 29 08:01:23 compute-2 sudo[260315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:23 compute-2 sudo[260315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:23 compute-2 sudo[260315]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:23 compute-2 podman[260314]: 2025-11-29 08:01:23.729416461 +0000 UTC m=+0.054137605 container died aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:01:23 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811-userdata-shm.mount: Deactivated successfully.
Nov 29 08:01:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-8c735a4e4a94b88c16f0af34ca5e48760fd220f189e9ebc8437c004c215db3d2-merged.mount: Deactivated successfully.
Nov 29 08:01:23 compute-2 podman[260314]: 2025-11-29 08:01:23.778368572 +0000 UTC m=+0.103089686 container cleanup aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:01:23 compute-2 systemd[1]: libpod-conmon-aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811.scope: Deactivated successfully.
Nov 29 08:01:23 compute-2 podman[260370]: 2025-11-29 08:01:23.851659763 +0000 UTC m=+0.047645031 container remove aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.865 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[53a4fe05-557d-4995-aef4-6c01a1f07f78]: (4, ('Sat Nov 29 08:01:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811)\naa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811\nSat Nov 29 08:01:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (aa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811)\naa7f9becee2437070c6da4a3f09c00a82dfbc634d10d27261b3fe1d8327fd811\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.867 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c232b160-f770-4b6b-9d9a-2c2fc8e5ad90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.868 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 08:01:23 compute-2 nova_compute[232428]: 2025-11-29 08:01:23.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.892 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aef91031-0f32-4321-8f80-28fc5ad302e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.911 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[efa0f15e-616b-4f9f-b87e-cc423c1323b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.912 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6444b67a-2644-4240-8a1e-24eb82936860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.930 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5fcc7a-6138-4e86-a131-0d7f88fd1e32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620222, 'reachable_time': 15601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260386, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:23 compute-2 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.933 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:01:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:23.934 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[351e7f2b-c568-47dc-8b17-48202aeabaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.089 232432 INFO nova.virt.libvirt.driver [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Deleting instance files /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a_del
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.090 232432 INFO nova.virt.libvirt.driver [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Deletion of /var/lib/nova/instances/5f638465-c65b-4824-bedc-60f4b695402a_del complete
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.144 232432 INFO nova.compute.manager [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.145 232432 DEBUG oslo.service.loopingcall [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.146 232432 DEBUG nova.compute.manager [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.147 232432 DEBUG nova.network.neutron [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:01:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:24.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.898 232432 DEBUG nova.network.neutron [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.928 232432 INFO nova.compute.manager [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Took 0.78 seconds to deallocate network for instance.
Nov 29 08:01:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.972 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:24 compute-2 nova_compute[232428]: 2025-11-29 08:01:24.972 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.051 232432 DEBUG oslo_concurrency.processutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.505 232432 DEBUG nova.compute.manager [req-0875bfd0-eed0-4201-b791-55d9cae47859 req-c6687511-8cc8-4173-8372-308fd946a2f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-deleted-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3857171006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.562 232432 DEBUG oslo_concurrency.processutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.571 232432 DEBUG nova.compute.provider_tree [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.585 232432 DEBUG nova.scheduler.client.report [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.603 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.646 232432 INFO nova.scheduler.client.report [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance 5f638465-c65b-4824-bedc-60f4b695402a
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.744 232432 DEBUG oslo_concurrency.lockutils [None req-2cc18706-4396-4aa0-a015-328a010781c4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.884 232432 DEBUG nova.compute.manager [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.885 232432 DEBUG oslo_concurrency.lockutils [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5f638465-c65b-4824-bedc-60f4b695402a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.886 232432 DEBUG oslo_concurrency.lockutils [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.886 232432 DEBUG oslo_concurrency.lockutils [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5f638465-c65b-4824-bedc-60f4b695402a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.886 232432 DEBUG nova.compute.manager [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] No waiting events found dispatching network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:25 compute-2 nova_compute[232428]: 2025-11-29 08:01:25.887 232432 WARNING nova.compute.manager [req-5d5b3904-efbe-4f1e-aec1-acb2134352ca req-179cb93a-50f4-42bd-bdbf-5cfb6ee6d378 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Received unexpected event network-vif-plugged-9b28bb10-34a8-47b1-823d-fd7d39b929ac for instance with vm_state deleted and task_state None.
Nov 29 08:01:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.283631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286283736, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1545, "num_deletes": 511, "total_data_size": 2683491, "memory_usage": 2736136, "flush_reason": "Manual Compaction"}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286300935, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1183926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34562, "largest_seqno": 36102, "table_properties": {"data_size": 1178372, "index_size": 2309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 1164511, "raw_average_value_size": 1370, "num_data_blocks": 101, "num_entries": 850, "num_filter_entries": 850, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403191, "oldest_key_time": 1764403191, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 17449 microseconds, and 10291 cpu microseconds.
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.301077) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1183926 bytes OK
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.301129) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.303408) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.303434) EVENT_LOG_v1 {"time_micros": 1764403286303426, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.303456) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2675237, prev total WAL file size 2675237, number of live WAL files 2.
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.304900) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323631' seq:0, type:0; will stop at (end)
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1156KB)], [63(11MB)]
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286305030, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12887667, "oldest_snapshot_seqno": -1}
Nov 29 08:01:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:26.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6413 keys, 9453775 bytes, temperature: kUnknown
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286415127, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9453775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9411447, "index_size": 25194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 165951, "raw_average_key_size": 25, "raw_value_size": 9296677, "raw_average_value_size": 1449, "num_data_blocks": 1003, "num_entries": 6413, "num_filter_entries": 6413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.415745) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9453775 bytes
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.417616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.7 rd, 85.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(18.9) write-amplify(8.0) OK, records in: 7416, records dropped: 1003 output_compression: NoCompression
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.417652) EVENT_LOG_v1 {"time_micros": 1764403286417636, "job": 38, "event": "compaction_finished", "compaction_time_micros": 110404, "compaction_time_cpu_micros": 49040, "output_level": 6, "num_output_files": 1, "total_output_size": 9453775, "num_input_records": 7416, "num_output_records": 6413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286418261, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286423292, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.304767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.423422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.423431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.423434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.423437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:01:26.423440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:27 compute-2 ceph-mon[77138]: pgmap v1802: 305 pgs: 305 active+clean; 384 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.6 MiB/s wr, 283 op/s
Nov 29 08:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3857171006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:28 compute-2 ceph-mon[77138]: pgmap v1803: 305 pgs: 305 active+clean; 310 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 280 op/s
Nov 29 08:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3977386029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1018320197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1018320197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:28.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:28 compute-2 nova_compute[232428]: 2025-11-29 08:01:28.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:29 compute-2 ceph-mon[77138]: pgmap v1804: 305 pgs: 305 active+clean; 310 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 280 op/s
Nov 29 08:01:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:29.971 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:29 compute-2 nova_compute[232428]: 2025-11-29 08:01:29.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:29.972 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:01:30 compute-2 nova_compute[232428]: 2025-11-29 08:01:30.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2711949263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:30.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 08:01:30 compute-2 podman[260412]: 2025-11-29 08:01:30.701777256 +0000 UTC m=+0.088267201 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:01:31 compute-2 nova_compute[232428]: 2025-11-29 08:01:31.225 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:01:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 6989 writes, 36K keys, 6989 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 6989 writes, 6989 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1673 writes, 8568 keys, 1673 commit groups, 1.0 writes per commit group, ingest: 16.49 MB, 0.03 MB/s
                                           Interval WAL: 1673 writes, 1673 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     75.6      0.59              0.18        19    0.031       0      0       0.0       0.0
                                             L6      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.6    113.3     92.8      1.72              0.58        18    0.096     98K    10K       0.0       0.0
                                            Sum      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6     84.5     88.4      2.31              0.76        37    0.062     98K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2    105.9    104.1      0.55              0.20        10    0.055     33K   3553       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    113.3     92.8      1.72              0.58        18    0.096     98K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     88.5      0.50              0.18        18    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.043, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.19 GB read, 0.07 MB/s read, 2.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 22.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000167 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1213,21.70 MB,7.13948%) FilterBlock(37,284.73 KB,0.0914674%) IndexBlock(37,493.89 KB,0.158656%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:01:32 compute-2 ceph-mon[77138]: pgmap v1805: 305 pgs: 305 active+clean; 252 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 258 op/s
Nov 29 08:01:32 compute-2 ceph-mon[77138]: osdmap e245: 3 total, 3 up, 3 in
Nov 29 08:01:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:32.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:33 compute-2 nova_compute[232428]: 2025-11-29 08:01:33.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:33 compute-2 ceph-mon[77138]: pgmap v1807: 305 pgs: 305 active+clean; 242 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 2.5 MiB/s wr, 179 op/s
Nov 29 08:01:33 compute-2 nova_compute[232428]: 2025-11-29 08:01:33.676 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:34 compute-2 nova_compute[232428]: 2025-11-29 08:01:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:34 compute-2 nova_compute[232428]: 2025-11-29 08:01:34.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:01:34 compute-2 nova_compute[232428]: 2025-11-29 08:01:34.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:01:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1470695734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:35 compute-2 nova_compute[232428]: 2025-11-29 08:01:35.019 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:35 compute-2 podman[260436]: 2025-11-29 08:01:35.694305665 +0000 UTC m=+0.083533093 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:01:36 compute-2 ceph-mon[77138]: pgmap v1808: 305 pgs: 305 active+clean; 223 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.2 MiB/s wr, 145 op/s
Nov 29 08:01:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/621022494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:36.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:36.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:36 compute-2 nova_compute[232428]: 2025-11-29 08:01:36.419 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:36 compute-2 nova_compute[232428]: 2025-11-29 08:01:36.419 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:36 compute-2 nova_compute[232428]: 2025-11-29 08:01:36.420 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:01:36 compute-2 nova_compute[232428]: 2025-11-29 08:01:36.420 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:37.974 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:38 compute-2 ceph-mon[77138]: pgmap v1809: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 29 08:01:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:38.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:38.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:38 compute-2 nova_compute[232428]: 2025-11-29 08:01:38.602 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403283.6006835, 5f638465-c65b-4824-bedc-60f4b695402a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:38 compute-2 nova_compute[232428]: 2025-11-29 08:01:38.603 232432 INFO nova.compute.manager [-] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] VM Stopped (Lifecycle Event)
Nov 29 08:01:38 compute-2 nova_compute[232428]: 2025-11-29 08:01:38.679 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/23151161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:39 compute-2 ceph-mon[77138]: pgmap v1810: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 29 08:01:39 compute-2 nova_compute[232428]: 2025-11-29 08:01:39.429 232432 DEBUG nova.compute.manager [None req-5d28ef3b-8295-4779-b543-3374c7125d1b - - - - - -] [instance: 5f638465-c65b-4824-bedc-60f4b695402a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:40 compute-2 nova_compute[232428]: 2025-11-29 08:01:40.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:40.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:40 compute-2 nova_compute[232428]: 2025-11-29 08:01:40.485 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updating instance_info_cache with network_info: [{"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1829546118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:42.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:43 compute-2 ceph-mon[77138]: pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Nov 29 08:01:43 compute-2 ceph-mon[77138]: pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 50 KiB/s wr, 36 op/s
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.681 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:43 compute-2 podman[260461]: 2025-11-29 08:01:43.709962245 +0000 UTC m=+0.117200366 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:01:43 compute-2 sudo[260488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:43 compute-2 sudo[260488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:43 compute-2 sudo[260488]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.862 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-08c645cb-08cb-4b4d-b0e8-37da8adc8c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.862 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.863 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.863 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.863 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.864 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.902 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.902 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.903 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.903 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.903 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:43 compute-2 sudo[260513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:43 compute-2 sudo[260513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:43 compute-2 sudo[260513]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.945 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.945 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.946 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.946 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.946 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.948 232432 INFO nova.compute.manager [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Terminating instance
Nov 29 08:01:43 compute-2 nova_compute[232428]: 2025-11-29 08:01:43.949 232432 DEBUG nova.compute.manager [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:01:44 compute-2 kernel: tap778f8d73-c0 (unregistering): left promiscuous mode
Nov 29 08:01:44 compute-2 NetworkManager[48993]: <info>  [1764403304.0173] device (tap778f8d73-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 ovn_controller[134375]: 2025-11-29T08:01:44Z|00218|binding|INFO|Releasing lport 778f8d73-c000-4576-9ffa-a7945446e0ac from this chassis (sb_readonly=0)
Nov 29 08:01:44 compute-2 ovn_controller[134375]: 2025-11-29T08:01:44Z|00219|binding|INFO|Setting lport 778f8d73-c000-4576-9ffa-a7945446e0ac down in Southbound
Nov 29 08:01:44 compute-2 ovn_controller[134375]: 2025-11-29T08:01:44Z|00220|binding|INFO|Removing iface tap778f8d73-c0 ovn-installed in OVS
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.029 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.035 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:2f:8a 10.100.0.3'], port_security=['fa:16:3e:a8:2f:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08c645cb-08cb-4b4d-b0e8-37da8adc8c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e18b65-6dc6-41e6-a291-e35356bef842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9cfb4837af3c440e93179ccec8e1811d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37099615-e931-4792-9383-c29f945c5d7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eabb894-c6ca-4097-b275-9b0ea9bf27d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=778f8d73-c000-4576-9ffa-a7945446e0ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.037 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 778f8d73-c000-4576-9ffa-a7945446e0ac in datapath 40e18b65-6dc6-41e6-a291-e35356bef842 unbound from our chassis
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.039 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40e18b65-6dc6-41e6-a291-e35356bef842, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.040 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c596b5e4-c509-4741-ab2f-ba5afb6cac16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.041 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842 namespace which is not needed anymore
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.066 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 29 08:01:44 compute-2 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003f.scope: Consumed 18.128s CPU time.
Nov 29 08:01:44 compute-2 systemd-machined[194747]: Machine qemu-28-instance-0000003f terminated.
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.201 232432 INFO nova.virt.libvirt.driver [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Instance destroyed successfully.
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.202 232432 DEBUG nova.objects.instance [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lazy-loading 'resources' on Instance uuid 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:44 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [NOTICE]   (260171) : haproxy version is 2.8.14-c23fe91
Nov 29 08:01:44 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [NOTICE]   (260171) : path to executable is /usr/sbin/haproxy
Nov 29 08:01:44 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [WARNING]  (260171) : Exiting Master process...
Nov 29 08:01:44 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [ALERT]    (260171) : Current worker (260173) exited with code 143 (Terminated)
Nov 29 08:01:44 compute-2 neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842[260167]: [WARNING]  (260171) : All workers exited. Exiting... (0)
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.230 232432 DEBUG nova.virt.libvirt.vif [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=63,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERGBfgA9qTqpGYP1UZ461aaKd4Io9zBf7WJ7nmmTBlIyLugi+Y1cFtKGTdfwdX8j1bvOMvNJuL6NYhsRx6mPAL1WoWkednVReV7dvjxu/jg7IvYoHFnzQZoIlNmmLG/1w==',key_name='tempest-keypair-939772258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9cfb4837af3c440e93179ccec8e1811d',ramdisk_id='',reservation_id='r-e000piny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-434830629',owner_user_name='tempest-ServersTestFqdnHostnames-434830629-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fccef0dffe5046debab8211997669052',uuid=08c645cb-08cb-4b4d-b0e8-37da8adc8c02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.230 232432 DEBUG nova.network.os_vif_util [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converting VIF {"id": "778f8d73-c000-4576-9ffa-a7945446e0ac", "address": "fa:16:3e:a8:2f:8a", "network": {"id": "40e18b65-6dc6-41e6-a291-e35356bef842", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-709340867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9cfb4837af3c440e93179ccec8e1811d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap778f8d73-c0", "ovs_interfaceid": "778f8d73-c000-4576-9ffa-a7945446e0ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:44 compute-2 systemd[1]: libpod-c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d.scope: Deactivated successfully.
Nov 29 08:01:44 compute-2 conmon[260167]: conmon c419fe783d593dabbf44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d.scope/container/memory.events
Nov 29 08:01:44 compute-2 podman[260584]: 2025-11-29 08:01:44.236534784 +0000 UTC m=+0.056396815 container died c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.234 232432 DEBUG nova.network.os_vif_util [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.235 232432 DEBUG os_vif [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.237 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.238 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap778f8d73-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.246 232432 INFO os_vif [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:2f:8a,bridge_name='br-int',has_traffic_filtering=True,id=778f8d73-c000-4576-9ffa-a7945446e0ac,network=Network(40e18b65-6dc6-41e6-a291-e35356bef842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap778f8d73-c0')
Nov 29 08:01:44 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:01:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-bf73530930587f719ffe34df07491dea477a81d78ccc8da310d8923c81082175-merged.mount: Deactivated successfully.
Nov 29 08:01:44 compute-2 podman[260584]: 2025-11-29 08:01:44.298711438 +0000 UTC m=+0.118573499 container cleanup c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:01:44 compute-2 systemd[1]: libpod-conmon-c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d.scope: Deactivated successfully.
Nov 29 08:01:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:44.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:44 compute-2 podman[260640]: 2025-11-29 08:01:44.373462836 +0000 UTC m=+0.049922082 container remove c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.381 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[408afa20-9f00-469b-94a4-bc3b4b36ff17]: (4, ('Sat Nov 29 08:01:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842 (c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d)\nc419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d\nSat Nov 29 08:01:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842 (c419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d)\nc419fe783d593dabbf44c1da322a0f142eaf2bab47b3074225c61d6771e0067d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[241b98e9-fe61-4db2-8869-b92f92fe3287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.385 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e18b65-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:44 compute-2 kernel: tap40e18b65-60: left promiscuous mode
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.388 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.406 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.408 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4e3b10-6575-4506-9943-15750488fa4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3327426241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.429 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4303a746-b0b6-4f84-9d7e-031ac8657fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.431 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a65d13cd-8557-44e1-98e9-0446a75a781b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.447 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ea96a6-8d0d-4ee6-a5e7-7ffd01e0babd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622912, 'reachable_time': 36837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260659, 'error': None, 'target': 'ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.449 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:44 compute-2 systemd[1]: run-netns-ovnmeta\x2d40e18b65\x2d6dc6\x2d41e6\x2da291\x2de35356bef842.mount: Deactivated successfully.
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.450 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40e18b65-6dc6-41e6-a291-e35356bef842 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:01:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:01:44.450 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[0921a795-e833-4e17-954d-802c5d0f243e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.544 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.544 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.575 232432 DEBUG nova.compute.manager [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-unplugged-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.576 232432 DEBUG oslo_concurrency.lockutils [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.576 232432 DEBUG oslo_concurrency.lockutils [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.576 232432 DEBUG oslo_concurrency.lockutils [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.576 232432 DEBUG nova.compute.manager [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] No waiting events found dispatching network-vif-unplugged-778f8d73-c000-4576-9ffa-a7945446e0ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.577 232432 DEBUG nova.compute.manager [req-ffb73522-2e95-446e-918a-90e1a5f9fe34 req-cde08cf1-abf0-42d1-aa21-d6520b18070b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-unplugged-778f8d73-c000-4576-9ffa-a7945446e0ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.698 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.701 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4605MB free_disk=20.921955108642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.701 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:44 compute-2 nova_compute[232428]: 2025-11-29 08:01:44.702 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.076 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.093 232432 INFO nova.virt.libvirt.driver [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Deleting instance files /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02_del
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.094 232432 INFO nova.virt.libvirt.driver [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Deletion of /var/lib/nova/instances/08c645cb-08cb-4b4d-b0e8-37da8adc8c02_del complete
Nov 29 08:01:45 compute-2 ceph-mon[77138]: pgmap v1813: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 46 KiB/s wr, 35 op/s
Nov 29 08:01:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2947612651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3327426241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.243 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.243 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.243 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.286 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.720 232432 INFO nova.compute.manager [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Took 1.77 seconds to destroy the instance on the hypervisor.
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.720 232432 DEBUG oslo.service.loopingcall [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.721 232432 DEBUG nova.compute.manager [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.721 232432 DEBUG nova.network.neutron [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:01:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2851012820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.760 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:45 compute-2 nova_compute[232428]: 2025-11-29 08:01:45.766 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.086 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.135 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.135 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.136 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.136 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.204 232432 DEBUG nova.compute.manager [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.205 232432 DEBUG oslo_concurrency.lockutils [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.205 232432 DEBUG oslo_concurrency.lockutils [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.206 232432 DEBUG oslo_concurrency.lockutils [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.206 232432 DEBUG nova.compute.manager [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] No waiting events found dispatching network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.207 232432 WARNING nova.compute.manager [req-83b53586-79f6-451c-bdc2-244d24c6fe6f req-0f85fe31-a7aa-4e93-95c2-7d204929ab09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received unexpected event network-vif-plugged-778f8d73-c000-4576-9ffa-a7945446e0ac for instance with vm_state active and task_state deleting.
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.226 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.250 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:01:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/808961810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2851012820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.435 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:47 compute-2 nova_compute[232428]: 2025-11-29 08:01:47.659 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:49 compute-2 ceph-mon[77138]: pgmap v1814: 305 pgs: 305 active+clean; 106 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 15 KiB/s wr, 51 op/s
Nov 29 08:01:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1890790093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:49 compute-2 nova_compute[232428]: 2025-11-29 08:01:49.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:50 compute-2 nova_compute[232428]: 2025-11-29 08:01:50.078 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:50 compute-2 ceph-mon[77138]: pgmap v1815: 305 pgs: 305 active+clean; 106 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 14 KiB/s wr, 35 op/s
Nov 29 08:01:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:50.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:51 compute-2 ceph-mon[77138]: pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 15 KiB/s wr, 48 op/s
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.157 232432 DEBUG nova.network.neutron [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.201 232432 INFO nova.compute.manager [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Took 6.48 seconds to deallocate network for instance.
Nov 29 08:01:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:52.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.257 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.258 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.311 232432 DEBUG nova.compute.manager [req-6ef29a13-933c-4421-83e0-54c2b26e36bd req-a4b9d232-6d13-4820-b361-ae83b7616217 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Received event network-vif-deleted-778f8d73-c000-4576-9ffa-a7945446e0ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.318 232432 DEBUG oslo_concurrency.processutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:52.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/615541722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.795 232432 DEBUG oslo_concurrency.processutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.808 232432 DEBUG nova.compute.provider_tree [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.841 232432 DEBUG nova.scheduler.client.report [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.862 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:52 compute-2 nova_compute[232428]: 2025-11-29 08:01:52.922 232432 INFO nova.scheduler.client.report [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Deleted allocations for instance 08c645cb-08cb-4b4d-b0e8-37da8adc8c02
Nov 29 08:01:53 compute-2 nova_compute[232428]: 2025-11-29 08:01:53.053 232432 DEBUG oslo_concurrency.lockutils [None req-8c986b21-8931-4cd3-9280-61ebf321ce98 fccef0dffe5046debab8211997669052 9cfb4837af3c440e93179ccec8e1811d - - default default] Lock "08c645cb-08cb-4b4d-b0e8-37da8adc8c02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:01:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:01:54 compute-2 nova_compute[232428]: 2025-11-29 08:01:54.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:54.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 08:01:54 compute-2 ceph-mon[77138]: pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Nov 29 08:01:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/615541722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:55 compute-2 nova_compute[232428]: 2025-11-29 08:01:55.129 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:55 compute-2 ceph-mon[77138]: pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 103 op/s
Nov 29 08:01:55 compute-2 ceph-mon[77138]: osdmap e246: 3 total, 3 up, 3 in
Nov 29 08:01:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 08:01:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:01:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:01:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 08:01:56 compute-2 ceph-mon[77138]: osdmap e247: 3 total, 3 up, 3 in
Nov 29 08:01:57 compute-2 ceph-mon[77138]: pgmap v1821: 305 pgs: 305 active+clean; 118 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.0 MiB/s wr, 137 op/s
Nov 29 08:01:57 compute-2 ceph-mon[77138]: osdmap e248: 3 total, 3 up, 3 in
Nov 29 08:01:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 08:01:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:58.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:01:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:01:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:58.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:01:58 compute-2 ceph-mon[77138]: osdmap e249: 3 total, 3 up, 3 in
Nov 29 08:01:59 compute-2 nova_compute[232428]: 2025-11-29 08:01:59.195 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403304.1942582, 08c645cb-08cb-4b4d-b0e8-37da8adc8c02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:59 compute-2 nova_compute[232428]: 2025-11-29 08:01:59.196 232432 INFO nova.compute.manager [-] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] VM Stopped (Lifecycle Event)
Nov 29 08:01:59 compute-2 nova_compute[232428]: 2025-11-29 08:01:59.234 232432 DEBUG nova.compute.manager [None req-2cbe1927-c3a7-4385-b636-3b521d7d99b3 - - - - - -] [instance: 08c645cb-08cb-4b4d-b0e8-37da8adc8c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:59 compute-2 nova_compute[232428]: 2025-11-29 08:01:59.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:00 compute-2 nova_compute[232428]: 2025-11-29 08:02:00.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:00 compute-2 ceph-mon[77138]: pgmap v1824: 305 pgs: 305 active+clean; 118 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 70 op/s
Nov 29 08:02:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:00.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:01 compute-2 ceph-mon[77138]: pgmap v1825: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 137 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 117 op/s
Nov 29 08:02:01 compute-2 podman[260717]: 2025-11-29 08:02:01.689754141 +0000 UTC m=+0.082507212 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:02:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:02 compute-2 sudo[260736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:02 compute-2 sudo[260736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:02 compute-2 sudo[260736]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:02 compute-2 sudo[260761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:02 compute-2 sudo[260761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:02 compute-2 sudo[260761]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:02 compute-2 sudo[260786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:02 compute-2 sudo[260786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:02 compute-2 sudo[260786]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:02 compute-2 sudo[260811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:02:02 compute-2 sudo[260811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:03 compute-2 sudo[260811]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:03.307 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:03.309 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:03.309 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:03 compute-2 ceph-mon[77138]: pgmap v1826: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 128 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 5.0 MiB/s wr, 223 op/s
Nov 29 08:02:04 compute-2 sudo[260867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:04 compute-2 sudo[260867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:04 compute-2 sudo[260867]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:04 compute-2 sudo[260892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:04 compute-2 sudo[260892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:04 compute-2 sudo[260892]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:04.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:04 compute-2 nova_compute[232428]: 2025-11-29 08:02:04.246 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3595788834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:05 compute-2 nova_compute[232428]: 2025-11-29 08:02:05.135 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 08:02:05 compute-2 ceph-mon[77138]: pgmap v1827: 305 pgs: 305 active+clean; 121 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 3.8 MiB/s wr, 182 op/s
Nov 29 08:02:05 compute-2 ceph-mon[77138]: osdmap e250: 3 total, 3 up, 3 in
Nov 29 08:02:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:06.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:06 compute-2 podman[260919]: 2025-11-29 08:02:06.666419684 +0000 UTC m=+0.071718553 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 08:02:06 compute-2 ceph-mon[77138]: pgmap v1829: 305 pgs: 305 active+clean; 87 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 5.7 MiB/s wr, 230 op/s
Nov 29 08:02:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2464851194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:09 compute-2 nova_compute[232428]: 2025-11-29 08:02:09.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:10 compute-2 nova_compute[232428]: 2025-11-29 08:02:10.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-2 ceph-mon[77138]: pgmap v1830: 305 pgs: 305 active+clean; 87 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 519 KiB/s rd, 4.6 MiB/s wr, 186 op/s
Nov 29 08:02:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:10.765 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:10.765 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:02:10 compute-2 nova_compute[232428]: 2025-11-29 08:02:10.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:11 compute-2 ceph-mon[77138]: pgmap v1831: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 454 KiB/s rd, 4.5 MiB/s wr, 172 op/s
Nov 29 08:02:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2062387284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/525321337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:11.767 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:12.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:12.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:13 compute-2 sudo[260944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:13 compute-2 sudo[260944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:13 compute-2 sudo[260944]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:13 compute-2 sudo[260969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:02:13 compute-2 sudo[260969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:13 compute-2 sudo[260969]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:13 compute-2 ceph-mon[77138]: pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 29 08:02:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:02:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:02:14 compute-2 nova_compute[232428]: 2025-11-29 08:02:14.250 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:14.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:14 compute-2 podman[260994]: 2025-11-29 08:02:14.738609551 +0000 UTC m=+0.129833002 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:02:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:15 compute-2 nova_compute[232428]: 2025-11-29 08:02:15.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:15 compute-2 ceph-mon[77138]: pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 08:02:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:16.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:17 compute-2 ceph-mon[77138]: pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 599 KiB/s wr, 80 op/s
Nov 29 08:02:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:18.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:18.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:02:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 112K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 28K writes, 9618 syncs, 2.92 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 45.70 MB, 0.08 MB/s
                                           Interval WAL: 10K writes, 4211 syncs, 2.57 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:02:19 compute-2 nova_compute[232428]: 2025-11-29 08:02:19.251 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:19 compute-2 ceph-mon[77138]: pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 507 KiB/s wr, 67 op/s
Nov 29 08:02:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:20 compute-2 nova_compute[232428]: 2025-11-29 08:02:20.141 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:20.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:21 compute-2 ceph-mon[77138]: pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 507 KiB/s wr, 89 op/s
Nov 29 08:02:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:22.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:22.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:23 compute-2 ceph-mon[77138]: pgmap v1837: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 08:02:24 compute-2 sudo[261026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:24 compute-2 sudo[261026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:24 compute-2 sudo[261026]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.253 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:24.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.272 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.273 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:24 compute-2 sudo[261051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:24 compute-2 sudo[261051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:24 compute-2 sudo[261051]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.364 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:02:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/49200391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.468 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.469 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.483 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.484 232432 INFO nova.compute.claims [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:02:24 compute-2 nova_compute[232428]: 2025-11-29 08:02:24.633 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/383068857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.092 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.102 232432 DEBUG nova.compute.provider_tree [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.122 232432 DEBUG nova.scheduler.client.report [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.150 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.151 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.216 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.216 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.241 232432 INFO nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.272 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:02:25 compute-2 ceph-mon[77138]: pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 08:02:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/383068857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.726 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.728 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.730 232432 INFO nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Creating image(s)
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.760 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.793 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.825 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.829 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.916 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.918 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.918 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.919 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.947 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:25 compute-2 nova_compute[232428]: 2025-11-29 08:02:25.950 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.270 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.344 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:02:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:26.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.466 232432 DEBUG nova.objects.instance [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid 1272be7f-6db1-4b9b-a022-a3a23b9a2faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.489 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.490 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Ensure instance console log exists: /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.490 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.491 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.491 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:26 compute-2 nova_compute[232428]: 2025-11-29 08:02:26.834 232432 DEBUG nova.policy [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:02:27 compute-2 ceph-mon[77138]: pgmap v1839: 305 pgs: 305 active+clean; 112 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Nov 29 08:02:27 compute-2 nova_compute[232428]: 2025-11-29 08:02:27.855 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Successfully created port: 6b9d239d-45f3-49ea-aae9-3c00c9532305 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:02:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:28.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3103524126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3103524126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:28.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:29 compute-2 ceph-mon[77138]: pgmap v1840: 305 pgs: 305 active+clean; 112 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 584 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.459 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Successfully updated port: 6b9d239d-45f3-49ea-aae9-3c00c9532305 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.487 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.487 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.488 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.840 232432 DEBUG nova.compute.manager [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-changed-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.841 232432 DEBUG nova.compute.manager [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Refreshing instance network info cache due to event network-changed-6b9d239d-45f3-49ea-aae9-3c00c9532305. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:29 compute-2 nova_compute[232428]: 2025-11-29 08:02:29.841 232432 DEBUG oslo_concurrency.lockutils [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:30 compute-2 nova_compute[232428]: 2025-11-29 08:02:30.188 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:30 compute-2 nova_compute[232428]: 2025-11-29 08:02:30.216 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:30.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:30.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:30 compute-2 nova_compute[232428]: 2025-11-29 08:02:30.494 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:02:31 compute-2 ceph-mon[77138]: pgmap v1841: 305 pgs: 305 active+clean; 153 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 74 op/s
Nov 29 08:02:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:32.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1365884465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:32 compute-2 podman[261268]: 2025-11-29 08:02:32.645585194 +0000 UTC m=+0.055051962 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.021 232432 DEBUG nova.network.neutron [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Updating instance_info_cache with network_info: [{"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.058 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.058 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Instance network_info: |[{"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.059 232432 DEBUG oslo_concurrency.lockutils [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.059 232432 DEBUG nova.network.neutron [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Refreshing network info cache for port 6b9d239d-45f3-49ea-aae9-3c00c9532305 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.061 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Start _get_guest_xml network_info=[{"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.066 232432 WARNING nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.072 232432 DEBUG nova.virt.libvirt.host [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.073 232432 DEBUG nova.virt.libvirt.host [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.076 232432 DEBUG nova.virt.libvirt.host [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.076 232432 DEBUG nova.virt.libvirt.host [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.077 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.078 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.078 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.078 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.079 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.079 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.079 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.080 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.080 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.080 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.080 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.080 232432 DEBUG nova.virt.hardware [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.083 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1956048378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.519 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.554 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.559 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:33 compute-2 ceph-mon[77138]: pgmap v1842: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 125 op/s
Nov 29 08:02:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/39922280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1956048378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1696174425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:33 compute-2 nova_compute[232428]: 2025-11-29 08:02:33.998 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.000 232432 DEBUG nova.virt.libvirt.vif [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1210835996',display_name='tempest-DeleteServersTestJSON-server-1210835996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1210835996',id=68,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-5tjkqh61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69
711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:25Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=1272be7f-6db1-4b9b-a022-a3a23b9a2faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.000 232432 DEBUG nova.network.os_vif_util [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.001 232432 DEBUG nova.network.os_vif_util [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.003 232432 DEBUG nova.objects.instance [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1272be7f-6db1-4b9b-a022-a3a23b9a2faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.037 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <uuid>1272be7f-6db1-4b9b-a022-a3a23b9a2faf</uuid>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <name>instance-00000044</name>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:name>tempest-DeleteServersTestJSON-server-1210835996</nova:name>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:02:33</nova:creationTime>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <nova:port uuid="6b9d239d-45f3-49ea-aae9-3c00c9532305">
Nov 29 08:02:34 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <system>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="serial">1272be7f-6db1-4b9b-a022-a3a23b9a2faf</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="uuid">1272be7f-6db1-4b9b-a022-a3a23b9a2faf</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </system>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <os>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </os>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <features>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </features>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk">
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config">
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:02:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:7f:9d:73"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <target dev="tap6b9d239d-45"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/console.log" append="off"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <video>
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </video>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:02:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:02:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:02:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:02:34 compute-2 nova_compute[232428]: </domain>
Nov 29 08:02:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.039 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Preparing to wait for external event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.039 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.039 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.039 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.040 232432 DEBUG nova.virt.libvirt.vif [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1210835996',display_name='tempest-DeleteServersTestJSON-server-1210835996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1210835996',id=68,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-5tjkqh61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:25Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=1272be7f-6db1-4b9b-a022-a3a23b9a2faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.041 232432 DEBUG nova.network.os_vif_util [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.041 232432 DEBUG nova.network.os_vif_util [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.042 232432 DEBUG os_vif [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.043 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.044 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.048 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b9d239d-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.049 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b9d239d-45, col_values=(('external_ids', {'iface-id': '6b9d239d-45f3-49ea-aae9-3c00c9532305', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:9d:73', 'vm-uuid': '1272be7f-6db1-4b9b-a022-a3a23b9a2faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:34 compute-2 NetworkManager[48993]: <info>  [1764403354.0514] manager: (tap6b9d239d-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.059 232432 INFO os_vif [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45')
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.119 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.119 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.119 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:7f:9d:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.120 232432 INFO nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Using config drive
Nov 29 08:02:34 compute-2 nova_compute[232428]: 2025-11-29 08:02:34.141 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:34.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:34.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1696174425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.069 232432 INFO nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Creating config drive at /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.075 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkej_yv29 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.190 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.218 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkej_yv29" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.249 232432 DEBUG nova.storage.rbd_utils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.254 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.299 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.299 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.566 232432 DEBUG nova.network.neutron [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Updated VIF entry in instance network info cache for port 6b9d239d-45f3-49ea-aae9-3c00c9532305. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.567 232432 DEBUG nova.network.neutron [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Updating instance_info_cache with network_info: [{"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:35 compute-2 nova_compute[232428]: 2025-11-29 08:02:35.588 232432 DEBUG oslo_concurrency.lockutils [req-28fbdaba-2d95-4835-997d-a1e6aaecd73a req-9344f7e4-f696-4beb-bddf-d853c3f1aa8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1272be7f-6db1-4b9b-a022-a3a23b9a2faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:36 compute-2 ceph-mon[77138]: pgmap v1843: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 126 op/s
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.179 232432 DEBUG oslo_concurrency.processutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config 1272be7f-6db1-4b9b-a022-a3a23b9a2faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.179 232432 INFO nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Deleting local config drive /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf/disk.config because it was imported into RBD.
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:36 compute-2 kernel: tap6b9d239d-45: entered promiscuous mode
Nov 29 08:02:36 compute-2 ovn_controller[134375]: 2025-11-29T08:02:36Z|00221|binding|INFO|Claiming lport 6b9d239d-45f3-49ea-aae9-3c00c9532305 for this chassis.
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.2449] manager: (tap6b9d239d-45): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 08:02:36 compute-2 ovn_controller[134375]: 2025-11-29T08:02:36Z|00222|binding|INFO|6b9d239d-45f3-49ea-aae9-3c00c9532305: Claiming fa:16:3e:7f:9d:73 10.100.0.6
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.252 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.256 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9d:73 10.100.0.6'], port_security=['fa:16:3e:7f:9d:73 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1272be7f-6db1-4b9b-a022-a3a23b9a2faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6b9d239d-45f3-49ea-aae9-3c00c9532305) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.258 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6b9d239d-45f3-49ea-aae9-3c00c9532305 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.259 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:02:36 compute-2 systemd-udevd[261424]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.275 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d2c995-5f62-4982-9e6e-90735bf6e10b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.276 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:02:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:36.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.278 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.278 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[54ba23d0-19f2-438a-9fd1-209f2809c58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.280 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe217da-a0df-4534-86d9-92dd069905f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 systemd-machined[194747]: New machine qemu-29-instance-00000044.
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.2934] device (tap6b9d239d-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.292 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3396ca-07c1-4c31-a700-b2600dc6b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.2945] device (tap6b9d239d-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:02:36 compute-2 systemd[1]: Started Virtual Machine qemu-29-instance-00000044.
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.316 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0380e5e5-d9d4-4266-904b-855bdfbe4d95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_controller[134375]: 2025-11-29T08:02:36Z|00223|binding|INFO|Setting lport 6b9d239d-45f3-49ea-aae9-3c00c9532305 ovn-installed in OVS
Nov 29 08:02:36 compute-2 ovn_controller[134375]: 2025-11-29T08:02:36Z|00224|binding|INFO|Setting lport 6b9d239d-45f3-49ea-aae9-3c00c9532305 up in Southbound
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.352 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[71e2642c-3257-43d4-9e9f-1d1f358611d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.3603] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.359 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4388db49-5fda-49ed-8734-555a01cb7e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.393 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[23835b84-8c3d-43a7-a4fb-1ffc0d7f8ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.396 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9e923043-5b5f-4049-9a3d-fe046b480bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.4207] device (tapd5e42602-d0): carrier: link connected
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.432 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[730a3ac0-9595-41ad-b071-e0d9293dda0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.449 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1359a8dc-b717-459d-b505-4df70fa004a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632268, 'reachable_time': 31632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261457, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:36.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bb59cf-cc52-4002-a18d-ceeaf95e7370]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632268, 'tstamp': 632268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261458, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.485 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[326ecce4-a239-422f-b10a-470fa92acf5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632268, 'reachable_time': 31632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261459, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.524 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[17f193ef-5135-4ace-a160-34e10e65ef8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.606 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2414a6d-40be-4738-a30b-f75071a01d98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.608 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.608 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.609 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 NetworkManager[48993]: <info>  [1764403356.6122] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 08:02:36 compute-2 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.614 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.616 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.618 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_controller[134375]: 2025-11-29T08:02:36Z|00225|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 08:02:36 compute-2 nova_compute[232428]: 2025-11-29 08:02:36.644 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.648 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.649 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d455df8d-4d05-4b9f-92f3-c5330d6c285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.650 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:02:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:36.651 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:02:37 compute-2 podman[261528]: 2025-11-29 08:02:37.035528249 +0000 UTC m=+0.057157658 container create dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.059 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403357.0589867, 1272be7f-6db1-4b9b-a022-a3a23b9a2faf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.060 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] VM Started (Lifecycle Event)
Nov 29 08:02:37 compute-2 systemd[1]: Started libpod-conmon-dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e.scope.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.082 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.089 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403357.0592506, 1272be7f-6db1-4b9b-a022-a3a23b9a2faf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.089 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] VM Paused (Lifecycle Event)
Nov 29 08:02:37 compute-2 podman[261528]: 2025-11-29 08:02:36.999811392 +0000 UTC m=+0.021440821 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.104 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.107 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:37 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:02:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb01933d10774c6f1c7f4a1c0ab0434a56caedb993372ca15ff615cd6178575/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:37 compute-2 podman[261528]: 2025-11-29 08:02:37.130605872 +0000 UTC m=+0.152235281 container init dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:02:37 compute-2 podman[261546]: 2025-11-29 08:02:37.130441077 +0000 UTC m=+0.065699905 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.133 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:37 compute-2 podman[261528]: 2025-11-29 08:02:37.138043045 +0000 UTC m=+0.159672454 container start dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:37 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [NOTICE]   (261572) : New worker (261575) forked
Nov 29 08:02:37 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [NOTICE]   (261572) : Loading success.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.190 232432 DEBUG nova.compute.manager [req-0c8bed12-ba8f-49c7-a407-05fb3ac7e570 req-bf67da82-d3f6-4859-999b-4015e0d038a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.191 232432 DEBUG oslo_concurrency.lockutils [req-0c8bed12-ba8f-49c7-a407-05fb3ac7e570 req-bf67da82-d3f6-4859-999b-4015e0d038a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.191 232432 DEBUG oslo_concurrency.lockutils [req-0c8bed12-ba8f-49c7-a407-05fb3ac7e570 req-bf67da82-d3f6-4859-999b-4015e0d038a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.191 232432 DEBUG oslo_concurrency.lockutils [req-0c8bed12-ba8f-49c7-a407-05fb3ac7e570 req-bf67da82-d3f6-4859-999b-4015e0d038a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.191 232432 DEBUG nova.compute.manager [req-0c8bed12-ba8f-49c7-a407-05fb3ac7e570 req-bf67da82-d3f6-4859-999b-4015e0d038a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Processing event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.192 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.195 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403357.1955993, 1272be7f-6db1-4b9b-a022-a3a23b9a2faf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.196 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] VM Resumed (Lifecycle Event)
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.200 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.205 232432 INFO nova.virt.libvirt.driver [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Instance spawned successfully.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.206 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:02:37 compute-2 ceph-mon[77138]: pgmap v1844: 305 pgs: 305 active+clean; 249 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.8 MiB/s wr, 149 op/s
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.210 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.213 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.222 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.223 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.223 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.224 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.224 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.225 232432 DEBUG nova.virt.libvirt.driver [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.229 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.283 232432 INFO nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Took 11.56 seconds to spawn the instance on the hypervisor.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.283 232432 DEBUG nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.350 232432 INFO nova.compute.manager [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Took 12.91 seconds to build instance.
Nov 29 08:02:37 compute-2 nova_compute[232428]: 2025-11-29 08:02:37.377 232432 DEBUG oslo_concurrency.lockutils [None req-690ae510-87f0-4162-87a3-fcbfc05ab598 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:38.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:38.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:39 compute-2 ceph-mon[77138]: pgmap v1845: 305 pgs: 305 active+clean; 249 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 133 op/s
Nov 29 08:02:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/648014255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.466 232432 DEBUG nova.compute.manager [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.466 232432 DEBUG oslo_concurrency.lockutils [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.467 232432 DEBUG oslo_concurrency.lockutils [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.467 232432 DEBUG oslo_concurrency.lockutils [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.467 232432 DEBUG nova.compute.manager [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] No waiting events found dispatching network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.468 232432 WARNING nova.compute.manager [req-ac778596-b392-4b2c-a22c-26e44602baab req-5b9739b5-f775-4403-a980-248db5c4b470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received unexpected event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 for instance with vm_state active and task_state None.
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.636 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.637 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.637 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.637 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.638 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.639 232432 INFO nova.compute.manager [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Terminating instance
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.639 232432 DEBUG nova.compute.manager [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:02:39 compute-2 kernel: tap6b9d239d-45 (unregistering): left promiscuous mode
Nov 29 08:02:39 compute-2 NetworkManager[48993]: <info>  [1764403359.6808] device (tap6b9d239d-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.687 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:39 compute-2 ovn_controller[134375]: 2025-11-29T08:02:39Z|00226|binding|INFO|Releasing lport 6b9d239d-45f3-49ea-aae9-3c00c9532305 from this chassis (sb_readonly=0)
Nov 29 08:02:39 compute-2 ovn_controller[134375]: 2025-11-29T08:02:39Z|00227|binding|INFO|Setting lport 6b9d239d-45f3-49ea-aae9-3c00c9532305 down in Southbound
Nov 29 08:02:39 compute-2 ovn_controller[134375]: 2025-11-29T08:02:39Z|00228|binding|INFO|Removing iface tap6b9d239d-45 ovn-installed in OVS
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.690 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:39.696 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9d:73 10.100.0.6'], port_security=['fa:16:3e:7f:9d:73 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1272be7f-6db1-4b9b-a022-a3a23b9a2faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6b9d239d-45f3-49ea-aae9-3c00c9532305) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:39.698 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6b9d239d-45f3-49ea-aae9-3c00c9532305 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis
Nov 29 08:02:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:39.701 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:02:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:39.702 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[81a77142-6fe3-400b-98b7-20458c7ff3f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:39.707 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:39 compute-2 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 29 08:02:39 compute-2 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Consumed 3.297s CPU time.
Nov 29 08:02:39 compute-2 systemd-machined[194747]: Machine qemu-29-instance-00000044 terminated.
Nov 29 08:02:39 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [NOTICE]   (261572) : haproxy version is 2.8.14-c23fe91
Nov 29 08:02:39 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [NOTICE]   (261572) : path to executable is /usr/sbin/haproxy
Nov 29 08:02:39 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [WARNING]  (261572) : Exiting Master process...
Nov 29 08:02:39 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [ALERT]    (261572) : Current worker (261575) exited with code 143 (Terminated)
Nov 29 08:02:39 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[261557]: [WARNING]  (261572) : All workers exited. Exiting... (0)
Nov 29 08:02:39 compute-2 systemd[1]: libpod-dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e.scope: Deactivated successfully.
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.877 232432 INFO nova.virt.libvirt.driver [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Instance destroyed successfully.
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.878 232432 DEBUG nova.objects.instance [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid 1272be7f-6db1-4b9b-a022-a3a23b9a2faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:39 compute-2 podman[261610]: 2025-11-29 08:02:39.881145525 +0000 UTC m=+0.056737996 container died dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.892 232432 DEBUG nova.virt.libvirt.vif [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1210835996',display_name='tempest-DeleteServersTestJSON-server-1210835996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1210835996',id=68,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-5tjkqh61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:37Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=1272be7f-6db1-4b9b-a022-a3a23b9a2faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.893 232432 DEBUG nova.network.os_vif_util [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "address": "fa:16:3e:7f:9d:73", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b9d239d-45", "ovs_interfaceid": "6b9d239d-45f3-49ea-aae9-3c00c9532305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.894 232432 DEBUG nova.network.os_vif_util [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.894 232432 DEBUG os_vif [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.896 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b9d239d-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.903 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:39 compute-2 nova_compute[232428]: 2025-11-29 08:02:39.907 232432 INFO os_vif [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9d:73,bridge_name='br-int',has_traffic_filtering=True,id=6b9d239d-45f3-49ea-aae9-3c00c9532305,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b9d239d-45')
Nov 29 08:02:39 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e-userdata-shm.mount: Deactivated successfully.
Nov 29 08:02:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-5cb01933d10774c6f1c7f4a1c0ab0434a56caedb993372ca15ff615cd6178575-merged.mount: Deactivated successfully.
Nov 29 08:02:39 compute-2 podman[261610]: 2025-11-29 08:02:39.926262216 +0000 UTC m=+0.101854687 container cleanup dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:02:39 compute-2 systemd[1]: libpod-conmon-dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e.scope: Deactivated successfully.
Nov 29 08:02:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:39 compute-2 podman[261662]: 2025-11-29 08:02:39.997939097 +0000 UTC m=+0.044486632 container remove dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.003 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6eb2ff-99a2-4e72-86d5-e2f28c6553a1]: (4, ('Sat Nov 29 08:02:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e)\ndae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e\nSat Nov 29 08:02:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (dae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e)\ndae36b81a2bf98ece274f28326096ce3121c5b6bc09cdab86f8c8053c4c20e9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.005 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10cfa321-2a77-43a6-801f-e1cdde71cd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.006 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:40 compute-2 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.008 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.021 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.023 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[73c402e8-4a8f-487f-a266-c2ae99362655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.034 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80eb112f-975f-4214-8110-4ea1520bfc22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.035 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37afc3f7-a9f5-485d-9b71-be60658f75cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.052 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1dafb5b7-38d9-4576-9a3c-eb21b0bd78b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632261, 'reachable_time': 21883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261681, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.054 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:02:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:40.054 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0b9cc9-ab34-40fe-b4b9-3eafb2fb356f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:40 compute-2 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.191 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:02:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:40.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:02:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4139501658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.324 232432 INFO nova.virt.libvirt.driver [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Deleting instance files /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf_del
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.325 232432 INFO nova.virt.libvirt.driver [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Deletion of /var/lib/nova/instances/1272be7f-6db1-4b9b-a022-a3a23b9a2faf_del complete
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.398 232432 INFO nova.compute.manager [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.399 232432 DEBUG oslo.service.loopingcall [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.399 232432 DEBUG nova.compute.manager [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:02:40 compute-2 nova_compute[232428]: 2025-11-29 08:02:40.400 232432 DEBUG nova.network.neutron [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:02:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:41 compute-2 ceph-mon[77138]: pgmap v1846: 305 pgs: 305 active+clean; 260 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.3 MiB/s wr, 175 op/s
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.623 232432 DEBUG nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-unplugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.624 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.624 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.624 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.624 232432 DEBUG nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] No waiting events found dispatching network-vif-unplugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.624 232432 DEBUG nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-unplugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 DEBUG nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 DEBUG oslo_concurrency.lockutils [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 DEBUG nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] No waiting events found dispatching network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.625 232432 WARNING nova.compute.manager [req-39d6b750-9644-4bc0-9091-1ae9c8c53ad1 req-017a2eca-0cf6-45fc-a1cc-6436d8aef1b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received unexpected event network-vif-plugged-6b9d239d-45f3-49ea-aae9-3c00c9532305 for instance with vm_state active and task_state deleting.
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.627 232432 DEBUG nova.network.neutron [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.660 232432 INFO nova.compute.manager [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Took 1.26 seconds to deallocate network for instance.
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.727 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.728 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.755 232432 DEBUG nova.compute.manager [req-41db5a4e-21d0-44b2-8980-95eafa1a54f0 req-afd9de96-3528-48c1-81df-361be138124a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Received event network-vif-deleted-6b9d239d-45f3-49ea-aae9-3c00c9532305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.797 232432 DEBUG nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.818 232432 DEBUG nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.819 232432 DEBUG nova.compute.provider_tree [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.885 232432 DEBUG nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.917 232432 DEBUG nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:02:41 compute-2 nova_compute[232428]: 2025-11-29 08:02:41.948 232432 DEBUG oslo_concurrency.processutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:42.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3046142382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.373 232432 DEBUG oslo_concurrency.processutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.383 232432 DEBUG nova.compute.provider_tree [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.403 232432 DEBUG nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.429 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.499 232432 INFO nova.scheduler.client.report [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance 1272be7f-6db1-4b9b-a022-a3a23b9a2faf
Nov 29 08:02:42 compute-2 nova_compute[232428]: 2025-11-29 08:02:42.917 232432 DEBUG oslo_concurrency.lockutils [None req-def02596-f78e-4240-883b-5b664b421c50 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "1272be7f-6db1-4b9b-a022-a3a23b9a2faf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.219 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.219 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.220 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.220 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.220 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-2 ceph-mon[77138]: pgmap v1847: 305 pgs: 305 active+clean; 223 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 264 op/s
Nov 29 08:02:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3046142382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2534587847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.698 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.874 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.875 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4590MB free_disk=20.9168701171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.876 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.876 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.950 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.951 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:02:43 compute-2 nova_compute[232428]: 2025-11-29 08:02:43.970 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:44.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2534587847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1047337460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-2 sudo[261750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:44 compute-2 sudo[261750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:44 compute-2 sudo[261750]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3249347582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.450 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.459 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.476 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:44 compute-2 sudo[261775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:44 compute-2 sudo[261775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:44 compute-2 sudo[261775]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.499 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.499 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:44 compute-2 nova_compute[232428]: 2025-11-29 08:02:44.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:45 compute-2 nova_compute[232428]: 2025-11-29 08:02:45.192 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:45 compute-2 ceph-mon[77138]: pgmap v1848: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Nov 29 08:02:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3249347582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2126488096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:45 compute-2 nova_compute[232428]: 2025-11-29 08:02:45.500 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:45 compute-2 podman[261804]: 2025-11-29 08:02:45.678595689 +0000 UTC m=+0.086649711 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 08:02:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:46.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.468 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.469 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.483 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.685 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.686 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.693 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.693 232432 INFO nova.compute.claims [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:02:46 compute-2 nova_compute[232428]: 2025-11-29 08:02:46.819 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4276755868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.284 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.290 232432 DEBUG nova.compute.provider_tree [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.319 232432 DEBUG nova.scheduler.client.report [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.403 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.404 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.483 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.483 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.547 232432 INFO nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.585 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:02:47 compute-2 ceph-mon[77138]: pgmap v1849: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 29 08:02:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4276755868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.801 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.802 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.803 232432 INFO nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Creating image(s)
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.828 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.855 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.885 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.889 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.980 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.981 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.982 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:47 compute-2 nova_compute[232428]: 2025-11-29 08:02:47.982 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.008 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.012 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf cbe783cf-d541-40ff-855f-81dee6d75a4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:48.110 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:48.112 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.163 232432 DEBUG nova.policy [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:02:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:48.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.322 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf cbe783cf-d541-40ff-855f-81dee6d75a4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.402 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:02:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:48.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:48 compute-2 nova_compute[232428]: 2025-11-29 08:02:48.510 232432 DEBUG nova.objects.instance [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid cbe783cf-d541-40ff-855f-81dee6d75a4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.492 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.493 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Ensure instance console log exists: /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.494 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.495 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.495 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:49 compute-2 nova_compute[232428]: 2025-11-29 08:02:49.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:50 compute-2 nova_compute[232428]: 2025-11-29 08:02:50.194 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:02:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:50.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:02:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:50 compute-2 sshd-session[262021]: Invalid user sol from 45.148.10.240 port 42034
Nov 29 08:02:50 compute-2 sshd-session[262021]: Connection closed by invalid user sol 45.148.10.240 port 42034 [preauth]
Nov 29 08:02:51 compute-2 ceph-mon[77138]: pgmap v1850: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 717 KiB/s wr, 187 op/s
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 nova_compute[232428]: 2025-11-29 08:02:51.902 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Successfully created port: 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 08:02:51 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 08:02:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:02:52.113 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:52.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:52 compute-2 ceph-mon[77138]: pgmap v1851: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 718 KiB/s wr, 187 op/s
Nov 29 08:02:54 compute-2 ceph-mon[77138]: pgmap v1852: 305 pgs: 305 active+clean; 266 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 182 op/s
Nov 29 08:02:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:54 compute-2 nova_compute[232428]: 2025-11-29 08:02:54.875 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403359.8740916, 1272be7f-6db1-4b9b-a022-a3a23b9a2faf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:54 compute-2 nova_compute[232428]: 2025-11-29 08:02:54.876 232432 INFO nova.compute.manager [-] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] VM Stopped (Lifecycle Event)
Nov 29 08:02:54 compute-2 nova_compute[232428]: 2025-11-29 08:02:54.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:55 compute-2 nova_compute[232428]: 2025-11-29 08:02:55.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:55 compute-2 nova_compute[232428]: 2025-11-29 08:02:55.213 232432 DEBUG nova.compute.manager [None req-87a1f843-e176-4cdb-8525-7b532ff90efc - - - - - -] [instance: 1272be7f-6db1-4b9b-a022-a3a23b9a2faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:55 compute-2 ceph-mon[77138]: pgmap v1853: 305 pgs: 305 active+clean; 281 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 649 KiB/s rd, 3.8 MiB/s wr, 99 op/s
Nov 29 08:02:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:02:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:56.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:02:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:57 compute-2 ceph-mon[77138]: pgmap v1854: 305 pgs: 305 active+clean; 290 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 29 08:02:57 compute-2 nova_compute[232428]: 2025-11-29 08:02:57.731 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Successfully updated port: 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:02:57 compute-2 nova_compute[232428]: 2025-11-29 08:02:57.989 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:57 compute-2 nova_compute[232428]: 2025-11-29 08:02:57.990 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:57 compute-2 nova_compute[232428]: 2025-11-29 08:02:57.990 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:02:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:02:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:02:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:02:58 compute-2 nova_compute[232428]: 2025-11-29 08:02:58.958 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:02:59 compute-2 ceph-mon[77138]: pgmap v1855: 305 pgs: 305 active+clean; 290 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 3.9 MiB/s wr, 153 op/s
Nov 29 08:02:59 compute-2 nova_compute[232428]: 2025-11-29 08:02:59.908 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-2 nova_compute[232428]: 2025-11-29 08:02:59.951 232432 DEBUG nova.compute.manager [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-changed-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:59 compute-2 nova_compute[232428]: 2025-11-29 08:02:59.952 232432 DEBUG nova.compute.manager [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Refreshing instance network info cache due to event network-changed-1abb2c5f-a3bf-4113-9b62-fa41bed76f07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:59 compute-2 nova_compute[232428]: 2025-11-29 08:02:59.952 232432 DEBUG oslo_concurrency.lockutils [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:00 compute-2 nova_compute[232428]: 2025-11-29 08:03:00.201 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:00.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:01 compute-2 ceph-mon[77138]: pgmap v1856: 305 pgs: 305 active+clean; 293 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 3.9 MiB/s wr, 209 op/s
Nov 29 08:03:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:02.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.249 232432 DEBUG nova.network.neutron [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Updating instance_info_cache with network_info: [{"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.297 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.297 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Instance network_info: |[{"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.297 232432 DEBUG oslo_concurrency.lockutils [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.298 232432 DEBUG nova.network.neutron [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Refreshing network info cache for port 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.301 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Start _get_guest_xml network_info=[{"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:03.308 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:03.309 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:03.309 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.308 232432 WARNING nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.333 232432 DEBUG nova.virt.libvirt.host [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.334 232432 DEBUG nova.virt.libvirt.host [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:03:03 compute-2 ceph-mon[77138]: pgmap v1857: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 506 KiB/s rd, 3.9 MiB/s wr, 262 op/s
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.342 232432 DEBUG nova.virt.libvirt.host [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.343 232432 DEBUG nova.virt.libvirt.host [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.344 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.345 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.345 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.346 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.346 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.346 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.346 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.346 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.347 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.347 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.347 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.347 232432 DEBUG nova.virt.hardware [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.351 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:03 compute-2 podman[262051]: 2025-11-29 08:03:03.660676553 +0000 UTC m=+0.062618650 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:03:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2117727980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.834 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.867 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:03 compute-2 nova_compute[232428]: 2025-11-29 08:03:03.871 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3369606206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.331 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.333 232432 DEBUG nova.virt.libvirt.vif [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1862688048',display_name='tempest-DeleteServersTestJSON-server-1862688048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1862688048',id=69,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-yi0y02et',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:47Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=cbe783cf-d541-40ff-855f-81dee6d75a4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.334 232432 DEBUG nova.network.os_vif_util [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.335 232432 DEBUG nova.network.os_vif_util [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.336 232432 DEBUG nova.objects.instance [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid cbe783cf-d541-40ff-855f-81dee6d75a4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2117727980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3369606206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:04 compute-2 sudo[262112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:04 compute-2 sudo[262112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:04 compute-2 sudo[262112]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:04 compute-2 sudo[262137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:04 compute-2 sudo[262137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:04 compute-2 sudo[262137]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:04 compute-2 nova_compute[232428]: 2025-11-29 08:03:04.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-2 ceph-mon[77138]: pgmap v1858: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 1.2 MiB/s wr, 225 op/s
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.448 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <uuid>cbe783cf-d541-40ff-855f-81dee6d75a4f</uuid>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <name>instance-00000045</name>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:name>tempest-DeleteServersTestJSON-server-1862688048</nova:name>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:03:03</nova:creationTime>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <nova:port uuid="1abb2c5f-a3bf-4113-9b62-fa41bed76f07">
Nov 29 08:03:05 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <system>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="serial">cbe783cf-d541-40ff-855f-81dee6d75a4f</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="uuid">cbe783cf-d541-40ff-855f-81dee6d75a4f</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </system>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <os>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </os>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <features>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </features>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/cbe783cf-d541-40ff-855f-81dee6d75a4f_disk">
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </source>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config">
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </source>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:03:05 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:fe:9d:cf"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <target dev="tap1abb2c5f-a3"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/console.log" append="off"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <video>
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </video>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:03:05 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:03:05 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:03:05 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:03:05 compute-2 nova_compute[232428]: </domain>
Nov 29 08:03:05 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.448 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Preparing to wait for external event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.449 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.449 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.449 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.450 232432 DEBUG nova.virt.libvirt.vif [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1862688048',display_name='tempest-DeleteServersTestJSON-server-1862688048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1862688048',id=69,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-yi0y02et',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:47Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=cbe783cf-d541-40ff-855f-81dee6d75a4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.450 232432 DEBUG nova.network.os_vif_util [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.451 232432 DEBUG nova.network.os_vif_util [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.451 232432 DEBUG os_vif [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.452 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.452 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.452 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.455 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.455 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1abb2c5f-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.455 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1abb2c5f-a3, col_values=(('external_ids', {'iface-id': '1abb2c5f-a3bf-4113-9b62-fa41bed76f07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:9d:cf', 'vm-uuid': 'cbe783cf-d541-40ff-855f-81dee6d75a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.457 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-2 NetworkManager[48993]: <info>  [1764403385.4578] manager: (tap1abb2c5f-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-2 nova_compute[232428]: 2025-11-29 08:03:05.464 232432 INFO os_vif [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3')
Nov 29 08:03:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:07 compute-2 podman[262166]: 2025-11-29 08:03:07.670227242 +0000 UTC m=+0.067362068 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 08:03:07 compute-2 ceph-mon[77138]: pgmap v1859: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 132 KiB/s wr, 182 op/s
Nov 29 08:03:07 compute-2 nova_compute[232428]: 2025-11-29 08:03:07.876 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:07 compute-2 nova_compute[232428]: 2025-11-29 08:03:07.877 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:07 compute-2 nova_compute[232428]: 2025-11-29 08:03:07.877 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:fe:9d:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:03:07 compute-2 nova_compute[232428]: 2025-11-29 08:03:07.877 232432 INFO nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Using config drive
Nov 29 08:03:07 compute-2 nova_compute[232428]: 2025-11-29 08:03:07.986 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:08.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:09 compute-2 ceph-mon[77138]: pgmap v1860: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 67 KiB/s wr, 110 op/s
Nov 29 08:03:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.174 232432 INFO nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Creating config drive at /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.183 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjumu0426 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.235 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.330 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjumu0426" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.364 232432 DEBUG nova.storage.rbd_utils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.367 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.452 232432 DEBUG nova.network.neutron [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Updated VIF entry in instance network info cache for port 1abb2c5f-a3bf-4113-9b62-fa41bed76f07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.453 232432 DEBUG nova.network.neutron [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Updating instance_info_cache with network_info: [{"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.473 232432 DEBUG oslo_concurrency.lockutils [req-7632077c-a27b-4e7b-90f7-a71f1573a87b req-27df9463-65d9-4777-9f9f-ba0c376e57be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-cbe783cf-d541-40ff-855f-81dee6d75a4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:10.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.552 232432 DEBUG oslo_concurrency.processutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config cbe783cf-d541-40ff-855f-81dee6d75a4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.552 232432 INFO nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Deleting local config drive /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f/disk.config because it was imported into RBD.
Nov 29 08:03:10 compute-2 kernel: tap1abb2c5f-a3: entered promiscuous mode
Nov 29 08:03:10 compute-2 NetworkManager[48993]: <info>  [1764403390.6459] manager: (tap1abb2c5f-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 08:03:10 compute-2 ovn_controller[134375]: 2025-11-29T08:03:10Z|00229|binding|INFO|Claiming lport 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 for this chassis.
Nov 29 08:03:10 compute-2 ovn_controller[134375]: 2025-11-29T08:03:10Z|00230|binding|INFO|1abb2c5f-a3bf-4113-9b62-fa41bed76f07: Claiming fa:16:3e:fe:9d:cf 10.100.0.9
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.646 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.660 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:9d:cf 10.100.0.9'], port_security=['fa:16:3e:fe:9d:cf 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cbe783cf-d541-40ff-855f-81dee6d75a4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=1abb2c5f-a3bf-4113-9b62-fa41bed76f07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.662 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.665 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:03:10 compute-2 ovn_controller[134375]: 2025-11-29T08:03:10Z|00231|binding|INFO|Setting lport 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 ovn-installed in OVS
Nov 29 08:03:10 compute-2 ovn_controller[134375]: 2025-11-29T08:03:10Z|00232|binding|INFO|Setting lport 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 up in Southbound
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.680 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-2 nova_compute[232428]: 2025-11-29 08:03:10.683 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.685 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0d0e86-5bb5-4956-a3b1-e05161ef27a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.686 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.690 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.690 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5ac088-5fd4-4772-909f-c7d54a1d5b76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.691 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfaf1ac-909f-47ac-b7df-66ae712ca663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 systemd-machined[194747]: New machine qemu-30-instance-00000045.
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.713 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4ab295-d322-422b-9a48-23dced3fc187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 systemd[1]: Started Virtual Machine qemu-30-instance-00000045.
Nov 29 08:03:10 compute-2 systemd-udevd[262259]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.730 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4eb79e-7b8d-4ff7-8498-0607b2e77b1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 NetworkManager[48993]: <info>  [1764403390.7353] device (tap1abb2c5f-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:03:10 compute-2 NetworkManager[48993]: <info>  [1764403390.7364] device (tap1abb2c5f-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.772 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1010736e-055f-44bb-bcbc-ec0931d142a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.781 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6deea5-1346-4ae0-b30f-a053258b438d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 systemd-udevd[262263]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:03:10 compute-2 NetworkManager[48993]: <info>  [1764403390.7841] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.832 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[49407c5b-03bf-4535-8a7f-7d819c9ca9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.836 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bac468-4e8a-4cf2-9eda-105c3a63ad34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 NetworkManager[48993]: <info>  [1764403390.8685] device (tapd5e42602-d0): carrier: link connected
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.880 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[48cf3c60-9553-4aa1-a29f-10461d529929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.912 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7a12f860-d3f9-4aef-947e-21c7a96c5976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635713, 'reachable_time': 29964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262290, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.942 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f38de48-9e03-48e8-b90a-5c9d25547895]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635713, 'tstamp': 635713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262291, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:10.969 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6c9687-2f44-4884-a9ae-11926df48bd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635713, 'reachable_time': 29964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262306, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.022 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9306e2a7-bcc0-4c6e-8503-15e3deb14ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.105 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c679a8c-533b-4079-8810-a58dbd8a092f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.107 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.108 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.109 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:11 compute-2 NetworkManager[48993]: <info>  [1764403391.1126] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 08:03:11 compute-2 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.115 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.116 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:11 compute-2 ovn_controller[134375]: 2025-11-29T08:03:11Z|00233|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 08:03:11 compute-2 ceph-mon[77138]: pgmap v1861: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 67 KiB/s wr, 110 op/s
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.149 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.151 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.152 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[afbee416-1eb3-4d7e-a73c-d888c25f8cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.153 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:03:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:11.154 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.168 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403391.1677964, cbe783cf-d541-40ff-855f-81dee6d75a4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.168 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] VM Started (Lifecycle Event)
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.188 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.193 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403391.1687262, cbe783cf-d541-40ff-855f-81dee6d75a4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.193 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] VM Paused (Lifecycle Event)
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.216 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.220 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.255 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:03:11 compute-2 podman[262367]: 2025-11-29 08:03:11.582573769 +0000 UTC m=+0.062841216 container create b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:03:11 compute-2 systemd[1]: Started libpod-conmon-b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb.scope.
Nov 29 08:03:11 compute-2 podman[262367]: 2025-11-29 08:03:11.553462409 +0000 UTC m=+0.033729876 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:03:11 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:03:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed426eecc116a30f066fd1a661c0492f836a0fba2a11bd5823b441371a266d2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:11 compute-2 podman[262367]: 2025-11-29 08:03:11.673691209 +0000 UTC m=+0.153958676 container init b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:03:11 compute-2 podman[262367]: 2025-11-29 08:03:11.681927777 +0000 UTC m=+0.162195224 container start b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:03:11 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [NOTICE]   (262386) : New worker (262388) forked
Nov 29 08:03:11 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [NOTICE]   (262386) : Loading success.
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.944 232432 DEBUG nova.compute.manager [req-38ea6172-9fbc-463b-9f7e-8e3530f6fd78 req-19b82032-c9df-440c-9bd0-5a6cc31f52d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.945 232432 DEBUG oslo_concurrency.lockutils [req-38ea6172-9fbc-463b-9f7e-8e3530f6fd78 req-19b82032-c9df-440c-9bd0-5a6cc31f52d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.946 232432 DEBUG oslo_concurrency.lockutils [req-38ea6172-9fbc-463b-9f7e-8e3530f6fd78 req-19b82032-c9df-440c-9bd0-5a6cc31f52d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.946 232432 DEBUG oslo_concurrency.lockutils [req-38ea6172-9fbc-463b-9f7e-8e3530f6fd78 req-19b82032-c9df-440c-9bd0-5a6cc31f52d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.947 232432 DEBUG nova.compute.manager [req-38ea6172-9fbc-463b-9f7e-8e3530f6fd78 req-19b82032-c9df-440c-9bd0-5a6cc31f52d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Processing event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.948 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.951 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403391.951513, cbe783cf-d541-40ff-855f-81dee6d75a4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.952 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] VM Resumed (Lifecycle Event)
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.954 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.958 232432 INFO nova.virt.libvirt.driver [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Instance spawned successfully.
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.958 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.993 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.996 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.997 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.997 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.998 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:11 compute-2 nova_compute[232428]: 2025-11-29 08:03:11.999 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.000 232432 DEBUG nova.virt.libvirt.driver [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.007 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.044 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.069 232432 INFO nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Took 24.27 seconds to spawn the instance on the hypervisor.
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.070 232432 DEBUG nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.152 232432 INFO nova.compute.manager [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Took 25.50 seconds to build instance.
Nov 29 08:03:12 compute-2 nova_compute[232428]: 2025-11-29 08:03:12.180 232432 DEBUG oslo_concurrency.lockutils [None req-bd4c18cd-039c-4237-865a-5b49ae7751b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:12.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:12.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:13 compute-2 ceph-mon[77138]: pgmap v1862: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 53 op/s
Nov 29 08:03:13 compute-2 sudo[262398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:13 compute-2 sudo[262398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:13 compute-2 sudo[262398]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:13 compute-2 sudo[262423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:13 compute-2 sudo[262423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:13 compute-2 sudo[262423]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:13 compute-2 sudo[262448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:13 compute-2 sudo[262448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:13 compute-2 sudo[262448]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:13 compute-2 sudo[262473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:03:13 compute-2 sudo[262473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:14.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.618 232432 DEBUG nova.compute.manager [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.618 232432 DEBUG oslo_concurrency.lockutils [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.619 232432 DEBUG oslo_concurrency.lockutils [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.619 232432 DEBUG oslo_concurrency.lockutils [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.620 232432 DEBUG nova.compute.manager [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] No waiting events found dispatching network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:14 compute-2 nova_compute[232428]: 2025-11-29 08:03:14.620 232432 WARNING nova.compute.manager [req-0488cda8-ea24-449d-a475-f059789148ac req-1f9f4aed-4b20-4341-994f-21581b91ea15 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received unexpected event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 for instance with vm_state active and task_state None.
Nov 29 08:03:14 compute-2 podman[262569]: 2025-11-29 08:03:14.663611898 +0000 UTC m=+0.170726320 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:14 compute-2 podman[262569]: 2025-11-29 08:03:14.82133398 +0000 UTC m=+0.328448412 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:15 compute-2 ceph-mon[77138]: pgmap v1863: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 14 KiB/s wr, 4 op/s
Nov 29 08:03:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:15 compute-2 nova_compute[232428]: 2025-11-29 08:03:15.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:15 compute-2 nova_compute[232428]: 2025-11-29 08:03:15.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:15 compute-2 podman[262725]: 2025-11-29 08:03:15.796167058 +0000 UTC m=+0.267268700 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:03:16 compute-2 podman[262725]: 2025-11-29 08:03:16.149939853 +0000 UTC m=+0.621041525 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:03:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:16 compute-2 podman[262760]: 2025-11-29 08:03:16.463339624 +0000 UTC m=+0.155616958 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:03:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:16.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:16 compute-2 podman[262817]: 2025-11-29 08:03:16.647245036 +0000 UTC m=+0.199357756 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Nov 29 08:03:16 compute-2 podman[262817]: 2025-11-29 08:03:16.659361065 +0000 UTC m=+0.211473725 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.openshift.expose-services=, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2023-02-22T09:23:20)
Nov 29 08:03:16 compute-2 sudo[262473]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:16 compute-2 sudo[262848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:16 compute-2 sudo[262848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:16 compute-2 sudo[262848]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:16 compute-2 sudo[262873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:16 compute-2 sudo[262873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:16 compute-2 sudo[262873]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:16 compute-2 sudo[262898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:16 compute-2 sudo[262898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:16 compute-2 sudo[262898]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:17 compute-2 sudo[262923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:03:17 compute-2 sudo[262923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.314 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.315 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.332 232432 DEBUG nova.objects.instance [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'flavor' on Instance uuid cbe783cf-d541-40ff-855f-81dee6d75a4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.403 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:17 compute-2 sudo[262923]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.559752) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397559800, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1468, "num_deletes": 254, "total_data_size": 3227910, "memory_usage": 3273176, "flush_reason": "Manual Compaction"}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397571714, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2117411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36107, "largest_seqno": 37570, "table_properties": {"data_size": 2111085, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13942, "raw_average_key_size": 20, "raw_value_size": 2098245, "raw_average_value_size": 3090, "num_data_blocks": 154, "num_entries": 679, "num_filter_entries": 679, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403286, "oldest_key_time": 1764403286, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 11996 microseconds, and 4602 cpu microseconds.
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.571752) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2117411 bytes OK
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.571769) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.574394) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.574406) EVENT_LOG_v1 {"time_micros": 1764403397574402, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.574433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3221052, prev total WAL file size 3221052, number of live WAL files 2.
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.575200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(2067KB)], [66(9232KB)]
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397575246, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11571186, "oldest_snapshot_seqno": -1}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6563 keys, 9585793 bytes, temperature: kUnknown
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397647605, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9585793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9542416, "index_size": 25857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 169825, "raw_average_key_size": 25, "raw_value_size": 9425016, "raw_average_value_size": 1436, "num_data_blocks": 1027, "num_entries": 6563, "num_filter_entries": 6563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.647934) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9585793 bytes
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.649952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.7 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 7092, records dropped: 529 output_compression: NoCompression
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.649974) EVENT_LOG_v1 {"time_micros": 1764403397649963, "job": 40, "event": "compaction_finished", "compaction_time_micros": 72454, "compaction_time_cpu_micros": 20553, "output_level": 6, "num_output_files": 1, "total_output_size": 9585793, "num_input_records": 7092, "num_output_records": 6563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397650651, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397653205, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.575123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.653307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.653339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.653341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.653343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:03:17.653345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.672 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.673 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.673 232432 INFO nova.compute.manager [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Attaching volume 1b84d2e0-dc4b-4e9d-96ef-b4698ba76935 to /dev/vdb
Nov 29 08:03:17 compute-2 ceph-mon[77138]: pgmap v1864: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 74 op/s
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:03:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.872 232432 DEBUG os_brick.utils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.875 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.897 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.897 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[eed530b8-fa96-4d18-83c2-7db785f1d2d8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.900 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.914 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.914 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[2636d059-1543-46e6-9a9e-dc786b906bda]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.918 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.933 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.933 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[a816a5e2-b352-4e9d-88b7-3191478ead8b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.935 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d27c50-48d1-46fd-94c9-78402be6f181]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.937 232432 DEBUG oslo_concurrency.processutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.990 232432 DEBUG oslo_concurrency.processutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "nvme version" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.995 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.995 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.996 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.997 232432 DEBUG os_brick.utils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] <== get_connector_properties: return (123ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:03:17 compute-2 nova_compute[232428]: 2025-11-29 08:03:17.998 232432 DEBUG nova.virt.block_device [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Updating existing volume attachment record: 3593a64a-a8f2-4d16-88e3-af3f4e7e2f68 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:18.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:18.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:19 compute-2 ceph-mon[77138]: pgmap v1865: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 74 op/s
Nov 29 08:03:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2705964177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.043 232432 DEBUG nova.objects.instance [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'flavor' on Instance uuid cbe783cf-d541-40ff-855f-81dee6d75a4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.074 232432 DEBUG nova.virt.libvirt.driver [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Attempting to attach volume 1b84d2e0-dc4b-4e9d-96ef-b4698ba76935 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.080 232432 DEBUG nova.virt.libvirt.guest [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:03:19 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-1b84d2e0-dc4b-4e9d-96ef-b4698ba76935">
Nov 29 08:03:19 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   </source>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:03:19 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:03:19 compute-2 nova_compute[232428]:   <serial>1b84d2e0-dc4b-4e9d-96ef-b4698ba76935</serial>
Nov 29 08:03:19 compute-2 nova_compute[232428]: </disk>
Nov 29 08:03:19 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.486 232432 DEBUG nova.virt.libvirt.driver [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.487 232432 DEBUG nova.virt.libvirt.driver [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.488 232432 DEBUG nova.virt.libvirt.driver [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.492 232432 DEBUG nova.virt.libvirt.driver [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:fe:9d:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:03:19 compute-2 nova_compute[232428]: 2025-11-29 08:03:19.754 232432 DEBUG oslo_concurrency.lockutils [None req-b2869f77-af1c-41dc-b63e-5093c10afa17 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.239 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.448 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.449 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.450 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.451 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.452 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.454 232432 INFO nova.compute.manager [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Terminating instance
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.456 232432 DEBUG nova.compute.manager [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.460 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 kernel: tap1abb2c5f-a3 (unregistering): left promiscuous mode
Nov 29 08:03:20 compute-2 NetworkManager[48993]: <info>  [1764403400.5080] device (tap1abb2c5f-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:03:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:20.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.525 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 ovn_controller[134375]: 2025-11-29T08:03:20Z|00234|binding|INFO|Releasing lport 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 from this chassis (sb_readonly=0)
Nov 29 08:03:20 compute-2 ovn_controller[134375]: 2025-11-29T08:03:20Z|00235|binding|INFO|Setting lport 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 down in Southbound
Nov 29 08:03:20 compute-2 ovn_controller[134375]: 2025-11-29T08:03:20Z|00236|binding|INFO|Removing iface tap1abb2c5f-a3 ovn-installed in OVS
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.541 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:9d:cf 10.100.0.9'], port_security=['fa:16:3e:fe:9d:cf 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cbe783cf-d541-40ff-855f-81dee6d75a4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=1abb2c5f-a3bf-4113-9b62-fa41bed76f07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.544 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 1abb2c5f-a3bf-4113-9b62-fa41bed76f07 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.546 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.547 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbdd8bb-86b7-4491-913a-44682b2b13d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.547 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 29 08:03:20 compute-2 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000045.scope: Consumed 9.066s CPU time.
Nov 29 08:03:20 compute-2 systemd-machined[194747]: Machine qemu-30-instance-00000045 terminated.
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [NOTICE]   (262386) : haproxy version is 2.8.14-c23fe91
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [NOTICE]   (262386) : path to executable is /usr/sbin/haproxy
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [WARNING]  (262386) : Exiting Master process...
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [WARNING]  (262386) : Exiting Master process...
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [ALERT]    (262386) : Current worker (262388) exited with code 143 (Terminated)
Nov 29 08:03:20 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[262382]: [WARNING]  (262386) : All workers exited. Exiting... (0)
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.676 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 systemd[1]: libpod-b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb.scope: Deactivated successfully.
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.681 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 podman[263034]: 2025-11-29 08:03:20.685363727 +0000 UTC m=+0.045248817 container died b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.697 232432 INFO nova.virt.libvirt.driver [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Instance destroyed successfully.
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.697 232432 DEBUG nova.objects.instance [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid cbe783cf-d541-40ff-855f-81dee6d75a4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.712 232432 DEBUG nova.virt.libvirt.vif [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1862688048',display_name='tempest-DeleteServersTestJSON-server-1862688048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1862688048',id=69,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:03:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-yi0y02et',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:03:12Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=cbe783cf-d541-40ff-855f-81dee6d75a4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.713 232432 DEBUG nova.network.os_vif_util [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "address": "fa:16:3e:fe:9d:cf", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1abb2c5f-a3", "ovs_interfaceid": "1abb2c5f-a3bf-4113-9b62-fa41bed76f07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:20 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb-userdata-shm.mount: Deactivated successfully.
Nov 29 08:03:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-ed426eecc116a30f066fd1a661c0492f836a0fba2a11bd5823b441371a266d2f-merged.mount: Deactivated successfully.
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.719 232432 DEBUG nova.network.os_vif_util [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.719 232432 DEBUG os_vif [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.722 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1abb2c5f-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:20 compute-2 podman[263034]: 2025-11-29 08:03:20.724869272 +0000 UTC m=+0.084754362 container cleanup b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.723 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.726 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.728 232432 INFO os_vif [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:9d:cf,bridge_name='br-int',has_traffic_filtering=True,id=1abb2c5f-a3bf-4113-9b62-fa41bed76f07,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1abb2c5f-a3')
Nov 29 08:03:20 compute-2 systemd[1]: libpod-conmon-b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb.scope: Deactivated successfully.
Nov 29 08:03:20 compute-2 podman[263069]: 2025-11-29 08:03:20.785004753 +0000 UTC m=+0.039150826 container remove b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.791 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6d5f98-acd6-45c6-a0b7-0e748a3b5742]: (4, ('Sat Nov 29 08:03:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb)\nb0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb\nSat Nov 29 08:03:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (b0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb)\nb0f8cae01d5cf6d0de24cdea149ccf1a6303a1cf19c1b504b063720cdac6fddb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[30df18b7-4ef9-4f02-917e-b7d2cc9f936f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.794 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 08:03:20 compute-2 nova_compute[232428]: 2025-11-29 08:03:20.815 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.818 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[274e56db-4b21-4b0e-8732-c37c6fb12886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.833 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5b0ae8-0fe8-476d-b387-aed069d6f917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.834 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[924298d9-211c-40f6-9a70-14a08745731f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.852 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[875a94d3-7ae9-4d3f-9010-d2ade250e4b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635703, 'reachable_time': 29972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263102, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.854 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:03:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:20.854 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[22930e5b-f0e6-4b0c-b594-2f5671a7fdc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:20 compute-2 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 08:03:21 compute-2 ceph-mon[77138]: pgmap v1866: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.071 232432 DEBUG nova.compute.manager [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-unplugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.071 232432 DEBUG oslo_concurrency.lockutils [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.072 232432 DEBUG oslo_concurrency.lockutils [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.073 232432 DEBUG oslo_concurrency.lockutils [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.073 232432 DEBUG nova.compute.manager [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] No waiting events found dispatching network-vif-unplugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.074 232432 DEBUG nova.compute.manager [req-b5896bc0-562f-4a11-a396-74e2c64552f5 req-dd6f015c-f7e1-4025-9e02-da9f3ca912f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-unplugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.203 232432 INFO nova.virt.libvirt.driver [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Deleting instance files /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f_del
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.205 232432 INFO nova.virt.libvirt.driver [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Deletion of /var/lib/nova/instances/cbe783cf-d541-40ff-855f-81dee6d75a4f_del complete
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.280 232432 INFO nova.compute.manager [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.281 232432 DEBUG oslo.service.loopingcall [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.282 232432 DEBUG nova.compute.manager [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:03:21 compute-2 nova_compute[232428]: 2025-11-29 08:03:21.282 232432 DEBUG nova.network.neutron [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:03:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:22.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:22.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:22 compute-2 nova_compute[232428]: 2025-11-29 08:03:22.792 232432 DEBUG nova.network.neutron [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:22 compute-2 nova_compute[232428]: 2025-11-29 08:03:22.831 232432 INFO nova.compute.manager [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Took 1.55 seconds to deallocate network for instance.
Nov 29 08:03:23 compute-2 ceph-mon[77138]: pgmap v1867: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 78 op/s
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.396 232432 INFO nova.compute.manager [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Took 0.56 seconds to detach 1 volumes for instance.
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.401 232432 DEBUG nova.compute.manager [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.401 232432 DEBUG oslo_concurrency.lockutils [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.402 232432 DEBUG oslo_concurrency.lockutils [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.402 232432 DEBUG oslo_concurrency.lockutils [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.402 232432 DEBUG nova.compute.manager [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] No waiting events found dispatching network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.402 232432 WARNING nova.compute.manager [req-829a33ef-d017-4c95-8d4d-f279a16a334f req-034687b8-b6fd-43eb-af88-5d8b2a737403 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received unexpected event network-vif-plugged-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 for instance with vm_state active and task_state deleting.
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.481 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.482 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:23 compute-2 nova_compute[232428]: 2025-11-29 08:03:23.567 232432 DEBUG oslo_concurrency.processutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1710574528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.032 232432 DEBUG oslo_concurrency.processutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.041 232432 DEBUG nova.compute.provider_tree [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.074 232432 DEBUG nova.scheduler.client.report [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.394 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.471 232432 INFO nova.scheduler.client.report [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance cbe783cf-d541-40ff-855f-81dee6d75a4f
Nov 29 08:03:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:24.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:24 compute-2 nova_compute[232428]: 2025-11-29 08:03:24.566 232432 DEBUG oslo_concurrency.lockutils [None req-808650b4-b608-48bb-bfd2-e4ce7cf92316 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "cbe783cf-d541-40ff-855f-81dee6d75a4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:24 compute-2 sudo[263128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:24 compute-2 sudo[263128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:24 compute-2 sudo[263128]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:24 compute-2 sudo[263153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:24 compute-2 sudo[263153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:24 compute-2 sudo[263153]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1710574528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:25 compute-2 nova_compute[232428]: 2025-11-29 08:03:25.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:25 compute-2 sudo[263179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:25 compute-2 sudo[263179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:25 compute-2 sudo[263179]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-2 sudo[263204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:03:25 compute-2 sudo[263204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:25 compute-2 sudo[263204]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-2 nova_compute[232428]: 2025-11-29 08:03:25.724 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:25 compute-2 ceph-mon[77138]: pgmap v1868: 305 pgs: 305 active+clean; 288 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Nov 29 08:03:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:03:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:26 compute-2 ceph-mon[77138]: pgmap v1869: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 101 op/s
Nov 29 08:03:27 compute-2 nova_compute[232428]: 2025-11-29 08:03:27.217 232432 DEBUG nova.compute.manager [req-b3bb028a-8435-44c7-87ba-151a1b83620f req-81a166dd-5f41-46ea-a586-d2a10cbbd83b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Received event network-vif-deleted-1abb2c5f-a3bf-4113-9b62-fa41bed76f07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740418286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740418286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1740418286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1740418286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:28.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:29 compute-2 ceph-mon[77138]: pgmap v1870: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 2.6 KiB/s wr, 31 op/s
Nov 29 08:03:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:30 compute-2 nova_compute[232428]: 2025-11-29 08:03:30.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:30.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:30 compute-2 nova_compute[232428]: 2025-11-29 08:03:30.726 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:31 compute-2 ceph-mon[77138]: pgmap v1871: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 6.2 KiB/s wr, 31 op/s
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.195 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.195 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.216 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.300 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.301 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.306 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.306 232432 INFO nova.compute.claims [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:03:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:32.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:32 compute-2 nova_compute[232428]: 2025-11-29 08:03:32.490 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4142171944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.037 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.044 232432 DEBUG nova.compute.provider_tree [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.060 232432 DEBUG nova.scheduler.client.report [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.109 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.110 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.176 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.177 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.205 232432 INFO nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.289 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.331 232432 INFO nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Booting with volume 9f626c2c-bb56-4532-9935-1e7b39440d48 at /dev/vda
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.562 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.563 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.586 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.586 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[70f83329-87b1-4989-98c5-b81ab2b12ec0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.588 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.604 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.604 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[03cc5a85-10ba-420f-bb40-1ebfd4060168]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.605 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:33 compute-2 ceph-mon[77138]: pgmap v1872: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 5.4 KiB/s wr, 30 op/s
Nov 29 08:03:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4142171944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.614 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.615 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff32aee-ebcb-4b5f-8c6f-f063a10f8fc3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.616 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[351fef65-031d-4767-b5cb-5f1056df70ce]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.617 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.657 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.660 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.660 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.661 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.661 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:03:33 compute-2 nova_compute[232428]: 2025-11-29 08:03:33.662 232432 DEBUG nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating existing volume attachment record: 56865f30-877a-4878-a3a9-ca202c8b2d30 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2120497893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:34 compute-2 podman[263262]: 2025-11-29 08:03:34.657418265 +0000 UTC m=+0.060944967 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:03:34 compute-2 nova_compute[232428]: 2025-11-29 08:03:34.838 232432 DEBUG nova.policy [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd77e751616c9473786c8ac7ae2d34d20', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:03:34 compute-2 nova_compute[232428]: 2025-11-29 08:03:34.897 232432 INFO nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Booting with volume 905030d9-8042-49ed-9c6d-0283f1caf956 at /dev/vdb
Nov 29 08:03:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.039 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.040 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.050 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.050 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc81fbd-0f99-4845-8c3f-35b2584280a3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.052 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.058 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.058 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc45c83-7997-4464-94c4-383e33b53a44]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.059 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.066 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.066 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2a0280-403b-4743-b1bb-69e787cbe066]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.067 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[c181993d-dddd-4043-aeff-8edc1163f260]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.067 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.097 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.099 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.100 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.100 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.100 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.101 232432 DEBUG nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating existing volume attachment record: bc3d1f07-343f-4d19-bcf1-6f0f0bad5e87 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.616 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully created port: 74928c3b-944c-4f17-b1b2-de33221d05ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:35 compute-2 ceph-mon[77138]: pgmap v1873: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.689 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403400.6877594, cbe783cf-d541-40ff-855f-81dee6d75a4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.689 232432 INFO nova.compute.manager [-] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] VM Stopped (Lifecycle Event)
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.728 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:35 compute-2 nova_compute[232428]: 2025-11-29 08:03:35.732 232432 DEBUG nova.compute.manager [None req-05009a83-116e-4ac0-bdba-5c4f521e30e6 - - - - - -] [instance: cbe783cf-d541-40ff-855f-81dee6d75a4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/499853300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.296 232432 INFO nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Booting with volume f07a118d-ea67-4113-8ddf-ed6afb5e3f24 at /dev/vdc
Nov 29 08:03:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.448 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.450 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.494 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.494 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[2462047b-2fa2-4ce8-b94e-1ff27a2f4386]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.496 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.510 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.511 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[330d6f41-d38e-4abb-8c22-5ba784ee847f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.512 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.523 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.524 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[58eb13e8-2b72-4792-8641-bda7b94ef65d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.525 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eaef87-495e-4a67-bf17-637df945c027]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.526 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.564 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.567 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.568 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.568 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.569 232432 DEBUG os_brick.utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] <== get_connector_properties: return (119ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:03:36 compute-2 nova_compute[232428]: 2025-11-29 08:03:36.569 232432 DEBUG nova.virt.block_device [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating existing volume attachment record: 32f80b3c-4f6a-4b33-bd61-accbf9121901 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/499853300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.257 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.257 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.290 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully created port: 765356c7-caab-46eb-830e-4a979bbba648 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.693 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.695 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.696 232432 INFO nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Creating image(s)
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.697 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.697 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Ensure instance console log exists: /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.697 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.698 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:37 compute-2 nova_compute[232428]: 2025-11-29 08:03:37.698 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:37 compute-2 ceph-mon[77138]: pgmap v1874: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Nov 29 08:03:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4271457649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3058818878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:38 compute-2 nova_compute[232428]: 2025-11-29 08:03:38.186 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully created port: e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:38.347 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:38 compute-2 nova_compute[232428]: 2025-11-29 08:03:38.348 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:38.350 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:03:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:38 compute-2 podman[263299]: 2025-11-29 08:03:38.70298808 +0000 UTC m=+0.090732809 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:03:39 compute-2 ceph-mon[77138]: pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Nov 29 08:03:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2661788479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:40 compute-2 nova_compute[232428]: 2025-11-29 08:03:40.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:40 compute-2 nova_compute[232428]: 2025-11-29 08:03:40.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:40.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:40 compute-2 nova_compute[232428]: 2025-11-29 08:03:40.630 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully created port: be573f34-a335-4f5c-a6f2-dd0e149534ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:40 compute-2 nova_compute[232428]: 2025-11-29 08:03:40.731 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2289387352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:41 compute-2 nova_compute[232428]: 2025-11-29 08:03:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:41 compute-2 ceph-mon[77138]: pgmap v1876: 305 pgs: 305 active+clean; 272 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 650 KiB/s wr, 2 op/s
Nov 29 08:03:41 compute-2 nova_compute[232428]: 2025-11-29 08:03:41.838 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully created port: 62c5edb0-a405-4b4d-92c0-37a8154c2dbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:42.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3670933379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:43 compute-2 nova_compute[232428]: 2025-11-29 08:03:43.753 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: 74928c3b-944c-4f17-b1b2-de33221d05ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:43 compute-2 ceph-mon[77138]: pgmap v1877: 305 pgs: 305 active+clean; 293 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 08:03:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2714462634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.229 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:44.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/671861190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.658 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3840238596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2959529658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/671861190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.892 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.894 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4586MB free_disk=20.876293182373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.894 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.895 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:44 compute-2 sudo[263344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:44 compute-2 sudo[263344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:44 compute-2 sudo[263344]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:44 compute-2 sudo[263369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:44 compute-2 sudo[263369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:44 compute-2 sudo[263369]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.987 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.987 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:03:44 compute-2 nova_compute[232428]: 2025-11-29 08:03:44.987 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.024 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.062 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.208 232432 DEBUG nova.compute.manager [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.208 232432 DEBUG nova.compute.manager [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-74928c3b-944c-4f17-b1b2-de33221d05ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.209 232432 DEBUG oslo_concurrency.lockutils [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.209 232432 DEBUG oslo_concurrency.lockutils [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.209 232432 DEBUG nova.network.neutron [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port 74928c3b-944c-4f17-b1b2-de33221d05ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.316 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1302705688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.454 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.460 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.483 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.506 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.507 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.531 232432 DEBUG nova.network.neutron [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:45 compute-2 nova_compute[232428]: 2025-11-29 08:03:45.733 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:45 compute-2 ceph-mon[77138]: pgmap v1878: 305 pgs: 305 active+clean; 258 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 08:03:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1476560591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1302705688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:46 compute-2 nova_compute[232428]: 2025-11-29 08:03:46.084 232432 DEBUG nova.network.neutron [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:46 compute-2 nova_compute[232428]: 2025-11-29 08:03:46.160 232432 DEBUG oslo_concurrency.lockutils [req-86fd87a0-8920-48cd-9566-cfbd67116f5c req-ed8ab035-b631-40e9-a088-174679e22811 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:46 compute-2 nova_compute[232428]: 2025-11-29 08:03:46.439 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: fbf3611a-6024-4f95-8880-d580bf23f660 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:46 compute-2 nova_compute[232428]: 2025-11-29 08:03:46.507 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:46.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:46 compute-2 podman[263417]: 2025-11-29 08:03:46.732557143 +0000 UTC m=+0.129806971 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:03:46 compute-2 ceph-mon[77138]: pgmap v1879: 305 pgs: 305 active+clean; 213 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.672 232432 DEBUG nova.compute.manager [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.672 232432 DEBUG nova.compute.manager [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.673 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.674 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.674 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2335680283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.942 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:47 compute-2 nova_compute[232428]: 2025-11-29 08:03:47.980 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: 765356c7-caab-46eb-830e-4a979bbba648 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:03:48.352 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.388 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.405 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.406 232432 DEBUG nova.compute.manager [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-fbf3611a-6024-4f95-8880-d580bf23f660 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.406 232432 DEBUG nova.compute.manager [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-fbf3611a-6024-4f95-8880-d580bf23f660. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.407 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.407 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.407 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port fbf3611a-6024-4f95-8880-d580bf23f660 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:03:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:48.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:03:48 compute-2 nova_compute[232428]: 2025-11-29 08:03:48.644 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:48 compute-2 ceph-mon[77138]: pgmap v1880: 305 pgs: 305 active+clean; 213 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.592 232432 DEBUG nova.network.neutron [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.610 232432 DEBUG oslo_concurrency.lockutils [req-17b79410-1e1a-4cab-8bb6-ea5c1442d226 req-4dc10b51-9fb4-4a85-b2a7-fe4d8ff6e7e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.631 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.989 232432 DEBUG nova.compute.manager [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.990 232432 DEBUG nova.compute.manager [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-765356c7-caab-46eb-830e-4a979bbba648. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.990 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.991 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:49 compute-2 nova_compute[232428]: 2025-11-29 08:03:49.991 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port 765356c7-caab-46eb-830e-4a979bbba648 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:50 compute-2 nova_compute[232428]: 2025-11-29 08:03:50.291 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:50 compute-2 nova_compute[232428]: 2025-11-29 08:03:50.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:50.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:50 compute-2 nova_compute[232428]: 2025-11-29 08:03:50.736 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:50 compute-2 nova_compute[232428]: 2025-11-29 08:03:50.749 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: be573f34-a335-4f5c-a6f2-dd0e149534ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:50 compute-2 ceph-mon[77138]: pgmap v1881: 305 pgs: 305 active+clean; 187 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.104 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.167 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.168 232432 DEBUG nova.compute.manager [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.168 232432 DEBUG nova.compute.manager [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.168 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.168 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.168 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:51 compute-2 nova_compute[232428]: 2025-11-29 08:03:51.420 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.152 232432 DEBUG nova.compute.manager [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.153 232432 DEBUG nova.compute.manager [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-be573f34-a335-4f5c-a6f2-dd0e149534ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.153 232432 DEBUG oslo_concurrency.lockutils [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.154 232432 DEBUG nova.network.neutron [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.195 232432 DEBUG oslo_concurrency.lockutils [req-d069eaa9-cfdc-40ee-b165-7bcec24fe935 req-1e18f476-c030-4e9b-8b27-4b2a87fb4ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.195 232432 DEBUG oslo_concurrency.lockutils [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.196 232432 DEBUG nova.network.neutron [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port be573f34-a335-4f5c-a6f2-dd0e149534ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.524 232432 DEBUG nova.network.neutron [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.566 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Successfully updated port: 62c5edb0-a405-4b4d-92c0-37a8154c2dbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:52.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:52 compute-2 nova_compute[232428]: 2025-11-29 08:03:52.600 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:53 compute-2 ceph-mon[77138]: pgmap v1882: 305 pgs: 305 active+clean; 134 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Nov 29 08:03:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/472195757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:53 compute-2 nova_compute[232428]: 2025-11-29 08:03:53.300 232432 DEBUG nova.network.neutron [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:53 compute-2 nova_compute[232428]: 2025-11-29 08:03:53.319 232432 DEBUG oslo_concurrency.lockutils [req-3079979c-6124-4cdc-acd5-c39f70a40a5f req-6255a0b0-5e5d-47f2-ac55-a44fe4efc148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:53 compute-2 nova_compute[232428]: 2025-11-29 08:03:53.320 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:53 compute-2 nova_compute[232428]: 2025-11-29 08:03:53.320 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:03:53 compute-2 nova_compute[232428]: 2025-11-29 08:03:53.577 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:54 compute-2 nova_compute[232428]: 2025-11-29 08:03:54.239 232432 DEBUG nova.compute.manager [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:54 compute-2 nova_compute[232428]: 2025-11-29 08:03:54.240 232432 DEBUG nova.compute.manager [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-62c5edb0-a405-4b4d-92c0-37a8154c2dbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:54 compute-2 nova_compute[232428]: 2025-11-29 08:03:54.241 232432 DEBUG oslo_concurrency.lockutils [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:55 compute-2 ceph-mon[77138]: pgmap v1883: 305 pgs: 305 active+clean; 109 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 120 op/s
Nov 29 08:03:55 compute-2 nova_compute[232428]: 2025-11-29 08:03:55.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:55 compute-2 nova_compute[232428]: 2025-11-29 08:03:55.737 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:56.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:57 compute-2 ceph-mon[77138]: pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 134 op/s
Nov 29 08:03:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3775957241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:03:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:03:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:03:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:03:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:58.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:03:59 compute-2 ceph-mon[77138]: pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 KiB/s wr, 106 op/s
Nov 29 08:03:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:00 compute-2 nova_compute[232428]: 2025-11-29 08:04:00.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:00 compute-2 nova_compute[232428]: 2025-11-29 08:04:00.740 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-2 ceph-mon[77138]: pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 KiB/s wr, 106 op/s
Nov 29 08:04:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:02.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:03 compute-2 ceph-mon[77138]: pgmap v1887: 305 pgs: 305 active+clean; 121 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 771 KiB/s rd, 1.2 MiB/s wr, 84 op/s
Nov 29 08:04:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/548541423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2882508826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:03.309 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:03.310 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:03.310 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.567 232432 DEBUG nova.network.neutron [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.598 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.599 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance network_info: |[{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.601 232432 DEBUG oslo_concurrency.lockutils [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.601 232432 DEBUG nova.network.neutron [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port 62c5edb0-a405-4b4d-92c0-37a8154c2dbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.619 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Start _get_guest_xml network_info=[{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,m
Nov 29 08:04:03 compute-2 nova_compute[232428]: in_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9f626c2c-bb56-4532-9935-1e7b39440d48', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9f626c2c-bb56-4532-9935-1e7b39440d48', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'attached_at': '', 'detached_at': '', 'volume_id': '9f626c2c-bb56-4532-9935-1e7b39440d48', 'serial': '9f626c2c-bb56-4532-9935-1e7b39440d48'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '56865f30-877a-4878-a3a9-ca202c8b2d30', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-905030d9-8042-49ed-9c6d-0283f1caf956', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '905030d9-8042-49ed-9c6d-0283f1caf956', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'attached_at': '', 'detached_at': '', 'volume_id': '905030d9-8042-49ed-9c6d-0283f1caf956', 
'serial': '905030d9-8042-49ed-9c6d-0283f1caf956'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 1, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': 'bc3d1f07-343f-4d19-bcf1-6f0f0bad5e87', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f07a118d-ea67-4113-8ddf-ed6afb5e3f24', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f07a118d-ea67-4113-8ddf-ed6afb5e3f24', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'attached_at': '', 'detached_at': '', 'volume_id': 'f07a118d-ea67-4113-8ddf-ed6afb5e3f24', 'serial': 'f07a118d-ea67-4113-8ddf-ed6afb5e3f24'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 2, 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'delete_on_termination': False, 'attachment_id': '32f80b3c-4f6a-4b33-bd61-accbf9121901', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.626 232432 WARNING nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.638 232432 DEBUG nova.virt.libvirt.host [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.639 232432 DEBUG nova.virt.libvirt.host [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.652 232432 DEBUG nova.virt.libvirt.host [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.653 232432 DEBUG nova.virt.libvirt.host [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.656 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.657 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.657 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.658 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.659 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.659 232432 DEBUG nova.virt.hardware [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.695 232432 DEBUG nova.storage.rbd_utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] rbd image bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:03 compute-2 nova_compute[232428]: 2025-11-29 08:04:03.701 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:04 compute-2 rsyslogd[1008]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 08:04:03.619 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 08:04:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2929208264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3918938773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.189 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.313 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.313 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.315 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.316 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.316 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.317 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.318 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.318 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.319 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.320 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.320 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.323 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.324 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.325 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.326 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.327 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.327 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.328 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.331 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.331 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.332 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.333 232432 DEBUG nova.objects.instance [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb4e9fda-828d-4b2f-84a9-4fbbcb213650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.376 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <uuid>bb4e9fda-828d-4b2f-84a9-4fbbcb213650</uuid>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <name>instance-00000046</name>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:name>tempest-device-tagging-server-99955562</nova:name>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:04:03</nova:creationTime>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:user uuid="d77e751616c9473786c8ac7ae2d34d20">tempest-TaggedBootDevicesTest-59583474-project-member</nova:user>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:project uuid="faa91146f75c46ebbcd15bb2222a8545">tempest-TaggedBootDevicesTest-59583474</nova:project>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="74928c3b-944c-4f17-b1b2-de33221d05ee">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="022e4672-a2e1-4d3d-af5c-cc34a3b4dc38">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.57" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="fbf3611a-6024-4f95-8880-d580bf23f660">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.126" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="765356c7-caab-46eb-830e-4a979bbba648">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.250" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.49" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="be573f34-a335-4f5c-a6f2-dd0e149534ee">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <nova:port uuid="62c5edb0-a405-4b4d-92c0-37a8154c2dbb">
Nov 29 08:04:04 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <system>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="serial">bb4e9fda-828d-4b2f-84a9-4fbbcb213650</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="uuid">bb4e9fda-828d-4b2f-84a9-4fbbcb213650</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </system>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <os>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </os>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <features>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </features>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-9f626c2c-bb56-4532-9935-1e7b39440d48">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <serial>9f626c2c-bb56-4532-9935-1e7b39440d48</serial>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-905030d9-8042-49ed-9c6d-0283f1caf956">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <serial>905030d9-8042-49ed-9c6d-0283f1caf956</serial>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-f07a118d-ea67-4113-8ddf-ed6afb5e3f24">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:04 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="vdc" bus="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <serial>f07a118d-ea67-4113-8ddf-ed6afb5e3f24</serial>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:9a:8a:76"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tap74928c3b-94"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:6e:db:b2"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tap022e4672-a2"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:af:55:80"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tapfbf3611a-60"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:95:68:08"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tap765356c7-ca"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:60:99:5d"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tape4d50911-d5"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:97:ef:13"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tapbe573f34-a3"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:dc:1d:3f"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <target dev="tap62c5edb0-a4"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/console.log" append="off"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <video>
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </video>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:04:04 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:04:04 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:04:04 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:04:04 compute-2 nova_compute[232428]: </domain>
Nov 29 08:04:04 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.378 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.379 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.380 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.381 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.381 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.382 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.382 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.383 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.383 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.384 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.384 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.385 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.385 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.386 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.386 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.387 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.388 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.388 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.389 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.389 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.390 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.390 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.391 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.391 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.391 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Preparing to wait for external event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.392 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.393 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.393 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.394 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.395 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.396 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.397 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.399 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.400 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.400 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.412 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.413 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74928c3b-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.414 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74928c3b-94, col_values=(('external_ids', {'iface-id': '74928c3b-944c-4f17-b1b2-de33221d05ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:8a:76', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.417 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.4181] manager: (tap74928c3b-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.421 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.429 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.431 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.433 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.433 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.434 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.435 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.436 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.437 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.440 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.441 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap022e4672-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.441 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap022e4672-a2, col_values=(('external_ids', {'iface-id': '022e4672-a2e1-4d3d-af5c-cc34a3b4dc38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:db:b2', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.443 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.4455] manager: (tap022e4672-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.446 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.457 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.458 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.458 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.459 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.460 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.461 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.461 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.464 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.464 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3611a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.464 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbf3611a-60, col_values=(('external_ids', {'iface-id': 'fbf3611a-6024-4f95-8880-d580bf23f660', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:55:80', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.466 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.4680] manager: (tapfbf3611a-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.482 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.483 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.484 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.485 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.486 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.486 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.487 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.487 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.487 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.490 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.490 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap765356c7-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.491 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap765356c7-ca, col_values=(('external_ids', {'iface-id': '765356c7-caab-46eb-830e-4a979bbba648', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:68:08', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.4942] manager: (tap765356c7-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.509 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.510 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.510 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.511 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.511 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.512 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.512 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.513 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.515 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d50911-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.516 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4d50911-d5, col_values=(('external_ids', {'iface-id': 'e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:99:5d', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.5189] manager: (tape4d50911-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.538 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.539 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.541 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virt
io',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.541 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.542 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.543 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.544 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.545 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.547 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.548 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe573f34-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.548 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe573f34-a3, col_values=(('external_ids', {'iface-id': 'be573f34-a335-4f5c-a6f2-dd0e149534ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:ef:13', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.550 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.5514] manager: (tapbe573f34-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.573 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.575 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.577 232432 DEBUG nova.virt.libvirt.vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.577 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.578 232432 DEBUG nova.network.os_vif_util [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.579 232432 DEBUG os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.580 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.581 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.584 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c5edb0-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.585 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62c5edb0-a4, col_values=(('external_ids', {'iface-id': '62c5edb0-a405-4b4d-92c0-37a8154c2dbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:1d:3f', 'vm-uuid': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.587 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 NetworkManager[48993]: <info>  [1764403444.5878] manager: (tap62c5edb0-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 08:04:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:04.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.614 232432 INFO os_vif [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4')
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.701 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.702 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.702 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] No VIF found with MAC fa:16:3e:9a:8a:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.703 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] No VIF found with MAC fa:16:3e:60:99:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.703 232432 INFO nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Using config drive
Nov 29 08:04:04 compute-2 nova_compute[232428]: 2025-11-29 08:04:04.745 232432 DEBUG nova.storage.rbd_utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] rbd image bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:05 compute-2 sudo[263541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:05 compute-2 sudo[263541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:05 compute-2 sudo[263541]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:05 compute-2 ceph-mon[77138]: pgmap v1888: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 08:04:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3918938773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:05 compute-2 sudo[263570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:05 compute-2 podman[263565]: 2025-11-29 08:04:05.183592292 +0000 UTC m=+0.102146036 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:04:05 compute-2 sudo[263570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:05 compute-2 sudo[263570]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.502 232432 INFO nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Creating config drive at /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.515 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiqyc67q1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.664 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiqyc67q1" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.719 232432 DEBUG nova.storage.rbd_utils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] rbd image bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.726 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.963 232432 DEBUG oslo_concurrency.processutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config bb4e9fda-828d-4b2f-84a9-4fbbcb213650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:05 compute-2 nova_compute[232428]: 2025-11-29 08:04:05.965 232432 INFO nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Deleting local config drive /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650/disk.config because it was imported into RBD.
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.0396] manager: (tap74928c3b-94): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 08:04:06 compute-2 kernel: tap74928c3b-94: entered promiscuous mode
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.0582] manager: (tap022e4672-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00237|binding|INFO|Claiming lport 74928c3b-944c-4f17-b1b2-de33221d05ee for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00238|binding|INFO|74928c3b-944c-4f17-b1b2-de33221d05ee: Claiming fa:16:3e:9a:8a:76 10.100.0.7
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.066 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.0791] manager: (tapfbf3611a-60): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 08:04:06 compute-2 kernel: tap022e4672-a2: entered promiscuous mode
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.085 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:76 10.100.0.7'], port_security=['fa:16:3e:9a:8a:76 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-244beb46-e997-4214-9a18-cb9fb18e5629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5842bed6-00c3-4570-884f-978f9e83ba3d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=74928c3b-944c-4f17-b1b2-de33221d05ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.088 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 74928c3b-944c-4f17-b1b2-de33221d05ee in datapath 244beb46-e997-4214-9a18-cb9fb18e5629 bound to our chassis
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.092 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 244beb46-e997-4214-9a18-cb9fb18e5629
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.0978] manager: (tap765356c7-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 08:04:06 compute-2 systemd-udevd[263681]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:06 compute-2 systemd-udevd[263683]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:06 compute-2 systemd-udevd[263682]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:06 compute-2 systemd-udevd[263684]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.109 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2e2710-66f4-47d7-b3bb-ef8bb2e807e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.110 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap244beb46-e1 in ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1157] manager: (tape4d50911-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.114 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap244beb46-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.115 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e3974dea-b40a-474c-8445-55791e20743d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.116 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[619e220a-b075-466a-824b-ae061077e680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 systemd-udevd[263691]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1263] device (tap022e4672-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.123 232432 DEBUG nova.network.neutron [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updated VIF entry in instance network info cache for port 62c5edb0-a405-4b4d-92c0-37a8154c2dbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.124 232432 DEBUG nova.network.neutron [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1273] device (tap022e4672-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1364] device (tap74928c3b-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.137 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a69e0d7b-05da-4477-8446-60cf2503280d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1390] device (tap74928c3b-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1463] manager: (tapbe573f34-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1650] manager: (tap62c5edb0-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.170 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.171 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f123981-013b-4841-9780-9accd776eeb0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00239|binding|INFO|Claiming lport 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00240|binding|INFO|022e4672-a2e1-4d3d-af5c-cc34a3b4dc38: Claiming fa:16:3e:6e:db:b2 10.1.1.57
Nov 29 08:04:06 compute-2 kernel: tapfbf3611a-60: entered promiscuous mode
Nov 29 08:04:06 compute-2 kernel: tap62c5edb0-a4: entered promiscuous mode
Nov 29 08:04:06 compute-2 kernel: tape4d50911-d5: entered promiscuous mode
Nov 29 08:04:06 compute-2 kernel: tap765356c7-ca: entered promiscuous mode
Nov 29 08:04:06 compute-2 kernel: tapbe573f34-a3: entered promiscuous mode
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1747] device (tapfbf3611a-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1757] device (tape4d50911-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1762] device (tap765356c7-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1767] device (tapbe573f34-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1773] device (tapfbf3611a-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1778] device (tape4d50911-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1781] device (tap765356c7-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.177 232432 DEBUG oslo_concurrency.lockutils [req-28622b63-182b-4eea-8708-bc46717b42ef req-af54654d-5b54-4867-bd3c-eeccb3ddf0f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1783] device (tapbe573f34-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1810] device (tap62c5edb0-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.1818] device (tap62c5edb0-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.179 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:db:b2 10.1.1.57'], port_security=['fa:16:3e:6e:db:b2 10.1.1.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-926269711', 'neutron:cidrs': '10.1.1.57/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-926269711', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e9b4eb1-3118-41f1-9706-2ee3b78a381a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00241|binding|INFO|Setting lport 74928c3b-944c-4f17-b1b2-de33221d05ee ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00242|binding|INFO|Setting lport 74928c3b-944c-4f17-b1b2-de33221d05ee up in Southbound
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00243|if_status|INFO|Not updating pb chassis for fbf3611a-6024-4f95-8880-d580bf23f660 now as sb is readonly
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.188 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 systemd-machined[194747]: New machine qemu-31-instance-00000046.
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.209 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c1466c9d-566e-40bd-8fad-5226825ed419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.2186] manager: (tap244beb46-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.218 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[628a261d-8ec1-41e3-97ef-b9b7576faeb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 systemd[1]: Started Virtual Machine qemu-31-instance-00000046.
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.261 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4c43b0-ba08-42a5-9400-e93ef82ab8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.265 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f124cc27-8c2a-490d-91bd-cfb80b3a979f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.270 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00244|binding|INFO|Claiming lport 62c5edb0-a405-4b4d-92c0-37a8154c2dbb for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00245|binding|INFO|62c5edb0-a405-4b4d-92c0-37a8154c2dbb: Claiming fa:16:3e:dc:1d:3f 10.2.2.200
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00246|binding|INFO|Claiming lport be573f34-a335-4f5c-a6f2-dd0e149534ee for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00247|binding|INFO|be573f34-a335-4f5c-a6f2-dd0e149534ee: Claiming fa:16:3e:97:ef:13 10.2.2.100
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00248|binding|INFO|Claiming lport 765356c7-caab-46eb-830e-4a979bbba648 for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00249|binding|INFO|765356c7-caab-46eb-830e-4a979bbba648: Claiming fa:16:3e:95:68:08 10.1.1.250
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00250|binding|INFO|Claiming lport e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00251|binding|INFO|e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b: Claiming fa:16:3e:60:99:5d 10.1.1.49
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00252|binding|INFO|Claiming lport fbf3611a-6024-4f95-8880-d580bf23f660 for this chassis.
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00253|binding|INFO|fbf3611a-6024-4f95-8880-d580bf23f660: Claiming fa:16:3e:af:55:80 10.1.1.126
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00254|binding|INFO|Setting lport 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 ovn-installed in OVS
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.280 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00255|binding|INFO|Setting lport 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 up in Southbound
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.286 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:68:08 10.1.1.250'], port_security=['fa:16:3e:95:68:08 10.1.1.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.250/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=765356c7-caab-46eb-830e-4a979bbba648) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.288 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:55:80 10.1.1.126'], port_security=['fa:16:3e:af:55:80 10.1.1.126'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-223744034', 'neutron:cidrs': '10.1.1.126/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-223744034', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e9b4eb1-3118-41f1-9706-2ee3b78a381a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fbf3611a-6024-4f95-8880-d580bf23f660) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.289 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:99:5d 10.1.1.49'], port_security=['fa:16:3e:60:99:5d 10.1.1.49'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.49/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.290 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:1d:3f 10.2.2.200'], port_security=['fa:16:3e:dc:1d:3f 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cafabcb6-1c42-4294-b26b-74933aae0590', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a9a82b-aae6-4039-a3e2-6cda0a5d9cb9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=62c5edb0-a405-4b4d-92c0-37a8154c2dbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.292 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ef:13 10.2.2.100'], port_security=['fa:16:3e:97:ef:13 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cafabcb6-1c42-4294-b26b-74933aae0590', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a9a82b-aae6-4039-a3e2-6cda0a5d9cb9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be573f34-a335-4f5c-a6f2-dd0e149534ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.2991] device (tap244beb46-e0): carrier: link connected
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.303 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[05c8f78e-d509-48d9-9d8a-b6eb8c199aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.343 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9e97f97f-35cc-4a53-8725-3e6b6b60d68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap244beb46-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b1:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641256, 'reachable_time': 16728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263730, 'error': None, 'target': 'ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.361 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bac0ef-4ae6-467a-b372-b9a31b29213b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b147'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641256, 'tstamp': 641256}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263732, 'error': None, 'target': 'ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00256|binding|INFO|Setting lport fbf3611a-6024-4f95-8880-d580bf23f660 ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00257|binding|INFO|Setting lport fbf3611a-6024-4f95-8880-d580bf23f660 up in Southbound
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00258|binding|INFO|Setting lport 765356c7-caab-46eb-830e-4a979bbba648 ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00259|binding|INFO|Setting lport 765356c7-caab-46eb-830e-4a979bbba648 up in Southbound
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00260|binding|INFO|Setting lport 62c5edb0-a405-4b4d-92c0-37a8154c2dbb ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00261|binding|INFO|Setting lport 62c5edb0-a405-4b4d-92c0-37a8154c2dbb up in Southbound
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00262|binding|INFO|Setting lport e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00263|binding|INFO|Setting lport e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b up in Southbound
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00264|binding|INFO|Setting lport be573f34-a335-4f5c-a6f2-dd0e149534ee ovn-installed in OVS
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00265|binding|INFO|Setting lport be573f34-a335-4f5c-a6f2-dd0e149534ee up in Southbound
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.366 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[229a574b-0c99-4368-a389-8d3c1e25f749]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap244beb46-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b1:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641256, 'reachable_time': 16728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263733, 'error': None, 'target': 'ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.428 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d6601edf-bfcc-4941-bbc7-28e3961ab1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.503 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[027da7c9-7336-493b-82e6-a297e5f064ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.504 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244beb46-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.505 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.505 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap244beb46-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 NetworkManager[48993]: <info>  [1764403446.5557] manager: (tap244beb46-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 08:04:06 compute-2 kernel: tap244beb46-e0: entered promiscuous mode
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.560 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap244beb46-e0, col_values=(('external_ids', {'iface-id': '6423497c-1eb2-43cf-bb69-b2e44a4e68f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_controller[134375]: 2025-11-29T08:04:06Z|00266|binding|INFO|Releasing lport 6423497c-1eb2-43cf-bb69-b2e44a4e68f0 from this chassis (sb_readonly=1)
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.585 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.586 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/244beb46-e997-4214-9a18-cb9fb18e5629.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/244beb46-e997-4214-9a18-cb9fb18e5629.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.587 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a3a3c1-ce5a-4711-9a53-c93cb1b0eba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.588 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-244beb46-e997-4214-9a18-cb9fb18e5629
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/244beb46-e997-4214-9a18-cb9fb18e5629.pid.haproxy
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 244beb46-e997-4214-9a18-cb9fb18e5629
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:04:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:06.588 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629', 'env', 'PROCESS_TAG=haproxy-244beb46-e997-4214-9a18-cb9fb18e5629', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/244beb46-e997-4214-9a18-cb9fb18e5629.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:04:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.850 232432 DEBUG nova.compute.manager [req-dccb5257-5867-4fca-894c-10b73531e126 req-a4ac3fd2-ad37-42d3-b8c8-198020b23488 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.852 232432 DEBUG oslo_concurrency.lockutils [req-dccb5257-5867-4fca-894c-10b73531e126 req-a4ac3fd2-ad37-42d3-b8c8-198020b23488 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.853 232432 DEBUG oslo_concurrency.lockutils [req-dccb5257-5867-4fca-894c-10b73531e126 req-a4ac3fd2-ad37-42d3-b8c8-198020b23488 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.854 232432 DEBUG oslo_concurrency.lockutils [req-dccb5257-5867-4fca-894c-10b73531e126 req-a4ac3fd2-ad37-42d3-b8c8-198020b23488 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.854 232432 DEBUG nova.compute.manager [req-dccb5257-5867-4fca-894c-10b73531e126 req-a4ac3fd2-ad37-42d3-b8c8-198020b23488 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.977 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403446.977338, bb4e9fda-828d-4b2f-84a9-4fbbcb213650 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:06 compute-2 nova_compute[232428]: 2025-11-29 08:04:06.979 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] VM Started (Lifecycle Event)
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.023 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:07 compute-2 podman[263846]: 2025-11-29 08:04:07.02465473 +0000 UTC m=+0.070736642 container create 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.032 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403446.978179, bb4e9fda-828d-4b2f-84a9-4fbbcb213650 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.033 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] VM Paused (Lifecycle Event)
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.063 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.069 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:07 compute-2 systemd[1]: Started libpod-conmon-290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00.scope.
Nov 29 08:04:07 compute-2 podman[263846]: 2025-11-29 08:04:06.989247983 +0000 UTC m=+0.035329875 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:07 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:04:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5972c066d545e4a115b5c55d178282e122f83dafab5d182414d79e3fafbbea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.110 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:07 compute-2 podman[263846]: 2025-11-29 08:04:07.11737102 +0000 UTC m=+0.163452892 container init 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:04:07 compute-2 podman[263846]: 2025-11-29 08:04:07.125463693 +0000 UTC m=+0.171545575 container start 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:04:07 compute-2 ceph-mon[77138]: pgmap v1889: 305 pgs: 305 active+clean; 176 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Nov 29 08:04:07 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [NOTICE]   (263866) : New worker (263868) forked
Nov 29 08:04:07 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [NOTICE]   (263866) : Loading success.
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.193 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.195 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.208 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[396b4e8c-bcec-4f6c-9a96-33d83c16e08c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.209 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4850a5c9-11 in ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.212 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4850a5c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.212 232432 DEBUG nova.compute.manager [req-78ae42ad-81e1-4a23-974a-bf8d5ec69bd3 req-ae5d11fe-3cce-4d02-b2af-a54cb22be8dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0de1b5-b430-48d0-9939-3a6a13171c64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.213 232432 DEBUG oslo_concurrency.lockutils [req-78ae42ad-81e1-4a23-974a-bf8d5ec69bd3 req-ae5d11fe-3cce-4d02-b2af-a54cb22be8dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.213 232432 DEBUG oslo_concurrency.lockutils [req-78ae42ad-81e1-4a23-974a-bf8d5ec69bd3 req-ae5d11fe-3cce-4d02-b2af-a54cb22be8dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.213 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf309dd3-9079-4aeb-88b9-4f7a30d264ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.214 232432 DEBUG oslo_concurrency.lockutils [req-78ae42ad-81e1-4a23-974a-bf8d5ec69bd3 req-ae5d11fe-3cce-4d02-b2af-a54cb22be8dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.214 232432 DEBUG nova.compute.manager [req-78ae42ad-81e1-4a23-974a-bf8d5ec69bd3 req-ae5d11fe-3cce-4d02-b2af-a54cb22be8dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.227 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b976f153-f6f2-45d8-9a55-e10a9837c913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.250 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0f133de3-7634-4255-87c0-6e2e4f6bf83a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.300 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b79ff20f-2b6b-4205-9d57-aee20e95ebda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.310 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5b6b9a-099f-4610-8487-b1d165d4183c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 NetworkManager[48993]: <info>  [1764403447.3121] manager: (tap4850a5c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.369 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d8744488-7429-489f-aabd-848488d42d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.373 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3db40aad-91bd-479b-ba46-3e474dd47861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 NetworkManager[48993]: <info>  [1764403447.4149] device (tap4850a5c9-10): carrier: link connected
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.426 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[02ac4277-de9b-41fe-a7e8-295ce3f87080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.462 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de9777d2-1c27-4993-a15c-b465dc717457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4850a5c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:7b:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641368, 'reachable_time': 18935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263888, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.495 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[29439fc9-a623-4832-93d6-d0cad2a40ee1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:7b1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641368, 'tstamp': 641368}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263889, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.532 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37ea2e87-7e9c-4060-bb76-654edfdb62e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4850a5c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:7b:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641368, 'reachable_time': 18935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263890, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.592 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[04c0e174-b1ec-4c63-9e12-ba389f8dbf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.667 232432 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.668 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.668 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.669 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.670 232432 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.671 232432 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.672 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.672 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.673 232432 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.674 232432 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee in dict_keys([('network-vif-plugged', 'fbf3611a-6024-4f95-8880-d580bf23f660'), ('network-vif-plugged', '765356c7-caab-46eb-830e-4a979bbba648'), ('network-vif-plugged', 'e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b'), ('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.675 232432 WARNING nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee for instance with vm_state building and task_state spawning.
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.689 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef249fa7-a7ed-4aac-b274-39906a0c6f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.690 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4850a5c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.690 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.691 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4850a5c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:07 compute-2 NetworkManager[48993]: <info>  [1764403447.6930] manager: (tap4850a5c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 08:04:07 compute-2 kernel: tap4850a5c9-10: entered promiscuous mode
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.695 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.696 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4850a5c9-10, col_values=(('external_ids', {'iface-id': '789b4e5b-b51d-4f1a-a76d-9277577aa8c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:07 compute-2 ovn_controller[134375]: 2025-11-29T08:04:07Z|00267|binding|INFO|Releasing lport 789b4e5b-b51d-4f1a-a76d-9277577aa8c6 from this chassis (sb_readonly=0)
Nov 29 08:04:07 compute-2 nova_compute[232428]: 2025-11-29 08:04:07.726 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.728 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4850a5c9-1583-4cb5-9c93-b75a1362cb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4850a5c9-1583-4cb5-9c93-b75a1362cb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.729 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f318a9-ca0c-4ecf-9fd9-2fd2defc5cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.730 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/4850a5c9-1583-4cb5-9c93-b75a1362cb60.pid.haproxy
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:04:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:07.730 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'env', 'PROCESS_TAG=haproxy-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4850a5c9-1583-4cb5-9c93-b75a1362cb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:04:08 compute-2 podman[263921]: 2025-11-29 08:04:08.166989218 +0000 UTC m=+0.089249533 container create 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:08 compute-2 systemd[1]: Started libpod-conmon-3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5.scope.
Nov 29 08:04:08 compute-2 podman[263921]: 2025-11-29 08:04:08.125923133 +0000 UTC m=+0.048183508 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:08 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:04:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30923a7dd6101b4f701e1bfab6ebcfb0b1fded3186e6e515f71883ad132e4a72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:08 compute-2 podman[263921]: 2025-11-29 08:04:08.277197105 +0000 UTC m=+0.199457390 container init 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:04:08 compute-2 podman[263921]: 2025-11-29 08:04:08.289104187 +0000 UTC m=+0.211364472 container start 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:04:08 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [NOTICE]   (263940) : New worker (263942) forked
Nov 29 08:04:08 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [NOTICE]   (263940) : Loading success.
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.344 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 765356c7-caab-46eb-830e-4a979bbba648 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.347 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.366 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[32a0f197-5acd-457d-8275-cc38ede5b973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.403 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[da1c2340-34c9-4eb8-ae61-6cecb8551af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.407 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b96684e1-eb6f-4680-88c8-884490775835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.443 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3c326c5c-651e-40bc-a075-bd803949b53e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[da2fc775-3fcb-4852-95d8-ac60941839f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4850a5c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:7b:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641368, 'reachable_time': 18935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263956, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.495 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[357cb218-6d87-42f5-bd73-7812a9da6628]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641390, 'tstamp': 641390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263957, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641395, 'tstamp': 641395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263957, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.498 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4850a5c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.503 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4850a5c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.504 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.504 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4850a5c9-10, col_values=(('external_ids', {'iface-id': '789b4e5b-b51d-4f1a-a76d-9277577aa8c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.505 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.507 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fbf3611a-6024-4f95-8880-d580bf23f660 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.511 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.533 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a1413a-0e38-4344-90d3-234e99d0ae36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.567 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6fe205-79b1-4416-93eb-5060fe940d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.572 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a086e800-d2ed-4cd0-abfa-bf28903d2deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.610 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3094ef-97fd-4579-b3a2-f313a497c910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.640 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[549bde11-9258-4c4f-820b-da9eb34a2b64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4850a5c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:7b:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641368, 'reachable_time': 18935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263963, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.670 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[abb7bf0b-afd7-47f6-bbfd-e486b06016c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641390, 'tstamp': 641390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263964, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641395, 'tstamp': 641395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263964, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.672 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4850a5c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.676 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.677 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4850a5c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.678 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.679 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4850a5c9-10, col_values=(('external_ids', {'iface-id': '789b4e5b-b51d-4f1a-a76d-9277577aa8c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.679 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.682 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.687 232432 DEBUG nova.compute.manager [req-c06a7559-7f7d-4947-b51c-6ab96a01093e req-6f6a6a08-a14e-419a-8271-aa34c0263a6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.687 232432 DEBUG oslo_concurrency.lockutils [req-c06a7559-7f7d-4947-b51c-6ab96a01093e req-6f6a6a08-a14e-419a-8271-aa34c0263a6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.688 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.688 232432 DEBUG oslo_concurrency.lockutils [req-c06a7559-7f7d-4947-b51c-6ab96a01093e req-6f6a6a08-a14e-419a-8271-aa34c0263a6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.688 232432 DEBUG oslo_concurrency.lockutils [req-c06a7559-7f7d-4947-b51c-6ab96a01093e req-6f6a6a08-a14e-419a-8271-aa34c0263a6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.689 232432 DEBUG nova.compute.manager [req-c06a7559-7f7d-4947-b51c-6ab96a01093e req-6f6a6a08-a14e-419a-8271-aa34c0263a6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.708 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[35368bf0-62ee-47c9-91b8-aa50b87f931d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.754 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[76465c82-7b11-4f1a-9f5e-70f24f72b2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.759 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0af67804-4fbc-4a2b-995f-20ced44b6d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.808 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ba69a3d2-95a5-418b-9555-889d6491129e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.839 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4d0bae-e751-4f6d-9d9a-81a157284d85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4850a5c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:7b:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 9, 'rx_bytes': 442, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 9, 'rx_bytes': 442, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641368, 'reachable_time': 18935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263970, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.863 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[390f58fb-0518-49eb-ac03-a95aa34c620e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641390, 'tstamp': 641390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263971, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap4850a5c9-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641395, 'tstamp': 641395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263971, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.865 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4850a5c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.867 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4850a5c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4850a5c9-10, col_values=(('external_ids', {'iface-id': '789b4e5b-b51d-4f1a-a76d-9277577aa8c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.870 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.870 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 62c5edb0-a405-4b4d-92c0-37a8154c2dbb in datapath cafabcb6-1c42-4294-b26b-74933aae0590 unbound from our chassis
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.872 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cafabcb6-1c42-4294-b26b-74933aae0590
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b527c73f-ec9f-4cb3-86e3-cae094f674c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.892 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcafabcb6-11 in ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.893 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcafabcb6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e10430-1c93-48a3-9bcf-7225a01d53d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.895 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d25a42d-e167-4e68-9ffd-634686bea0d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.915 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d5c00d-f3b0-406b-a25f-01f4790e9ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.948 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ab135605-d49d-48c1-ba6d-9368651feb65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.973 232432 DEBUG nova.compute.manager [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.977 232432 DEBUG oslo_concurrency.lockutils [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.978 232432 DEBUG oslo_concurrency.lockutils [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.978 232432 DEBUG oslo_concurrency.lockutils [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.978 232432 DEBUG nova.compute.manager [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 in dict_keys([('network-vif-plugged', '765356c7-caab-46eb-830e-4a979bbba648'), ('network-vif-plugged', 'e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b'), ('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:08 compute-2 nova_compute[232428]: 2025-11-29 08:04:08.979 232432 WARNING nova.compute.manager [req-d30e9a73-a64e-4778-94e2-a9432195bce5 req-eb0154d6-d28e-47e6-ba64-a5bdaeaec242 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 for instance with vm_state building and task_state spawning.
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:08.999 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a2b3d0-1ccb-4a14-b96d-bae79cf43fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.008 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[55a9cea2-77c4-4486-a491-8c4d7f49fa02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 NetworkManager[48993]: <info>  [1764403449.0111] manager: (tapcafabcb6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.048 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[52188d5a-b46d-4623-a510-9bf38fc64042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.052 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bae53efa-b37b-4a71-a804-bc34314d5687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 podman[263974]: 2025-11-29 08:04:09.057931672 +0000 UTC m=+0.098922245 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:04:09 compute-2 NetworkManager[48993]: <info>  [1764403449.0825] device (tapcafabcb6-10): carrier: link connected
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.086 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed248ec-eea7-45e0-a456-3b51a479af66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.104 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b563b0-c5f6-4769-964c-17c8cd2e77e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcafabcb6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:82:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641535, 'reachable_time': 40945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264002, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.119 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc68548-a199-4f94-bf0e-56bebc5eba65]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:82c2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641535, 'tstamp': 641535}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264003, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.138 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d37260-1781-4fc1-b4c6-81254ec25418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcafabcb6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:82:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641535, 'reachable_time': 40945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264004, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.167 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[98d7ee01-0ada-4e74-afdf-a8f6ac6b34c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.237 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[919e6136-57a9-4654-81c3-91af9a14e6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.238 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcafabcb6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.238 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.239 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcafabcb6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-2 NetworkManager[48993]: <info>  [1764403449.2421] manager: (tapcafabcb6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 08:04:09 compute-2 kernel: tapcafabcb6-10: entered promiscuous mode
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.243 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.244 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcafabcb6-10, col_values=(('external_ids', {'iface-id': '26f328f7-c06a-4d2e-bace-a49a53d97587'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-2 ovn_controller[134375]: 2025-11-29T08:04:09Z|00268|binding|INFO|Releasing lport 26f328f7-c06a-4d2e-bace-a49a53d97587 from this chassis (sb_readonly=0)
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.260 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.261 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cafabcb6-1c42-4294-b26b-74933aae0590.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cafabcb6-1c42-4294-b26b-74933aae0590.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.261 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd354a8-486d-4293-a92b-9b29280f96ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.263 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-cafabcb6-1c42-4294-b26b-74933aae0590
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/cafabcb6-1c42-4294-b26b-74933aae0590.pid.haproxy
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID cafabcb6-1c42-4294-b26b-74933aae0590
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.263 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'env', 'PROCESS_TAG=haproxy-cafabcb6-1c42-4294-b26b-74933aae0590', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cafabcb6-1c42-4294-b26b-74933aae0590.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.351 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.351 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.352 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.352 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.352 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb in dict_keys([('network-vif-plugged', '765356c7-caab-46eb-830e-4a979bbba648'), ('network-vif-plugged', 'e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b'), ('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.352 232432 WARNING nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb for instance with vm_state building and task_state spawning.
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.353 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.353 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.353 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.353 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.354 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.354 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.354 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.354 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.354 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.355 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 in dict_keys([('network-vif-plugged', 'e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b'), ('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.355 232432 WARNING nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 for instance with vm_state building and task_state spawning.
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.355 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.355 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.356 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.356 232432 DEBUG oslo_concurrency.lockutils [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.356 232432 DEBUG nova.compute.manager [req-7751ec43-7857-49f1-a9aa-1eff71412fe6 req-22b10a97-e9e0-47b2-9cc8-a6f2a77ee89e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:09 compute-2 nova_compute[232428]: 2025-11-29 08:04:09.587 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-2 podman[264038]: 2025-11-29 08:04:09.696668179 +0000 UTC m=+0.063632751 container create 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:04:09 compute-2 systemd[1]: Started libpod-conmon-5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307.scope.
Nov 29 08:04:09 compute-2 podman[264038]: 2025-11-29 08:04:09.666444124 +0000 UTC m=+0.033408676 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:09 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:04:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d22633191c680002fc45d1b90fd7c0633ad45e9007e6b965fb80feb913b61fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:09 compute-2 podman[264038]: 2025-11-29 08:04:09.788252033 +0000 UTC m=+0.155216585 container init 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:04:09 compute-2 podman[264038]: 2025-11-29 08:04:09.796522642 +0000 UTC m=+0.163487174 container start 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:04:09 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [NOTICE]   (264058) : New worker (264060) forked
Nov 29 08:04:09 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [NOTICE]   (264058) : Loading success.
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.845 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be573f34-a335-4f5c-a6f2-dd0e149534ee in datapath cafabcb6-1c42-4294-b26b-74933aae0590 unbound from our chassis
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.848 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cafabcb6-1c42-4294-b26b-74933aae0590
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.867 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1c31b9cf-54b1-4629-81bd-f6f6e63af3f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.902 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ea21b1db-ec04-4ff2-975d-1b31d466408b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.906 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[67580bed-5c82-4dc2-a947-8fddf131eeed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ceph-mon[77138]: pgmap v1890: 305 pgs: 305 active+clean; 176 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.5 MiB/s wr, 45 op/s
Nov 29 08:04:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2852531384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/317147062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.947 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fefc5f26-3872-4f36-aaa1-37243e27c02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.974 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b905809f-3cc4-4e4b-ae43-671fb6845092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcafabcb6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:82:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641535, 'reachable_time': 40945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264074, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:09.998 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed49422-c3a3-43fd-9cb4-0b95de3a0a41]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcafabcb6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641546, 'tstamp': 641546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264075, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tapcafabcb6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641550, 'tstamp': 641550}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264075, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:10.001 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcafabcb6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.003 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:10.005 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcafabcb6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:10.005 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:10.006 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcafabcb6-10, col_values=(('external_ids', {'iface-id': '26f328f7-c06a-4d2e-bace-a49a53d97587'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:10.006 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:10.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.434 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.813 232432 DEBUG nova.compute.manager [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.813 232432 DEBUG oslo_concurrency.lockutils [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.814 232432 DEBUG oslo_concurrency.lockutils [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.814 232432 DEBUG oslo_concurrency.lockutils [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.814 232432 DEBUG nova.compute.manager [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 in dict_keys([('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:10 compute-2 nova_compute[232428]: 2025-11-29 08:04:10.815 232432 WARNING nova.compute.manager [req-cbcb638d-3288-4675-98c1-f39f55b0a644 req-de84d063-9aec-4104-9d2d-93b990deaf07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 for instance with vm_state building and task_state spawning.
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.466 232432 DEBUG nova.compute.manager [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.467 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.467 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.468 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.468 232432 DEBUG nova.compute.manager [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No event matching network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b in dict_keys([('network-vif-plugged', 'be573f34-a335-4f5c-a6f2-dd0e149534ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.469 232432 WARNING nova.compute.manager [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b for instance with vm_state building and task_state spawning.
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.469 232432 DEBUG nova.compute.manager [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.469 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.470 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.470 232432 DEBUG oslo_concurrency.lockutils [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.470 232432 DEBUG nova.compute.manager [req-48cc75bf-a425-4e48-ad5e-0a4093892136 req-9d470c0a-2421-4f67-b640-b99cff3d3fa8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Processing event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.471 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.476 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403451.4764845, bb4e9fda-828d-4b2f-84a9-4fbbcb213650 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.477 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] VM Resumed (Lifecycle Event)
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.480 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.484 232432 INFO nova.virt.libvirt.driver [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance spawned successfully.
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.485 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.499 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.510 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.515 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.516 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.517 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.518 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.518 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.519 232432 DEBUG nova.virt.libvirt.driver [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.529 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:11 compute-2 ceph-mon[77138]: pgmap v1891: 305 pgs: 305 active+clean; 180 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.592 232432 INFO nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Took 33.90 seconds to spawn the instance on the hypervisor.
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.593 232432 DEBUG nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.684 232432 INFO nova.compute.manager [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Took 39.41 seconds to build instance.
Nov 29 08:04:11 compute-2 nova_compute[232428]: 2025-11-29 08:04:11.701 232432 DEBUG oslo_concurrency.lockutils [None req-a03e7976-998e-4ed3-96fd-091c96ca4a69 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 39.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.578 232432 DEBUG nova.compute.manager [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.579 232432 DEBUG oslo_concurrency.lockutils [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.580 232432 DEBUG oslo_concurrency.lockutils [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.580 232432 DEBUG oslo_concurrency.lockutils [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.581 232432 DEBUG nova.compute.manager [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:13 compute-2 nova_compute[232428]: 2025-11-29 08:04:13.581 232432 WARNING nova.compute.manager [req-1ad89fa0-3a55-4496-b979-914fbd708fdb req-c6b676cc-1a5e-4e45-a4cf-bf8f22ebd66a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee for instance with vm_state active and task_state None.
Nov 29 08:04:13 compute-2 ceph-mon[77138]: pgmap v1892: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Nov 29 08:04:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:14 compute-2 nova_compute[232428]: 2025-11-29 08:04:14.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:14 compute-2 NetworkManager[48993]: <info>  [1764403454.5081] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 08:04:14 compute-2 NetworkManager[48993]: <info>  [1764403454.5087] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 08:04:14 compute-2 nova_compute[232428]: 2025-11-29 08:04:14.590 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:14.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:14 compute-2 nova_compute[232428]: 2025-11-29 08:04:14.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:14 compute-2 ovn_controller[134375]: 2025-11-29T08:04:14Z|00269|binding|INFO|Releasing lport 6423497c-1eb2-43cf-bb69-b2e44a4e68f0 from this chassis (sb_readonly=0)
Nov 29 08:04:14 compute-2 ovn_controller[134375]: 2025-11-29T08:04:14Z|00270|binding|INFO|Releasing lport 789b4e5b-b51d-4f1a-a76d-9277577aa8c6 from this chassis (sb_readonly=0)
Nov 29 08:04:14 compute-2 ovn_controller[134375]: 2025-11-29T08:04:14Z|00271|binding|INFO|Releasing lport 26f328f7-c06a-4d2e-bace-a49a53d97587 from this chassis (sb_readonly=0)
Nov 29 08:04:14 compute-2 nova_compute[232428]: 2025-11-29 08:04:14.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:14 compute-2 ceph-mon[77138]: pgmap v1893: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Nov 29 08:04:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.709 232432 DEBUG nova.compute.manager [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-changed-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.710 232432 DEBUG nova.compute.manager [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing instance network info cache due to event network-changed-74928c3b-944c-4f17-b1b2-de33221d05ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.710 232432 DEBUG oslo_concurrency.lockutils [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.711 232432 DEBUG oslo_concurrency.lockutils [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:15 compute-2 nova_compute[232428]: 2025-11-29 08:04:15.711 232432 DEBUG nova.network.neutron [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Refreshing network info cache for port 74928c3b-944c-4f17-b1b2-de33221d05ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:16.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.808 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.810 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.836 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.938 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.939 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.949 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:04:16 compute-2 nova_compute[232428]: 2025-11-29 08:04:16.950 232432 INFO nova.compute.claims [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:04:17 compute-2 ceph-mon[77138]: pgmap v1894: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.079 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1821513324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.511 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.523 232432 DEBUG nova.compute.provider_tree [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.542 232432 DEBUG nova.scheduler.client.report [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.567 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.568 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:04:17 compute-2 podman[264103]: 2025-11-29 08:04:17.724200327 +0000 UTC m=+0.119708854 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.830 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.831 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:04:17 compute-2 nova_compute[232428]: 2025-11-29 08:04:17.886 232432 INFO nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.035 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:04:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.413 232432 DEBUG nova.network.neutron [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updated VIF entry in instance network info cache for port 74928c3b-944c-4f17-b1b2-de33221d05ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.414 232432 DEBUG nova.network.neutron [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.423 232432 DEBUG nova.policy [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a9dbe11399b4d34ad4c7a0a4098b324', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:04:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1821513324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.612 232432 DEBUG oslo_concurrency.lockutils [req-e25f3a29-f6e0-489a-938a-1232f25f3830 req-9b5ea220-be4d-4b60-8e01-976f3c0df42e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.685 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.687 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.688 232432 INFO nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Creating image(s)
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.732 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.779 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.827 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.833 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.947 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.949 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.950 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.950 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.987 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:18 compute-2 nova_compute[232428]: 2025-11-29 08:04:18.992 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a8918e2f-17e5-477f-b975-0efb4898396f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:19 compute-2 nova_compute[232428]: 2025-11-29 08:04:19.596 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:19 compute-2 ceph-mon[77138]: pgmap v1895: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 76 KiB/s wr, 223 op/s
Nov 29 08:04:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.074 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a8918e2f-17e5-477f-b975-0efb4898396f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.194 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] resizing rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:04:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:04:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.405 232432 DEBUG nova.objects.instance [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lazy-loading 'migration_context' on Instance uuid a8918e2f-17e5-477f-b975-0efb4898396f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.438 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.440 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Ensure instance console log exists: /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.441 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.441 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.442 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.442 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:20 compute-2 nova_compute[232428]: 2025-11-29 08:04:20.532 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Successfully created port: 99180cbe-8710-42bc-abfb-7d8cab714b9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:04:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.319 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Successfully updated port: 99180cbe-8710-42bc-abfb-7d8cab714b9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.347 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.348 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquired lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.348 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.449 232432 DEBUG nova.compute.manager [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-changed-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.450 232432 DEBUG nova.compute.manager [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Refreshing instance network info cache due to event network-changed-99180cbe-8710-42bc-abfb-7d8cab714b9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.450 232432 DEBUG oslo_concurrency.lockutils [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:21 compute-2 nova_compute[232428]: 2025-11-29 08:04:21.504 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:04:21 compute-2 ceph-mon[77138]: pgmap v1896: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 76 KiB/s wr, 235 op/s
Nov 29 08:04:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:22.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.637 232432 DEBUG nova.network.neutron [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.655 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Releasing lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.655 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Instance network_info: |[{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.656 232432 DEBUG oslo_concurrency.lockutils [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.656 232432 DEBUG nova.network.neutron [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Refreshing network info cache for port 99180cbe-8710-42bc-abfb-7d8cab714b9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.664 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Start _get_guest_xml network_info=[{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.673 232432 WARNING nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.688 232432 DEBUG nova.virt.libvirt.host [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.690 232432 DEBUG nova.virt.libvirt.host [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.694 232432 DEBUG nova.virt.libvirt.host [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.696 232432 DEBUG nova.virt.libvirt.host [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.699 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.700 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.701 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.702 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.703 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.703 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.704 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.705 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.706 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.706 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.707 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.708 232432 DEBUG nova.virt.hardware [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:04:22 compute-2 nova_compute[232428]: 2025-11-29 08:04:22.716 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3285655624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.199 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.241 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.247 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2752386618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.681 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.685 232432 DEBUG nova.virt.libvirt.vif [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:18Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.686 232432 DEBUG nova.network.os_vif_util [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.688 232432 DEBUG nova.network.os_vif_util [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.690 232432 DEBUG nova.objects.instance [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lazy-loading 'pci_devices' on Instance uuid a8918e2f-17e5-477f-b975-0efb4898396f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.709 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <uuid>a8918e2f-17e5-477f-b975-0efb4898396f</uuid>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <name>instance-0000004a</name>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachInterfacesV270Test-server-1029060762</nova:name>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:04:22</nova:creationTime>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:user uuid="1a9dbe11399b4d34ad4c7a0a4098b324">tempest-AttachInterfacesV270Test-777547104-project-member</nova:user>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:project uuid="bfe413599ef1478a806f8de6ae727e3a">tempest-AttachInterfacesV270Test-777547104</nova:project>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <nova:port uuid="99180cbe-8710-42bc-abfb-7d8cab714b9f">
Nov 29 08:04:23 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <system>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="serial">a8918e2f-17e5-477f-b975-0efb4898396f</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="uuid">a8918e2f-17e5-477f-b975-0efb4898396f</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </system>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <os>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </os>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <features>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </features>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a8918e2f-17e5-477f-b975-0efb4898396f_disk">
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a8918e2f-17e5-477f-b975-0efb4898396f_disk.config">
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </source>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:04:23 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:5b:1a:bc"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <target dev="tap99180cbe-87"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/console.log" append="off"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <video>
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </video>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:04:23 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:04:23 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:04:23 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:04:23 compute-2 nova_compute[232428]: </domain>
Nov 29 08:04:23 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.712 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Preparing to wait for external event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.712 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.713 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.713 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.715 232432 DEBUG nova.virt.libvirt.vif [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:18Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.715 232432 DEBUG nova.network.os_vif_util [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.716 232432 DEBUG nova.network.os_vif_util [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.718 232432 DEBUG os_vif [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.720 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.721 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.722 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.733 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.733 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99180cbe-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.735 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99180cbe-87, col_values=(('external_ids', {'iface-id': '99180cbe-8710-42bc-abfb-7d8cab714b9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:1a:bc', 'vm-uuid': 'a8918e2f-17e5-477f-b975-0efb4898396f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:23 compute-2 NetworkManager[48993]: <info>  [1764403463.7402] manager: (tap99180cbe-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.745 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.750 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.750 232432 INFO os_vif [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87')
Nov 29 08:04:23 compute-2 ceph-mon[77138]: pgmap v1897: 305 pgs: 305 active+clean; 211 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Nov 29 08:04:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3285655624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2752386618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.851 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.852 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.853 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No VIF found with MAC fa:16:3e:5b:1a:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.853 232432 INFO nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Using config drive
Nov 29 08:04:23 compute-2 nova_compute[232428]: 2025-11-29 08:04:23.890 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.563 232432 INFO nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Creating config drive at /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.576 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dj1qu6i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.722 232432 DEBUG nova.network.neutron [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updated VIF entry in instance network info cache for port 99180cbe-8710-42bc-abfb-7d8cab714b9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.724 232432 DEBUG nova.network.neutron [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.743 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dj1qu6i" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.791 232432 DEBUG nova.storage.rbd_utils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] rbd image a8918e2f-17e5-477f-b975-0efb4898396f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.796 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config a8918e2f-17e5-477f-b975-0efb4898396f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:24 compute-2 nova_compute[232428]: 2025-11-29 08:04:24.845 232432 DEBUG oslo_concurrency.lockutils [req-77745085-869a-4fa1-a3db-7a442c435a19 req-b684d5a3-63d1-4e49-986e-b054a741d213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.162 232432 DEBUG oslo_concurrency.processutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config a8918e2f-17e5-477f-b975-0efb4898396f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.163 232432 INFO nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Deleting local config drive /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f/disk.config because it was imported into RBD.
Nov 29 08:04:25 compute-2 kernel: tap99180cbe-87: entered promiscuous mode
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.2501] manager: (tap99180cbe-87): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 08:04:25 compute-2 ovn_controller[134375]: 2025-11-29T08:04:25Z|00272|binding|INFO|Claiming lport 99180cbe-8710-42bc-abfb-7d8cab714b9f for this chassis.
Nov 29 08:04:25 compute-2 ovn_controller[134375]: 2025-11-29T08:04:25Z|00273|binding|INFO|99180cbe-8710-42bc-abfb-7d8cab714b9f: Claiming fa:16:3e:5b:1a:bc 10.100.0.14
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.278 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:1a:bc 10.100.0.14'], port_security=['fa:16:3e:5b:1a:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a8918e2f-17e5-477f-b975-0efb4898396f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88300110-1480-4699-927f-98480530ac41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8a2e3c9-6f5d-4616-a527-2f3d9ba0c8f2, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=99180cbe-8710-42bc-abfb-7d8cab714b9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.279 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 99180cbe-8710-42bc-abfb-7d8cab714b9f in datapath f5fc40b4-a0df-48ac-ae34-b50895d87260 bound to our chassis
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.283 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5fc40b4-a0df-48ac-ae34-b50895d87260
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.301 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[399c94c5-fb97-477d-960d-339c9610a36e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.302 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5fc40b4-a1 in ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:25 compute-2 ovn_controller[134375]: 2025-11-29T08:04:25Z|00274|binding|INFO|Setting lport 99180cbe-8710-42bc-abfb-7d8cab714b9f ovn-installed in OVS
Nov 29 08:04:25 compute-2 ovn_controller[134375]: 2025-11-29T08:04:25Z|00275|binding|INFO|Setting lport 99180cbe-8710-42bc-abfb-7d8cab714b9f up in Southbound
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.304 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5fc40b4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.304 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8422b7ae-9831-4e0d-a4a6-c01198afdcd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.306 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3860ae-8695-4f52-8db5-635e4f80c5d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 sudo[264427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:25 compute-2 systemd-machined[194747]: New machine qemu-32-instance-0000004a.
Nov 29 08:04:25 compute-2 sudo[264427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.331 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[67e5375b-24c4-4710-a05c-706583dd97b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 sudo[264427]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:25 compute-2 systemd[1]: Started Virtual Machine qemu-32-instance-0000004a.
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.345 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4d34db1c-9ff6-4536-b604-07422c7e4fd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 systemd-udevd[264470]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.3775] device (tap99180cbe-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.3788] device (tap99180cbe-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.388 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[956cb49d-51cc-47eb-b44b-1356e6b6b1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 systemd-udevd[264485]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.3949] manager: (tapf5fc40b4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.394 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3b728941-6276-4a53-b9e4-34af915887f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 sudo[264465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:25 compute-2 sudo[264465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 sudo[264465]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.427 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[aef90904-c246-4758-aec1-744538dff6a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.430 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f487a189-63be-4cc3-baaa-a2c5f2564b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.441 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.4568] device (tapf5fc40b4-a0): carrier: link connected
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.463 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[99bd32e3-6ff6-459b-acdc-0698fbbfe2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.486 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[21ef3a71-aaac-46de-a5d9-f1a8009d8f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5fc40b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:67:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643172, 'reachable_time': 34242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264524, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.509 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe1dd0d-3743-4896-bd07-6be67cb89dcd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:6778'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643172, 'tstamp': 643172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264544, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 sudo[264520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:25 compute-2 sudo[264520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 sudo[264520]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.532 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[31fe182a-7bb9-4eec-89c4-32c36b8c8205]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5fc40b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:67:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643172, 'reachable_time': 34242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264546, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.576 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[782beb83-8d23-4148-83ba-ffd1c230154b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 sudo[264548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:25 compute-2 sudo[264548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 sudo[264548]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.639 232432 DEBUG nova.compute.manager [req-38a86cba-d26f-44a4-bca5-f6a9ecbbac85 req-22f9a7d9-01da-49aa-a6ae-af7c9a850964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.640 232432 DEBUG oslo_concurrency.lockutils [req-38a86cba-d26f-44a4-bca5-f6a9ecbbac85 req-22f9a7d9-01da-49aa-a6ae-af7c9a850964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.641 232432 DEBUG oslo_concurrency.lockutils [req-38a86cba-d26f-44a4-bca5-f6a9ecbbac85 req-22f9a7d9-01da-49aa-a6ae-af7c9a850964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.641 232432 DEBUG oslo_concurrency.lockutils [req-38a86cba-d26f-44a4-bca5-f6a9ecbbac85 req-22f9a7d9-01da-49aa-a6ae-af7c9a850964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.641 232432 DEBUG nova.compute.manager [req-38a86cba-d26f-44a4-bca5-f6a9ecbbac85 req-22f9a7d9-01da-49aa-a6ae-af7c9a850964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Processing event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.659 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[160dba3e-7633-495e-915d-5fbbfed8d8d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.660 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5fc40b4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.661 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.661 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5fc40b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:25 compute-2 sudo[264577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:25 compute-2 sudo[264577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 sudo[264577]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:25 compute-2 NetworkManager[48993]: <info>  [1764403465.6994] manager: (tapf5fc40b4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.698 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 kernel: tapf5fc40b4-a0: entered promiscuous mode
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.702 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.707 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5fc40b4-a0, col_values=(('external_ids', {'iface-id': '588bb5af-cb8d-4a53-9076-9b467211a395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 ovn_controller[134375]: 2025-11-29T08:04:25Z|00276|binding|INFO|Releasing lport 588bb5af-cb8d-4a53-9076-9b467211a395 from this chassis (sb_readonly=0)
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.713 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5fc40b4-a0df-48ac-ae34-b50895d87260.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5fc40b4-a0df-48ac-ae34-b50895d87260.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.716 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[878803e5-6b26-44f3-9912-ef907d3c53a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.718 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-f5fc40b4-a0df-48ac-ae34-b50895d87260
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/f5fc40b4-a0df-48ac-ae34-b50895d87260.pid.haproxy
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID f5fc40b4-a0df-48ac-ae34-b50895d87260
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:04:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:25.720 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'env', 'PROCESS_TAG=haproxy-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5fc40b4-a0df-48ac-ae34-b50895d87260.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:04:25 compute-2 nova_compute[232428]: 2025-11-29 08:04:25.728 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:25 compute-2 sudo[264611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:04:25 compute-2 sudo[264611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:25 compute-2 ceph-mon[77138]: pgmap v1898: 305 pgs: 305 active+clean; 237 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 209 op/s
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.028 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403466.0281591, a8918e2f-17e5-477f-b975-0efb4898396f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.030 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] VM Started (Lifecycle Event)
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.032 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.037 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.040 232432 INFO nova.virt.libvirt.driver [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Instance spawned successfully.
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.040 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.053 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.057 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.065 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.065 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.066 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.066 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.067 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.067 232432 DEBUG nova.virt.libvirt.driver [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.101 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.102 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403466.028407, a8918e2f-17e5-477f-b975-0efb4898396f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.102 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] VM Paused (Lifecycle Event)
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.140 232432 INFO nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Took 7.45 seconds to spawn the instance on the hypervisor.
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.140 232432 DEBUG nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.142 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.147 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403466.035539, a8918e2f-17e5-477f-b975-0efb4898396f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.148 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] VM Resumed (Lifecycle Event)
Nov 29 08:04:26 compute-2 podman[264710]: 2025-11-29 08:04:26.17518318 +0000 UTC m=+0.070350081 container create 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.180 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.182 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:26 compute-2 podman[264710]: 2025-11-29 08:04:26.140917298 +0000 UTC m=+0.036084229 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.265 232432 INFO nova.compute.manager [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Took 9.36 seconds to build instance.
Nov 29 08:04:26 compute-2 systemd[1]: Started libpod-conmon-267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12.scope.
Nov 29 08:04:26 compute-2 nova_compute[232428]: 2025-11-29 08:04:26.299 232432 DEBUG oslo_concurrency.lockutils [None req-1db845e5-1617-4cde-8d21-0fda63f41fe1 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:26 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:04:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68615d67d58a235e818b00ea93678b2c2bd6bf8e49855900194298c0abaf0817/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:26 compute-2 podman[264710]: 2025-11-29 08:04:26.393897081 +0000 UTC m=+0.289064012 container init 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:26 compute-2 podman[264710]: 2025-11-29 08:04:26.400016881 +0000 UTC m=+0.295183782 container start 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:04:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:26 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [NOTICE]   (264740) : New worker (264748) forked
Nov 29 08:04:26 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [NOTICE]   (264740) : Loading success.
Nov 29 08:04:26 compute-2 sudo[264611]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:26 compute-2 ovn_controller[134375]: 2025-11-29T08:04:26Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:99:5d 10.1.1.49
Nov 29 08:04:26 compute-2 ovn_controller[134375]: 2025-11-29T08:04:26Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:99:5d 10.1.1.49
Nov 29 08:04:26 compute-2 ceph-mon[77138]: pgmap v1899: 305 pgs: 305 active+clean; 274 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 231 op/s
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:04:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:ef:13 10.2.2.100
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:ef:13 10.2.2.100
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:68:08 10.1.1.250
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:68:08 10.1.1.250
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:db:b2 10.1.1.57
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:db:b2 10.1.1.57
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:8a:76 10.100.0.7
Nov 29 08:04:27 compute-2 ovn_controller[134375]: 2025-11-29T08:04:27Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:8a:76 10.100.0.7
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.914 232432 DEBUG nova.compute.manager [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.916 232432 DEBUG oslo_concurrency.lockutils [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.917 232432 DEBUG oslo_concurrency.lockutils [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.918 232432 DEBUG oslo_concurrency.lockutils [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.918 232432 DEBUG nova.compute.manager [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:27 compute-2 nova_compute[232428]: 2025-11-29 08:04:27.919 232432 WARNING nova.compute.manager [req-0a10bbd4-06eb-40c3-a333-b28eda8ae86b req-199663c0-d4ea-4cd1-a009-e65a49dfe1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received unexpected event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f for instance with vm_state active and task_state None.
Nov 29 08:04:27 compute-2 ceph-mon[77138]: osdmap e251: 3 total, 3 up, 3 in
Nov 29 08:04:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 08:04:28 compute-2 ovn_controller[134375]: 2025-11-29T08:04:28Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:1d:3f 10.2.2.200
Nov 29 08:04:28 compute-2 ovn_controller[134375]: 2025-11-29T08:04:28Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:1d:3f 10.2.2.200
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.057 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "interface-a8918e2f-17e5-477f-b975-0efb4898396f-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.058 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "interface-a8918e2f-17e5-477f-b975-0efb4898396f-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.060 232432 DEBUG nova.objects.instance [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lazy-loading 'flavor' on Instance uuid a8918e2f-17e5-477f-b975-0efb4898396f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.092 232432 DEBUG nova.objects.instance [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lazy-loading 'pci_requests' on Instance uuid a8918e2f-17e5-477f-b975-0efb4898396f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.126 232432 DEBUG nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:04:28 compute-2 ovn_controller[134375]: 2025-11-29T08:04:28Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:55:80 10.1.1.126
Nov 29 08:04:28 compute-2 ovn_controller[134375]: 2025-11-29T08:04:28Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:55:80 10.1.1.126
Nov 29 08:04:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:28.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:28 compute-2 nova_compute[232428]: 2025-11-29 08:04:28.739 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 08:04:29 compute-2 ceph-mon[77138]: pgmap v1901: 305 pgs: 305 active+clean; 274 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 6.2 MiB/s wr, 149 op/s
Nov 29 08:04:29 compute-2 ceph-mon[77138]: osdmap e252: 3 total, 3 up, 3 in
Nov 29 08:04:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3085072095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3085072095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:30 compute-2 nova_compute[232428]: 2025-11-29 08:04:30.361 232432 DEBUG nova.policy [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a9dbe11399b4d34ad4c7a0a4098b324', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:04:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:30 compute-2 nova_compute[232428]: 2025-11-29 08:04:30.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:30.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:30 compute-2 ceph-mon[77138]: osdmap e253: 3 total, 3 up, 3 in
Nov 29 08:04:31 compute-2 ceph-mon[77138]: pgmap v1904: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 335 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 207 op/s
Nov 29 08:04:31 compute-2 nova_compute[232428]: 2025-11-29 08:04:31.740 232432 DEBUG nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Successfully created port: e517291d-acb1-46c9-baa1-17a748ef1ded _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:04:32 compute-2 nova_compute[232428]: 2025-11-29 08:04:32.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:32.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:33 compute-2 ceph-mon[77138]: pgmap v1905: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 405 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 14 MiB/s wr, 522 op/s
Nov 29 08:04:33 compute-2 nova_compute[232428]: 2025-11-29 08:04:33.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:34 compute-2 sudo[264761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:34 compute-2 sudo[264761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:34 compute-2 sudo[264761]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:34 compute-2 sudo[264786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:04:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:34 compute-2 sudo[264786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:34 compute-2 sudo[264786]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.757 232432 DEBUG nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Successfully updated port: e517291d-acb1-46c9-baa1-17a748ef1ded _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.781 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.781 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquired lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.781 232432 DEBUG nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.893 232432 DEBUG nova.compute.manager [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-changed-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.894 232432 DEBUG nova.compute.manager [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Refreshing instance network info cache due to event network-changed-e517291d-acb1-46c9-baa1-17a748ef1ded. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.894 232432 DEBUG oslo_concurrency.lockutils [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:34 compute-2 ceph-mon[77138]: pgmap v1906: 305 pgs: 305 active+clean; 405 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 448 op/s
Nov 29 08:04:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:04:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:04:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:34 compute-2 nova_compute[232428]: 2025-11-29 08:04:34.988 232432 WARNING nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] f5fc40b4-a0df-48ac-ae34-b50895d87260 already exists in list: networks containing: ['f5fc40b4-a0df-48ac-ae34-b50895d87260']. ignoring it
Nov 29 08:04:35 compute-2 nova_compute[232428]: 2025-11-29 08:04:35.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:35 compute-2 podman[264812]: 2025-11-29 08:04:35.678534974 +0000 UTC m=+0.066131608 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:04:36 compute-2 nova_compute[232428]: 2025-11-29 08:04:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:36 compute-2 nova_compute[232428]: 2025-11-29 08:04:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 08:04:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:36.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:37 compute-2 ceph-mon[77138]: pgmap v1907: 305 pgs: 305 active+clean; 405 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 10 MiB/s wr, 413 op/s
Nov 29 08:04:37 compute-2 ceph-mon[77138]: osdmap e254: 3 total, 3 up, 3 in
Nov 29 08:04:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:37.651 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:37.653 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:04:37 compute-2 nova_compute[232428]: 2025-11-29 08:04:37.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:38.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2163040213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3157694843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.532 232432 DEBUG nova.network.neutron [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.563 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Releasing lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.564 232432 DEBUG oslo_concurrency.lockutils [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.565 232432 DEBUG nova.network.neutron [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Refreshing network info cache for port e517291d-acb1-46c9-baa1-17a748ef1ded _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.567 232432 DEBUG nova.virt.libvirt.vif [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:26Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.568 232432 DEBUG nova.network.os_vif_util [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.569 232432 DEBUG nova.network.os_vif_util [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.569 232432 DEBUG os_vif [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.570 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.571 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.573 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.574 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape517291d-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.574 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape517291d-ac, col_values=(('external_ids', {'iface-id': 'e517291d-acb1-46c9-baa1-17a748ef1ded', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:66:5a', 'vm-uuid': 'a8918e2f-17e5-477f-b975-0efb4898396f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.576 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 NetworkManager[48993]: <info>  [1764403478.5771] manager: (tape517291d-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.583 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.584 232432 INFO os_vif [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac')
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.584 232432 DEBUG nova.virt.libvirt.vif [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:26Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.585 232432 DEBUG nova.network.os_vif_util [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.585 232432 DEBUG nova.network.os_vif_util [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.588 232432 DEBUG nova.virt.libvirt.guest [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] attach device xml: <interface type="ethernet">
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:77:66:5a"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <target dev="tape517291d-ac"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]: </interface>
Nov 29 08:04:38 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:04:38 compute-2 kernel: tape517291d-ac: entered promiscuous mode
Nov 29 08:04:38 compute-2 NetworkManager[48993]: <info>  [1764403478.6033] manager: (tape517291d-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 08:04:38 compute-2 ovn_controller[134375]: 2025-11-29T08:04:38Z|00277|binding|INFO|Claiming lport e517291d-acb1-46c9-baa1-17a748ef1ded for this chassis.
Nov 29 08:04:38 compute-2 ovn_controller[134375]: 2025-11-29T08:04:38Z|00278|binding|INFO|e517291d-acb1-46c9-baa1-17a748ef1ded: Claiming fa:16:3e:77:66:5a 10.100.0.11
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.609 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:66:5a 10.100.0.11'], port_security=['fa:16:3e:77:66:5a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a8918e2f-17e5-477f-b975-0efb4898396f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88300110-1480-4699-927f-98480530ac41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8a2e3c9-6f5d-4616-a527-2f3d9ba0c8f2, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e517291d-acb1-46c9-baa1-17a748ef1ded) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.610 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e517291d-acb1-46c9-baa1-17a748ef1ded in datapath f5fc40b4-a0df-48ac-ae34-b50895d87260 bound to our chassis
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.612 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5fc40b4-a0df-48ac-ae34-b50895d87260
Nov 29 08:04:38 compute-2 ovn_controller[134375]: 2025-11-29T08:04:38Z|00279|binding|INFO|Setting lport e517291d-acb1-46c9-baa1-17a748ef1ded ovn-installed in OVS
Nov 29 08:04:38 compute-2 ovn_controller[134375]: 2025-11-29T08:04:38Z|00280|binding|INFO|Setting lport e517291d-acb1-46c9-baa1-17a748ef1ded up in Southbound
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.629 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[17bdab8b-4090-43fd-9927-9bc86c76a074]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:38 compute-2 systemd-udevd[264839]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.655 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 NetworkManager[48993]: <info>  [1764403478.6607] device (tape517291d-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:38 compute-2 NetworkManager[48993]: <info>  [1764403478.6615] device (tape517291d-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.669 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c86b20c6-38ad-4a04-95ff-9ee865efc758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.672 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[30838bf2-5377-47cf-a8a1-f8d6ba6ddf36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.698 232432 DEBUG nova.virt.libvirt.driver [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.699 232432 DEBUG nova.virt.libvirt.driver [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.699 232432 DEBUG nova.virt.libvirt.driver [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No VIF found with MAC fa:16:3e:5b:1a:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.699 232432 DEBUG nova.virt.libvirt.driver [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] No VIF found with MAC fa:16:3e:77:66:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.705 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2a81d6-8ffd-4732-82b3-bced1a0953f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.726 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5b2e21-f560-4dbb-ab79-ccfa7acfcdf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5fc40b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:67:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643172, 'reachable_time': 34242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264845, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.732 232432 DEBUG nova.virt.libvirt.guest [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:name>tempest-AttachInterfacesV270Test-server-1029060762</nova:name>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:04:38</nova:creationTime>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:user uuid="1a9dbe11399b4d34ad4c7a0a4098b324">tempest-AttachInterfacesV270Test-777547104-project-member</nova:user>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:project uuid="bfe413599ef1478a806f8de6ae727e3a">tempest-AttachInterfacesV270Test-777547104</nova:project>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:port uuid="99180cbe-8710-42bc-abfb-7d8cab714b9f">
Nov 29 08:04:38 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     <nova:port uuid="e517291d-acb1-46c9-baa1-17a748ef1ded">
Nov 29 08:04:38 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:04:38 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:04:38 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:04:38 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:04:38 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.749 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9203b1fd-b712-4ad2-afa6-86f1e7703148]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5fc40b4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643187, 'tstamp': 643187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264846, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5fc40b4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643192, 'tstamp': 643192}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264846, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.751 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5fc40b4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.756 232432 DEBUG oslo_concurrency.lockutils [None req-7a0bb9a0-8869-4c33-a72d-205993f081ca 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "interface-a8918e2f-17e5-477f-b975-0efb4898396f-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 nova_compute[232428]: 2025-11-29 08:04:38.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.790 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5fc40b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.791 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.792 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5fc40b4-a0, col_values=(('external_ids', {'iface-id': '588bb5af-cb8d-4a53-9076-9b467211a395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:38.794 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.455 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.456 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.456 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.456 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb4e9fda-828d-4b2f-84a9-4fbbcb213650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 08:04:39 compute-2 ceph-mon[77138]: pgmap v1909: 305 pgs: 305 active+clean; 405 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.8 MiB/s wr, 340 op/s
Nov 29 08:04:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3283760105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:39 compute-2 podman[264848]: 2025-11-29 08:04:39.694659859 +0000 UTC m=+0.081817390 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.915 232432 DEBUG nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.916 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.919 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.919 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.920 232432 DEBUG nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.921 232432 WARNING nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received unexpected event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded for instance with vm_state active and task_state None.
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.921 232432 DEBUG nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.921 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.922 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.922 232432 DEBUG oslo_concurrency.lockutils [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.923 232432 DEBUG nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:39 compute-2 nova_compute[232428]: 2025-11-29 08:04:39.923 232432 WARNING nova.compute.manager [req-9f696bad-4664-49c0-abef-72c7ee96eba2 req-9b0b60bb-5d63-4479-a9bd-e56edeb49c98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received unexpected event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded for instance with vm_state active and task_state None.
Nov 29 08:04:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-2 ceph-mon[77138]: osdmap e255: 3 total, 3 up, 3 in
Nov 29 08:04:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2874465640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.820 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.820 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.821 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.821 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.822 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.824 232432 INFO nova.compute.manager [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Terminating instance
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.825 232432 DEBUG nova.compute.manager [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:04:40 compute-2 kernel: tap99180cbe-87 (unregistering): left promiscuous mode
Nov 29 08:04:40 compute-2 NetworkManager[48993]: <info>  [1764403480.8887] device (tap99180cbe-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:40 compute-2 kernel: tape517291d-ac (unregistering): left promiscuous mode
Nov 29 08:04:40 compute-2 NetworkManager[48993]: <info>  [1764403480.9106] device (tape517291d-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.953 232432 DEBUG nova.network.neutron [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updated VIF entry in instance network info cache for port e517291d-acb1-46c9-baa1-17a748ef1ded. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.954 232432 DEBUG nova.network.neutron [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [{"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00281|binding|INFO|Releasing lport 99180cbe-8710-42bc-abfb-7d8cab714b9f from this chassis (sb_readonly=0)
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00282|binding|INFO|Setting lport 99180cbe-8710-42bc-abfb-7d8cab714b9f down in Southbound
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.959 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00283|binding|INFO|Removing iface tap99180cbe-87 ovn-installed in OVS
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00284|binding|INFO|Releasing lport e517291d-acb1-46c9-baa1-17a748ef1ded from this chassis (sb_readonly=1)
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00285|if_status|INFO|Dropped 2 log messages in last 512 seconds (most recently, 512 seconds ago) due to excessive rate
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00286|if_status|INFO|Not setting lport e517291d-acb1-46c9-baa1-17a748ef1ded down as sb is readonly
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00287|binding|INFO|Removing iface tape517291d-ac ovn-installed in OVS
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-2 ovn_controller[134375]: 2025-11-29T08:04:40Z|00288|binding|INFO|Setting lport e517291d-acb1-46c9-baa1-17a748ef1ded down in Southbound
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.983 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:1a:bc 10.100.0.14'], port_security=['fa:16:3e:5b:1a:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a8918e2f-17e5-477f-b975-0efb4898396f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88300110-1480-4699-927f-98480530ac41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8a2e3c9-6f5d-4616-a527-2f3d9ba0c8f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=99180cbe-8710-42bc-abfb-7d8cab714b9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.985 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 99180cbe-8710-42bc-abfb-7d8cab714b9f in datapath f5fc40b4-a0df-48ac-ae34-b50895d87260 unbound from our chassis
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.989 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.988 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:66:5a 10.100.0.11'], port_security=['fa:16:3e:77:66:5a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a8918e2f-17e5-477f-b975-0efb4898396f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfe413599ef1478a806f8de6ae727e3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88300110-1480-4699-927f-98480530ac41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8a2e3c9-6f5d-4616-a527-2f3d9ba0c8f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e517291d-acb1-46c9-baa1-17a748ef1ded) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.994 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5fc40b4-a0df-48ac-ae34-b50895d87260, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.995 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d64ed8-ce5f-4310-bee4-e3432048555a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:40.996 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260 namespace which is not needed anymore
Nov 29 08:04:40 compute-2 nova_compute[232428]: 2025-11-29 08:04:40.996 232432 DEBUG oslo_concurrency.lockutils [req-3b4afc0b-a0ef-4620-ad59-9abcf3148462 req-3c795dce-5d43-4283-853c-965647f25cd7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a8918e2f-17e5-477f-b975-0efb4898396f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.010 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 29 08:04:41 compute-2 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Consumed 14.585s CPU time.
Nov 29 08:04:41 compute-2 systemd-machined[194747]: Machine qemu-32-instance-0000004a terminated.
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [NOTICE]   (264740) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [NOTICE]   (264740) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [WARNING]  (264740) : Exiting Master process...
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [WARNING]  (264740) : Exiting Master process...
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [ALERT]    (264740) : Current worker (264748) exited with code 143 (Terminated)
Nov 29 08:04:41 compute-2 neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260[264730]: [WARNING]  (264740) : All workers exited. Exiting... (0)
Nov 29 08:04:41 compute-2 systemd[1]: libpod-267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12.scope: Deactivated successfully.
Nov 29 08:04:41 compute-2 podman[264895]: 2025-11-29 08:04:41.231832464 +0000 UTC m=+0.073226002 container died 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:04:41 compute-2 NetworkManager[48993]: <info>  [1764403481.2689] manager: (tape517291d-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 08:04:41 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:41 compute-2 systemd[1]: var-lib-containers-storage-overlay-68615d67d58a235e818b00ea93678b2c2bd6bf8e49855900194298c0abaf0817-merged.mount: Deactivated successfully.
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.292 232432 INFO nova.virt.libvirt.driver [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Instance destroyed successfully.
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.293 232432 DEBUG nova.objects.instance [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lazy-loading 'resources' on Instance uuid a8918e2f-17e5-477f-b975-0efb4898396f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:41 compute-2 podman[264895]: 2025-11-29 08:04:41.300528962 +0000 UTC m=+0.141922490 container cleanup 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:04:41 compute-2 systemd[1]: libpod-conmon-267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12.scope: Deactivated successfully.
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.314 232432 DEBUG nova.virt.libvirt.vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:26Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.315 232432 DEBUG nova.network.os_vif_util [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "address": "fa:16:3e:5b:1a:bc", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99180cbe-87", "ovs_interfaceid": "99180cbe-8710-42bc-abfb-7d8cab714b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.316 232432 DEBUG nova.network.os_vif_util [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.318 232432 DEBUG os_vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.323 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99180cbe-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.330 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.336 232432 INFO os_vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:1a:bc,bridge_name='br-int',has_traffic_filtering=True,id=99180cbe-8710-42bc-abfb-7d8cab714b9f,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99180cbe-87')
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.337 232432 DEBUG nova.virt.libvirt.vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1029060762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1029060762',id=74,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfe413599ef1478a806f8de6ae727e3a',ramdisk_id='',reservation_id='r-9xzciiz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-777547104',owner_user_name='tempest-AttachInterfacesV270Test-777547104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:26Z,user_data=None,user_id='1a9dbe11399b4d34ad4c7a0a4098b324',uuid=a8918e2f-17e5-477f-b975-0efb4898396f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.338 232432 DEBUG nova.network.os_vif_util [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converting VIF {"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.338 232432 DEBUG nova.network.os_vif_util [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.339 232432 DEBUG os_vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.340 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.340 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape517291d-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.342 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.346 232432 INFO os_vif [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:66:5a,bridge_name='br-int',has_traffic_filtering=True,id=e517291d-acb1-46c9-baa1-17a748ef1ded,network=Network(f5fc40b4-a0df-48ac-ae34-b50895d87260),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape517291d-ac')
Nov 29 08:04:41 compute-2 podman[264943]: 2025-11-29 08:04:41.393650674 +0000 UTC m=+0.057770018 container remove 267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.404 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7fa3ca-c718-493a-ae8a-a2d0166609e0]: (4, ('Sat Nov 29 08:04:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260 (267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12)\n267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12\nSat Nov 29 08:04:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260 (267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12)\n267f4ef0df945304f1c32e06ff83e2c0df336289fc824b68a1f9de62d971ec12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.406 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5e3800-9dfe-418c-baba-e29f9904c5eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.407 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5fc40b4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 kernel: tapf5fc40b4-a0: left promiscuous mode
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.436 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb40ec12-10c1-4c43-a402-2787507a0a9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.451 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2fae046a-de7d-41be-8de7-02d70c405aa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.453 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[006651ad-ab8c-4797-b2c3-c20d016731b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.482 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[843465d3-3341-45ea-99b0-2ba80ffc6984]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643164, 'reachable_time': 15443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264976, 'error': None, 'target': 'ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 systemd[1]: run-netns-ovnmeta\x2df5fc40b4\x2da0df\x2d48ac\x2dae34\x2db50895d87260.mount: Deactivated successfully.
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.488 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5fc40b4-a0df-48ac-ae34-b50895d87260 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.488 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[110ce8b6-1b5f-4666-bba3-b423dda0496b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.490 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e517291d-acb1-46c9-baa1-17a748ef1ded in datapath f5fc40b4-a0df-48ac-ae34-b50895d87260 unbound from our chassis
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.492 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5fc40b4-a0df-48ac-ae34-b50895d87260, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:41.493 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1a1da7-67d1-43ea-8e22-33ad7d8eadc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:41 compute-2 ceph-mon[77138]: pgmap v1911: 305 pgs: 305 active+clean; 370 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 42 KiB/s wr, 51 op/s
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.845 232432 INFO nova.virt.libvirt.driver [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Deleting instance files /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f_del
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.846 232432 INFO nova.virt.libvirt.driver [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Deletion of /var/lib/nova/instances/a8918e2f-17e5-477f-b975-0efb4898396f_del complete
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.934 232432 INFO nova.compute.manager [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Took 1.11 seconds to destroy the instance on the hypervisor.
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.935 232432 DEBUG oslo.service.loopingcall [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.936 232432 DEBUG nova.compute.manager [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:04:41 compute-2 nova_compute[232428]: 2025-11-29 08:04:41.936 232432 DEBUG nova.network.neutron [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.079 232432 DEBUG nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-unplugged-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.080 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.081 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.081 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.081 232432 DEBUG nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-unplugged-e517291d-acb1-46c9-baa1-17a748ef1ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.081 232432 DEBUG nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-unplugged-e517291d-acb1-46c9-baa1-17a748ef1ded for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.082 232432 DEBUG nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.082 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.082 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.082 232432 DEBUG oslo_concurrency.lockutils [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.082 232432 DEBUG nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.083 232432 WARNING nova.compute.manager [req-1eda7e60-efa4-4137-b7e2-5c478c642424 req-bd6155e5-c19a-4651-a6d8-59b016acf47e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received unexpected event network-vif-plugged-e517291d-acb1-46c9-baa1-17a748ef1ded for instance with vm_state active and task_state deleting.
Nov 29 08:04:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.595 232432 DEBUG nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-unplugged-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.596 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.596 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.597 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.598 232432 DEBUG nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-unplugged-99180cbe-8710-42bc-abfb-7d8cab714b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.598 232432 DEBUG nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-unplugged-99180cbe-8710-42bc-abfb-7d8cab714b9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.599 232432 DEBUG nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.599 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.600 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.600 232432 DEBUG oslo_concurrency.lockutils [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.601 232432 DEBUG nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] No waiting events found dispatching network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:42 compute-2 nova_compute[232428]: 2025-11-29 08:04:42.601 232432 WARNING nova.compute.manager [req-257a66b9-bc01-4412-a8df-d051066b1ca8 req-7c084f36-36c2-4021-a353-cec8253c739b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received unexpected event network-vif-plugged-99180cbe-8710-42bc-abfb-7d8cab714b9f for instance with vm_state active and task_state deleting.
Nov 29 08:04:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:42.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:43 compute-2 ceph-mon[77138]: pgmap v1912: 305 pgs: 305 active+clean; 227 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 2.4 MiB/s wr, 168 op/s
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.641 232432 DEBUG nova.compute.manager [req-efe9c0b4-d9e0-40e9-ab7f-eb58f3a7ba9f req-450124d9-b5bd-405f-97b2-34314a207e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-deleted-99180cbe-8710-42bc-abfb-7d8cab714b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.642 232432 INFO nova.compute.manager [req-efe9c0b4-d9e0-40e9-ab7f-eb58f3a7ba9f req-450124d9-b5bd-405f-97b2-34314a207e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Neutron deleted interface 99180cbe-8710-42bc-abfb-7d8cab714b9f; detaching it from the instance and deleting it from the info cache
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.642 232432 DEBUG nova.network.neutron [req-efe9c0b4-d9e0-40e9-ab7f-eb58f3a7ba9f req-450124d9-b5bd-405f-97b2-34314a207e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [{"id": "e517291d-acb1-46c9-baa1-17a748ef1ded", "address": "fa:16:3e:77:66:5a", "network": {"id": "f5fc40b4-a0df-48ac-ae34-b50895d87260", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1734390708-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfe413599ef1478a806f8de6ae727e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape517291d-ac", "ovs_interfaceid": "e517291d-acb1-46c9-baa1-17a748ef1ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.678 232432 DEBUG nova.compute.manager [req-efe9c0b4-d9e0-40e9-ab7f-eb58f3a7ba9f req-450124d9-b5bd-405f-97b2-34314a207e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Detach interface failed, port_id=99180cbe-8710-42bc-abfb-7d8cab714b9f, reason: Instance a8918e2f-17e5-477f-b975-0efb4898396f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.917 232432 DEBUG nova.network.neutron [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.946 232432 INFO nova.compute.manager [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Took 2.01 seconds to deallocate network for instance.
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.993 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:43 compute-2 nova_compute[232428]: 2025-11-29 08:04:43.994 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.091 232432 DEBUG oslo_concurrency.processutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/770430332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.596 232432 DEBUG oslo_concurrency.processutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.607 232432 DEBUG nova.compute.provider_tree [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.646 232432 DEBUG nova.scheduler.client.report [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:44 compute-2 ovn_controller[134375]: 2025-11-29T08:04:44Z|00289|binding|INFO|Releasing lport 6423497c-1eb2-43cf-bb69-b2e44a4e68f0 from this chassis (sb_readonly=0)
Nov 29 08:04:44 compute-2 ovn_controller[134375]: 2025-11-29T08:04:44Z|00290|binding|INFO|Releasing lport 789b4e5b-b51d-4f1a-a76d-9277577aa8c6 from this chassis (sb_readonly=0)
Nov 29 08:04:44 compute-2 ovn_controller[134375]: 2025-11-29T08:04:44Z|00291|binding|INFO|Releasing lport 26f328f7-c06a-4d2e-bace-a49a53d97587 from this chassis (sb_readonly=0)
Nov 29 08:04:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.676 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.710 232432 INFO nova.scheduler.client.report [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Deleted allocations for instance a8918e2f-17e5-477f-b975-0efb4898396f
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.761 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-2 nova_compute[232428]: 2025-11-29 08:04:44.806 232432 DEBUG oslo_concurrency.lockutils [None req-ec7c2f89-b59d-4396-8b0e-98701f86ca73 1a9dbe11399b4d34ad4c7a0a4098b324 bfe413599ef1478a806f8de6ae727e3a - - default default] Lock "a8918e2f-17e5-477f-b975-0efb4898396f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:45 compute-2 nova_compute[232428]: 2025-11-29 08:04:45.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:45 compute-2 sudo[265002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:45 compute-2 sudo[265002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:45 compute-2 sudo[265002]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:45 compute-2 ceph-mon[77138]: pgmap v1913: 305 pgs: 305 active+clean; 172 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 492 KiB/s rd, 3.1 MiB/s wr, 201 op/s
Nov 29 08:04:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/770430332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2607916270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:45 compute-2 sudo[265027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:45 compute-2 sudo[265027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:45 compute-2 sudo[265027]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:45 compute-2 nova_compute[232428]: 2025-11-29 08:04:45.733 232432 DEBUG nova.compute.manager [req-9f246300-cc4a-4724-b51e-8b3b3428bd0f req-17921260-db6c-47fe-9e01-404cba9f2413 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Received event network-vif-deleted-e517291d-acb1-46c9-baa1-17a748ef1ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:46 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 08:04:46 compute-2 nova_compute[232428]: 2025-11-29 08:04:46.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 08:04:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2625207377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/767972050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:46 compute-2 ceph-mon[77138]: osdmap e256: 3 total, 3 up, 3 in
Nov 29 08:04:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:46.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:47 compute-2 ceph-mon[77138]: pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Nov 29 08:04:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:04:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:04:48 compute-2 podman[265053]: 2025-11-29 08:04:48.79693084 +0000 UTC m=+0.192170390 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.226 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.374 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-bb4e9fda-828d-4b2f-84a9-4fbbcb213650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.375 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.375 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.376 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.376 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.376 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.376 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.376 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.403 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.403 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.404 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.404 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.404 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3723265995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:49 compute-2 nova_compute[232428]: 2025-11-29 08:04:49.975 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:49 compute-2 ceph-mon[77138]: pgmap v1916: 305 pgs: 305 active+clean; 121 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 493 KiB/s rd, 3.0 MiB/s wr, 190 op/s
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:50.015 143912 DEBUG eventlet.wsgi.server [-] (143912) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:50.019 143912 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: Accept: */*
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: Connection: close
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: Content-Type: text/plain
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: Host: 169.254.169.254
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: User-Agent: curl/7.84.0
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: X-Forwarded-For: 10.100.0.7
Nov 29 08:04:50 compute-2 ovn_metadata_agent[143796]: X-Ovn-Network-Id: 244beb46-e997-4214-9a18-cb9fb18e5629 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.100 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.100 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.100 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.100 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:50 compute-2 ovn_controller[134375]: 2025-11-29T08:04:50Z|00292|binding|INFO|Releasing lport 6423497c-1eb2-43cf-bb69-b2e44a4e68f0 from this chassis (sb_readonly=0)
Nov 29 08:04:50 compute-2 ovn_controller[134375]: 2025-11-29T08:04:50Z|00293|binding|INFO|Releasing lport 789b4e5b-b51d-4f1a-a76d-9277577aa8c6 from this chassis (sb_readonly=0)
Nov 29 08:04:50 compute-2 ovn_controller[134375]: 2025-11-29T08:04:50Z|00294|binding|INFO|Releasing lport 26f328f7-c06a-4d2e-bace-a49a53d97587 from this chassis (sb_readonly=0)
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.338 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.339 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4301MB free_disk=20.98809814453125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.339 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.339 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.530 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.530 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.530 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:04:50 compute-2 nova_compute[232428]: 2025-11-29 08:04:50.571 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:51 compute-2 ceph-mon[77138]: pgmap v1917: 305 pgs: 305 active+clean; 128 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.9 MiB/s wr, 160 op/s
Nov 29 08:04:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3723265995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4207702613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.070 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.079 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.099 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.137 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.137 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:51 compute-2 nova_compute[232428]: 2025-11-29 08:04:51.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:51.391 143912 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 29 08:04:51 compute-2 haproxy-metadata-proxy-244beb46-e997-4214-9a18-cb9fb18e5629[263868]: 10.100.0.7:59742 [29/Nov/2025:08:04:50.013] listener listener/metadata 0/0/0/1379/1379 200 2530 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 08:04:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:51.392 143912 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2546 time: 1.3743792
Nov 29 08:04:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4207702613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:52.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.666 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.668 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.668 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.669 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.669 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.672 232432 INFO nova.compute.manager [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Terminating instance
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.674 232432 DEBUG nova.compute.manager [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:04:52 compute-2 kernel: tap74928c3b-94 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.7609] device (tap74928c3b-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00295|binding|INFO|Releasing lport 74928c3b-944c-4f17-b1b2-de33221d05ee from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00296|binding|INFO|Setting lport 74928c3b-944c-4f17-b1b2-de33221d05ee down in Southbound
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.782 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00297|binding|INFO|Removing iface tap74928c3b-94 ovn-installed in OVS
Nov 29 08:04:52 compute-2 kernel: tap022e4672-a2 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.7939] device (tap022e4672-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.805 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:76 10.100.0.7'], port_security=['fa:16:3e:9a:8a:76 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-244beb46-e997-4214-9a18-cb9fb18e5629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5842bed6-00c3-4570-884f-978f9e83ba3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=74928c3b-944c-4f17-b1b2-de33221d05ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.807 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 74928c3b-944c-4f17-b1b2-de33221d05ee in datapath 244beb46-e997-4214-9a18-cb9fb18e5629 unbound from our chassis
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.810 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 244beb46-e997-4214-9a18-cb9fb18e5629, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[59d181a4-b1b9-4689-802a-b5637be6aa3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.811 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629 namespace which is not needed anymore
Nov 29 08:04:52 compute-2 kernel: tapfbf3611a-60 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.8261] device (tapfbf3611a-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00298|binding|INFO|Releasing lport 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00299|binding|INFO|Setting lport 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 down in Southbound
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.829 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00300|binding|INFO|Removing iface tap022e4672-a2 ovn-installed in OVS
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.832 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 kernel: tap765356c7-ca (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.853 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:db:b2 10.1.1.57'], port_security=['fa:16:3e:6e:db:b2 10.1.1.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-926269711', 'neutron:cidrs': '10.1.1.57/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-926269711', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e9b4eb1-3118-41f1-9706-2ee3b78a381a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.8588] device (tap765356c7-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00301|binding|INFO|Releasing lport 765356c7-caab-46eb-830e-4a979bbba648 from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00302|binding|INFO|Setting lport 765356c7-caab-46eb-830e-4a979bbba648 down in Southbound
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00303|binding|INFO|Releasing lport fbf3611a-6024-4f95-8880-d580bf23f660 from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00304|binding|INFO|Setting lport fbf3611a-6024-4f95-8880-d580bf23f660 down in Southbound
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00305|binding|INFO|Removing iface tapfbf3611a-60 ovn-installed in OVS
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00306|binding|INFO|Removing iface tap765356c7-ca ovn-installed in OVS
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.885 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:55:80 10.1.1.126'], port_security=['fa:16:3e:af:55:80 10.1.1.126'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-223744034', 'neutron:cidrs': '10.1.1.126/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-223744034', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e9b4eb1-3118-41f1-9706-2ee3b78a381a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fbf3611a-6024-4f95-8880-d580bf23f660) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.886 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:68:08 10.1.1.250'], port_security=['fa:16:3e:95:68:08 10.1.1.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.250/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=765356c7-caab-46eb-830e-4a979bbba648) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 kernel: tape4d50911-d5 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.8948] device (tape4d50911-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 kernel: tapbe573f34-a3 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.9286] device (tapbe573f34-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 kernel: tap62c5edb0-a4 (unregistering): left promiscuous mode
Nov 29 08:04:52 compute-2 NetworkManager[48993]: <info>  [1764403492.9497] device (tap62c5edb0-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00307|binding|INFO|Releasing lport be573f34-a335-4f5c-a6f2-dd0e149534ee from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00308|binding|INFO|Setting lport be573f34-a335-4f5c-a6f2-dd0e149534ee down in Southbound
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00309|binding|INFO|Releasing lport e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b from this chassis (sb_readonly=0)
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00310|binding|INFO|Setting lport e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b down in Southbound
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00311|binding|INFO|Removing iface tape4d50911-d5 ovn-installed in OVS
Nov 29 08:04:52 compute-2 ovn_controller[134375]: 2025-11-29T08:04:52Z|00312|binding|INFO|Removing iface tapbe573f34-a3 ovn-installed in OVS
Nov 29 08:04:52 compute-2 nova_compute[232428]: 2025-11-29 08:04:52.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.964 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:99:5d 10.1.1.49'], port_security=['fa:16:3e:60:99:5d 10.1.1.49'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.49/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d57b2a8f-1c0d-4719-a94f-27c530caaafc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:52.966 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ef:13 10.2.2.100'], port_security=['fa:16:3e:97:ef:13 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cafabcb6-1c42-4294-b26b-74933aae0590', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a9a82b-aae6-4039-a3e2-6cda0a5d9cb9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be573f34-a335-4f5c-a6f2-dd0e149534ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:52 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [NOTICE]   (263866) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:52 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [NOTICE]   (263866) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:52 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [WARNING]  (263866) : Exiting Master process...
Nov 29 08:04:52 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [ALERT]    (263866) : Current worker (263868) exited with code 143 (Terminated)
Nov 29 08:04:52 compute-2 neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629[263862]: [WARNING]  (263866) : All workers exited. Exiting... (0)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000046.scope: Consumed 18.902s CPU time.
Nov 29 08:04:53 compute-2 podman[265171]: 2025-11-29 08:04:53.00701392 +0000 UTC m=+0.058593564 container died 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:04:53 compute-2 systemd-machined[194747]: Machine qemu-31-instance-00000046 terminated.
Nov 29 08:04:53 compute-2 ovn_controller[134375]: 2025-11-29T08:04:53Z|00313|binding|INFO|Releasing lport 62c5edb0-a405-4b4d-92c0-37a8154c2dbb from this chassis (sb_readonly=0)
Nov 29 08:04:53 compute-2 ovn_controller[134375]: 2025-11-29T08:04:53Z|00314|binding|INFO|Setting lport 62c5edb0-a405-4b4d-92c0-37a8154c2dbb down in Southbound
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_controller[134375]: 2025-11-29T08:04:53Z|00315|binding|INFO|Removing iface tap62c5edb0-a4 ovn-installed in OVS
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.064 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:1d:3f 10.2.2.200'], port_security=['fa:16:3e:dc:1d:3f 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bb4e9fda-828d-4b2f-84a9-4fbbcb213650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cafabcb6-1c42-4294-b26b-74933aae0590', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa91146f75c46ebbcd15bb2222a8545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18490b5d-fa14-429b-b6ce-a75d5f01459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a9a82b-aae6-4039-a3e2-6cda0a5d9cb9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=62c5edb0-a405-4b4d-92c0-37a8154c2dbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.072 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-8b5972c066d545e4a115b5c55d178282e122f83dafab5d182414d79e3fafbbea-merged.mount: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265171]: 2025-11-29 08:04:53.087188157 +0000 UTC m=+0.138767801 container cleanup 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:04:53 compute-2 NetworkManager[48993]: <info>  [1764403493.0961] manager: (tap74928c3b-94): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-conmon-290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 ceph-mon[77138]: pgmap v1918: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.8 MiB/s wr, 98 op/s
Nov 29 08:04:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/773053081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3525853994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:53 compute-2 NetworkManager[48993]: <info>  [1764403493.1104] manager: (tap022e4672-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 08:04:53 compute-2 NetworkManager[48993]: <info>  [1764403493.1321] manager: (tap765356c7-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 08:04:53 compute-2 podman[265217]: 2025-11-29 08:04:53.188162115 +0000 UTC m=+0.067782451 container remove 290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.186 232432 INFO nova.virt.libvirt.driver [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Instance destroyed successfully.
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.187 232432 DEBUG nova.objects.instance [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lazy-loading 'resources' on Instance uuid bb4e9fda-828d-4b2f-84a9-4fbbcb213650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.195 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[84b6ec5b-c055-4472-95d1-342653754836]: (4, ('Sat Nov 29 08:04:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629 (290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00)\n290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00\nSat Nov 29 08:04:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629 (290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00)\n290bd41724da664f882fddecef2d4d60eb9ebc9a405139681e5249f319387f00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.197 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffb6285-3eef-465b-bd49-ccc02d62833a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.198 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244beb46-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.199 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.202 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.203 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "74928c3b-944c-4f17-b1b2-de33221d05ee", "address": "fa:16:3e:9a:8a:76", "network": {"id": "244beb46-e997-4214-9a18-cb9fb18e5629", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1871907043-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74928c3b-94", "ovs_interfaceid": "74928c3b-944c-4f17-b1b2-de33221d05ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.204 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.204 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.205 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.205 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74928c3b-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.207 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.209 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.212 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 kernel: tap244beb46-e0: left promiscuous mode
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.232 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.237 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36a95587-29a7-46ec-a0a0-1d4600ce6a6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.238 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=74928c3b-944c-4f17-b1b2-de33221d05ee,network=Network(244beb46-e997-4214-9a18-cb9fb18e5629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74928c3b-94')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.239 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.239 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.240 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.240 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.241 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap022e4672-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.243 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.250 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6f22f0ad-7077-4aed-b126-aab5bc72409a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.251 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc621a34-b3c9-4d67-aea5-50ddeb1c4e11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.267 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.267 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8ff141-caee-4d09-af2a-681c90575225]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641246, 'reachable_time': 28785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265320, 'error': None, 'target': 'ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.270 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:db:b2,bridge_name='br-int',has_traffic_filtering=True,id=022e4672-a2e1-4d3d-af5c-cc34a3b4dc38,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap022e4672-a2')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.270 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.271 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 systemd[1]: run-netns-ovnmeta\x2d244beb46\x2de997\x2d4214\x2d9a18\x2dcb9fb18e5629.mount: Deactivated successfully.
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.271 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.271 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.272 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-244beb46-e997-4214-9a18-cb9fb18e5629 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.272 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3cc46d-99f9-47a5-9ce7-23f8d7f8bd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.273 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3611a-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.273 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.274 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.275 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.275 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[66a4cfd8-ed57-4f30-820d-829ef15840c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.276 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60 namespace which is not needed anymore
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.276 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.294 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.296 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:55:80,bridge_name='br-int',has_traffic_filtering=True,id=fbf3611a-6024-4f95-8880-d580bf23f660,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfbf3611a-60')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.297 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.297 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.298 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.298 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.299 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.299 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap765356c7-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.315 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:68:08,bridge_name='br-int',has_traffic_filtering=True,id=765356c7-caab-46eb-830e-4a979bbba648,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap765356c7-ca')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.316 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.317 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.318 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.319 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.321 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d50911-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.324 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.329 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.332 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:99:5d,bridge_name='br-int',has_traffic_filtering=True,id=e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b,network=Network(4850a5c9-1583-4cb5-9c93-b75a1362cb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4d50911-d5')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.333 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.334 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.335 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.335 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.337 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe573f34-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.339 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.341 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.346 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ef:13,bridge_name='br-int',has_traffic_filtering=True,id=be573f34-a335-4f5c-a6f2-dd0e149534ee,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe573f34-a3')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.348 232432 DEBUG nova.virt.libvirt.vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-99955562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-99955562',id=70,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFADDBD3giRqiPRw/p4d9sX3U2T2Aun70Y02mWarrVsF/Mp6fRksMr/ooDT8ha68lPerV4OzixUExlRrAs3zVOxDl14khmI3uUPk4Z7D6y/l2FVZZLo8zhKD5ZpeoWiLw==',key_name='tempest-keypair-950675443',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa91146f75c46ebbcd15bb2222a8545',ramdisk_id='',reservation_id='r-05lre9qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-59583474',owner_user_name='tempest-TaggedBootDevicesTest-59583474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d77e751616c9473786c8ac7ae2d34d20',uuid=bb4e9fda-828d-4b2f-84a9-4fbbcb213650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.348 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converting VIF {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.349 232432 DEBUG nova.network.os_vif_util [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.349 232432 DEBUG os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.351 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c5edb0-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.353 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.356 232432 INFO os_vif [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:1d:3f,bridge_name='br-int',has_traffic_filtering=True,id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb,network=Network(cafabcb6-1c42-4294-b26b-74933aae0590),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62c5edb0-a4')
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.400 232432 DEBUG nova.compute.manager [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.401 232432 DEBUG oslo_concurrency.lockutils [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.401 232432 DEBUG oslo_concurrency.lockutils [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.401 232432 DEBUG oslo_concurrency.lockutils [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.401 232432 DEBUG nova.compute.manager [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-74928c3b-944c-4f17-b1b2-de33221d05ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.401 232432 DEBUG nova.compute.manager [req-dbc454dc-7d91-4451-b5e7-bb73e93333f5 req-2b1a7e97-f027-45c6-8fa7-6a774d65142a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-74928c3b-944c-4f17-b1b2-de33221d05ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.410 232432 DEBUG nova.compute.manager [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.410 232432 DEBUG oslo_concurrency.lockutils [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [NOTICE]   (263940) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [NOTICE]   (263940) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [WARNING]  (263940) : Exiting Master process...
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.410 232432 DEBUG oslo_concurrency.lockutils [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.411 232432 DEBUG oslo_concurrency.lockutils [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.411 232432 DEBUG nova.compute.manager [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-765356c7-caab-46eb-830e-4a979bbba648 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.411 232432 DEBUG nova.compute.manager [req-73dc4dc8-54f7-49cb-914c-3ff4aca816ae req-7a414276-88d2-4ddc-a969-3c996a53c15c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-765356c7-caab-46eb-830e-4a979bbba648 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [ALERT]    (263940) : Current worker (263942) exited with code 143 (Terminated)
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60[263936]: [WARNING]  (263940) : All workers exited. Exiting... (0)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265355]: 2025-11-29 08:04:53.422349419 +0000 UTC m=+0.047634681 container died 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:04:53 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-30923a7dd6101b4f701e1bfab6ebcfb0b1fded3186e6e515f71883ad132e4a72-merged.mount: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265355]: 2025-11-29 08:04:53.461273966 +0000 UTC m=+0.086559228 container cleanup 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-conmon-3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265404]: 2025-11-29 08:04:53.533915318 +0000 UTC m=+0.040383743 container remove 3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.540 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a17df2e6-5dfb-4464-84da-d1c45e7c6cf8]: (4, ('Sat Nov 29 08:04:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60 (3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5)\n3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5\nSat Nov 29 08:04:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60 (3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5)\n3aa6fdbd092e767bfc4cc1a36cdf9a72d0b698d459a5562cd5cf5f97ecd094e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.542 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74710aa7-6678-4e09-9e43-8f54bd3c286f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.543 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4850a5c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 kernel: tap4850a5c9-10: left promiscuous mode
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.555 232432 DEBUG nova.compute.manager [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-fbf3611a-6024-4f95-8880-d580bf23f660 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.555 232432 DEBUG oslo_concurrency.lockutils [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.555 232432 DEBUG oslo_concurrency.lockutils [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.556 232432 DEBUG oslo_concurrency.lockutils [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.556 232432 DEBUG nova.compute.manager [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-fbf3611a-6024-4f95-8880-d580bf23f660 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.556 232432 DEBUG nova.compute.manager [req-094fced7-91ff-41cd-992b-e007b3a7f898 req-3e82dcd0-1dbb-4bfe-9ee5-e2f82dc36965 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-fbf3611a-6024-4f95-8880-d580bf23f660 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.558 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.560 232432 DEBUG nova.compute.manager [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.560 232432 DEBUG oslo_concurrency.lockutils [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.560 232432 DEBUG oslo_concurrency.lockutils [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.561 232432 DEBUG oslo_concurrency.lockutils [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.561 232432 DEBUG nova.compute.manager [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.561 232432 DEBUG nova.compute.manager [req-55dea072-5c23-476c-ba9c-696ef0841c74 req-419eb915-c570-4088-a9d3-1cdb5b7bba98 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.573 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3b95c588-036a-4279-ab15-81c80b016be9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.590 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cf56031c-7619-4a67-be72-e8dbdc08898f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.591 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[67b2d033-9d8f-4d20-a796-c7d098856561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.610 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f55c733-a8e2-4c55-98e6-011d52d8af18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641356, 'reachable_time': 31306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265420, 'error': None, 'target': 'ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.612 232432 INFO nova.virt.libvirt.driver [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Deleting instance files /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650_del
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.612 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4850a5c9-1583-4cb5-9c93-b75a1362cb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.612 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[e23718ee-d01a-41aa-9a2e-ba73087c2b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.612 232432 INFO nova.virt.libvirt.driver [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Deletion of /var/lib/nova/instances/bb4e9fda-828d-4b2f-84a9-4fbbcb213650_del complete
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.613 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fbf3611a-6024-4f95-8880-d580bf23f660 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.614 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.615 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f5d19c-866c-4111-93fb-168b8064c5ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.615 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 765356c7-caab-46eb-830e-4a979bbba648 in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.616 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.617 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4570fa7f-0e7e-4a3b-81f9-ff1c8bd2f974]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.617 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b in datapath 4850a5c9-1583-4cb5-9c93-b75a1362cb60 unbound from our chassis
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.618 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4850a5c9-1583-4cb5-9c93-b75a1362cb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.618 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e4272032-5951-475d-baf6-6d34fa289277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.619 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be573f34-a335-4f5c-a6f2-dd0e149534ee in datapath cafabcb6-1c42-4294-b26b-74933aae0590 unbound from our chassis
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.620 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cafabcb6-1c42-4294-b26b-74933aae0590, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.620 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[299934b7-7308-425c-92f6-113f47ba3559]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.621 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590 namespace which is not needed anymore
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.667 232432 INFO nova.compute.manager [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.669 232432 DEBUG oslo.service.loopingcall [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.669 232432 DEBUG nova.compute.manager [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.669 232432 DEBUG nova.network.neutron [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [NOTICE]   (264058) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [NOTICE]   (264058) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [WARNING]  (264058) : Exiting Master process...
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [ALERT]    (264058) : Current worker (264060) exited with code 143 (Terminated)
Nov 29 08:04:53 compute-2 neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590[264054]: [WARNING]  (264058) : All workers exited. Exiting... (0)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265438]: 2025-11-29 08:04:53.784851597 +0000 UTC m=+0.057706976 container died 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:04:53 compute-2 podman[265438]: 2025-11-29 08:04:53.832020232 +0000 UTC m=+0.104875621 container cleanup 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:04:53 compute-2 systemd[1]: libpod-conmon-5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307.scope: Deactivated successfully.
Nov 29 08:04:53 compute-2 podman[265467]: 2025-11-29 08:04:53.916587937 +0000 UTC m=+0.055812256 container remove 5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.926 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b1316d-0ed4-4b6b-886a-17711595144b]: (4, ('Sat Nov 29 08:04:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590 (5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307)\n5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307\nSat Nov 29 08:04:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590 (5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307)\n5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.928 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2eb576-12fe-45c7-8ccb-466c5ba55d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.930 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcafabcb6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.933 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 kernel: tapcafabcb6-10: left promiscuous mode
Nov 29 08:04:53 compute-2 nova_compute[232428]: 2025-11-29 08:04:53.950 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.956 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[793b8db3-72d1-41d0-bd0c-4d0348a3d7d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.973 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[292dc69a-7cf6-4b8c-9002-957068f2eabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:53.974 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[040c380f-ad1d-47d8-8cda-6fafe05f4055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.002 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c4fc2d-f22e-42db-8b14-8e0b6d2c2e60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641526, 'reachable_time': 22227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265482, 'error': None, 'target': 'ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.005 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cafabcb6-1c42-4294-b26b-74933aae0590 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.005 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[60935a8b-a5e0-4c81-81f6-eb5777aaebf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.006 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 62c5edb0-a405-4b4d-92c0-37a8154c2dbb in datapath cafabcb6-1c42-4294-b26b-74933aae0590 unbound from our chassis
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.008 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cafabcb6-1c42-4294-b26b-74933aae0590, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:04:54.010 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[68ffbdde-4043-4a33-ad3c-5aad881f558a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-6d22633191c680002fc45d1b90fd7c0633ad45e9007e6b965fb80feb913b61fe-merged.mount: Deactivated successfully.
Nov 29 08:04:54 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ca776a09518c51af0bd4e6c1c9723fcaf3bc3152d0305877489c359a82d7307-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:54 compute-2 systemd[1]: run-netns-ovnmeta\x2dcafabcb6\x2d1c42\x2d4294\x2db26b\x2d74933aae0590.mount: Deactivated successfully.
Nov 29 08:04:54 compute-2 systemd[1]: run-netns-ovnmeta\x2d4850a5c9\x2d1583\x2d4cb5\x2d9c93\x2db75a1362cb60.mount: Deactivated successfully.
Nov 29 08:04:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:54.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:54 compute-2 nova_compute[232428]: 2025-11-29 08:04:54.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:55 compute-2 ceph-mon[77138]: pgmap v1919: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.2 MiB/s wr, 56 op/s
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.618 232432 DEBUG nova.compute.manager [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.619 232432 DEBUG oslo_concurrency.lockutils [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.619 232432 DEBUG oslo_concurrency.lockutils [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.620 232432 DEBUG oslo_concurrency.lockutils [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.620 232432 DEBUG nova.compute.manager [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.621 232432 WARNING nova.compute.manager [req-83922c27-26ac-4d21-bff0-2eca97f010ee req-04580539-3169-43ff-bb45-f4e5374c8ad4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-74928c3b-944c-4f17-b1b2-de33221d05ee for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.648 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.648 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.648 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.649 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.649 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.649 232432 WARNING nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-765356c7-caab-46eb-830e-4a979bbba648 for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.649 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.649 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.650 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.650 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.650 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.650 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.650 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 WARNING nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.651 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-be573f34-a335-4f5c-a6f2-dd0e149534ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-be573f34-a335-4f5c-a6f2-dd0e149534ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.652 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 WARNING nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-be573f34-a335-4f5c-a6f2-dd0e149534ee for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.653 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-unplugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-unplugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.654 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.655 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.655 232432 DEBUG oslo_concurrency.lockutils [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.655 232432 DEBUG nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.655 232432 WARNING nova.compute.manager [req-313acd0e-4b92-4fd0-a7f1-1fdcd1970a3b req-afd15c9c-719d-49bb-93e2-9f58b7711db5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-62c5edb0-a405-4b4d-92c0-37a8154c2dbb for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.689 232432 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.690 232432 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.690 232432 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.690 232432 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.690 232432 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.690 232432 WARNING nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-022e4672-a2e1-4d3d-af5c-cc34a3b4dc38 for instance with vm_state active and task_state deleting.
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.726 232432 DEBUG nova.compute.manager [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.726 232432 DEBUG oslo_concurrency.lockutils [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.727 232432 DEBUG oslo_concurrency.lockutils [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.727 232432 DEBUG oslo_concurrency.lockutils [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.727 232432 DEBUG nova.compute.manager [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] No waiting events found dispatching network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:55 compute-2 nova_compute[232428]: 2025-11-29 08:04:55.727 232432 WARNING nova.compute.manager [req-01460c5d-b2db-45b4-b293-823dd1a90060 req-ecc6ce91-3920-4759-96d8-0160589d0537 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received unexpected event network-vif-plugged-fbf3611a-6024-4f95-8880-d580bf23f660 for instance with vm_state active and task_state deleting.
Nov 29 08:04:56 compute-2 nova_compute[232428]: 2025-11-29 08:04:56.286 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403481.2852743, a8918e2f-17e5-477f-b975-0efb4898396f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:56 compute-2 nova_compute[232428]: 2025-11-29 08:04:56.287 232432 INFO nova.compute.manager [-] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] VM Stopped (Lifecycle Event)
Nov 29 08:04:56 compute-2 nova_compute[232428]: 2025-11-29 08:04:56.308 232432 DEBUG nova.compute.manager [None req-d7c421d2-78ef-42a3-9dac-e8148a46fde5 - - - - - -] [instance: a8918e2f-17e5-477f-b975-0efb4898396f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:04:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:04:57 compute-2 ceph-mon[77138]: pgmap v1920: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 29 08:04:58 compute-2 nova_compute[232428]: 2025-11-29 08:04:58.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:58 compute-2 nova_compute[232428]: 2025-11-29 08:04:58.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:04:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:04:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:58.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:04:59 compute-2 nova_compute[232428]: 2025-11-29 08:04:59.035 232432 DEBUG nova.compute.manager [req-c01a4126-0d6a-4b0f-80e1-fb453c561b83 req-318f9977-a036-425e-b3fa-bd36230984c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-deleted-74928c3b-944c-4f17-b1b2-de33221d05ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:59 compute-2 nova_compute[232428]: 2025-11-29 08:04:59.036 232432 INFO nova.compute.manager [req-c01a4126-0d6a-4b0f-80e1-fb453c561b83 req-318f9977-a036-425e-b3fa-bd36230984c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Neutron deleted interface 74928c3b-944c-4f17-b1b2-de33221d05ee; detaching it from the instance and deleting it from the info cache
Nov 29 08:04:59 compute-2 nova_compute[232428]: 2025-11-29 08:04:59.036 232432 DEBUG nova.network.neutron [req-c01a4126-0d6a-4b0f-80e1-fb453c561b83 req-318f9977-a036-425e-b3fa-bd36230984c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "address": "fa:16:3e:dc:1d:3f", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62c5edb0-a4", "ovs_interfaceid": "62c5edb0-a405-4b4d-92c0-37a8154c2dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:59 compute-2 nova_compute[232428]: 2025-11-29 08:04:59.078 232432 DEBUG nova.compute.manager [req-c01a4126-0d6a-4b0f-80e1-fb453c561b83 req-318f9977-a036-425e-b3fa-bd36230984c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Detach interface failed, port_id=74928c3b-944c-4f17-b1b2-de33221d05ee, reason: Instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:04:59 compute-2 ceph-mon[77138]: pgmap v1921: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Nov 29 08:04:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:00 compute-2 nova_compute[232428]: 2025-11-29 08:05:00.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.158 232432 DEBUG nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-deleted-62c5edb0-a405-4b4d-92c0-37a8154c2dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.158 232432 INFO nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Neutron deleted interface 62c5edb0-a405-4b4d-92c0-37a8154c2dbb; detaching it from the instance and deleting it from the info cache
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.158 232432 DEBUG nova.network.neutron [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "address": "fa:16:3e:97:ef:13", "network": {"id": "cafabcb6-1c42-4294-b26b-74933aae0590", "bridge": "br-int", "label": "tempest-device-tagging-net2-1523747447", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe573f34-a3", "ovs_interfaceid": "be573f34-a335-4f5c-a6f2-dd0e149534ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:01 compute-2 ceph-mon[77138]: pgmap v1922: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.197 232432 DEBUG nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Detach interface failed, port_id=62c5edb0-a405-4b4d-92c0-37a8154c2dbb, reason: Instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.197 232432 DEBUG nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-deleted-be573f34-a335-4f5c-a6f2-dd0e149534ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.197 232432 INFO nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Neutron deleted interface be573f34-a335-4f5c-a6f2-dd0e149534ee; detaching it from the instance and deleting it from the info cache
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.197 232432 DEBUG nova.network.neutron [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [{"id": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "address": "fa:16:3e:6e:db:b2", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap022e4672-a2", "ovs_interfaceid": "022e4672-a2e1-4d3d-af5c-cc34a3b4dc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "fbf3611a-6024-4f95-8880-d580bf23f660", "address": "fa:16:3e:af:55:80", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf3611a-60", "ovs_interfaceid": "fbf3611a-6024-4f95-8880-d580bf23f660", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "765356c7-caab-46eb-830e-4a979bbba648", "address": "fa:16:3e:95:68:08", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap765356c7-ca", "ovs_interfaceid": "765356c7-caab-46eb-830e-4a979bbba648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "address": "fa:16:3e:60:99:5d", "network": {"id": "4850a5c9-1583-4cb5-9c93-b75a1362cb60", "bridge": "br-int", "label": "tempest-device-tagging-net1-710686921", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "faa91146f75c46ebbcd15bb2222a8545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4d50911-d5", "ovs_interfaceid": "e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.228 232432 DEBUG nova.compute.manager [req-fabd8e3b-f270-49f6-b214-17c6f00bda9c req-6646934c-db7c-46b8-aa0d-4114c15f267b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Detach interface failed, port_id=be573f34-a335-4f5c-a6f2-dd0e149534ee, reason: Instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.250 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.949 232432 DEBUG nova.network.neutron [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:01 compute-2 nova_compute[232428]: 2025-11-29 08:05:01.966 232432 INFO nova.compute.manager [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Took 8.30 seconds to deallocate network for instance.
Nov 29 08:05:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:02.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.109 232432 INFO nova.compute.manager [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Took 1.14 seconds to detach 3 volumes for instance.
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.215 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.216 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.291 232432 DEBUG oslo_concurrency.processutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:03.310 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:03.311 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:03.311 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.337 232432 DEBUG nova.compute.manager [req-4b4c6549-ed94-4b87-8bab-7867a0a969b3 req-bd901424-c9d9-434d-b3e5-b677d7286a40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-deleted-e4d50911-d5d1-4ae7-af4d-9af4d4f2e18b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.337 232432 DEBUG nova.compute.manager [req-4b4c6549-ed94-4b87-8bab-7867a0a969b3 req-bd901424-c9d9-434d-b3e5-b677d7286a40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Received event network-vif-deleted-765356c7-caab-46eb-830e-4a979bbba648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.356 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:03 compute-2 ceph-mon[77138]: pgmap v1923: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Nov 29 08:05:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3165512548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.728 232432 DEBUG oslo_concurrency.processutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.736 232432 DEBUG nova.compute.provider_tree [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.772 232432 DEBUG nova.scheduler.client.report [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.802 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.831 232432 INFO nova.scheduler.client.report [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Deleted allocations for instance bb4e9fda-828d-4b2f-84a9-4fbbcb213650
Nov 29 08:05:03 compute-2 nova_compute[232428]: 2025-11-29 08:05:03.902 232432 DEBUG oslo_concurrency.lockutils [None req-f595c03e-6be6-4031-9ab2-5bf89a9c1939 d77e751616c9473786c8ac7ae2d34d20 faa91146f75c46ebbcd15bb2222a8545 - - default default] Lock "bb4e9fda-828d-4b2f-84a9-4fbbcb213650" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:04 compute-2 nova_compute[232428]: 2025-11-29 08:05:04.220 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:04 compute-2 sshd-session[265511]: Invalid user sol from 45.148.10.240 port 43648
Nov 29 08:05:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3165512548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:04 compute-2 sshd-session[265511]: Connection closed by invalid user sol 45.148.10.240 port 43648 [preauth]
Nov 29 08:05:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:04.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:05 compute-2 nova_compute[232428]: 2025-11-29 08:05:05.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-2 ceph-mon[77138]: pgmap v1924: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Nov 29 08:05:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2534689414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:05 compute-2 sudo[265514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:05 compute-2 sudo[265514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:05 compute-2 sudo[265514]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:05 compute-2 podman[265538]: 2025-11-29 08:05:05.818290598 +0000 UTC m=+0.057324983 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:05:05 compute-2 sudo[265545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:05 compute-2 sudo[265545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:05 compute-2 sudo[265545]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:06.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:07 compute-2 ceph-mon[77138]: pgmap v1925: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Nov 29 08:05:08 compute-2 nova_compute[232428]: 2025-11-29 08:05:08.184 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403493.181763, bb4e9fda-828d-4b2f-84a9-4fbbcb213650 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:08 compute-2 nova_compute[232428]: 2025-11-29 08:05:08.184 232432 INFO nova.compute.manager [-] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] VM Stopped (Lifecycle Event)
Nov 29 08:05:08 compute-2 nova_compute[232428]: 2025-11-29 08:05:08.207 232432 DEBUG nova.compute.manager [None req-6e217138-0741-465e-ab53-d99a87a97601 - - - - - -] [instance: bb4e9fda-828d-4b2f-84a9-4fbbcb213650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:08 compute-2 nova_compute[232428]: 2025-11-29 08:05:08.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:05:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:08.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:05:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:09 compute-2 ceph-mon[77138]: pgmap v1926: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Nov 29 08:05:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:10.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:10 compute-2 nova_compute[232428]: 2025-11-29 08:05:10.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:10.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:10 compute-2 podman[265585]: 2025-11-29 08:05:10.718612865 +0000 UTC m=+0.107050789 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Nov 29 08:05:10 compute-2 nova_compute[232428]: 2025-11-29 08:05:10.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:11 compute-2 nova_compute[232428]: 2025-11-29 08:05:11.159 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:11 compute-2 ceph-mon[77138]: pgmap v1927: 305 pgs: 305 active+clean; 180 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 851 KiB/s wr, 71 op/s
Nov 29 08:05:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:12.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:13 compute-2 nova_compute[232428]: 2025-11-29 08:05:13.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:13 compute-2 ceph-mon[77138]: pgmap v1928: 305 pgs: 305 active+clean; 237 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Nov 29 08:05:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2285732094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:13 compute-2 nova_compute[232428]: 2025-11-29 08:05:13.974 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:13 compute-2 nova_compute[232428]: 2025-11-29 08:05:13.974 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:13 compute-2 nova_compute[232428]: 2025-11-29 08:05:13.996 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.096 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.096 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.103 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.103 232432 INFO nova.compute.claims [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.247 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435440735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435440735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:05:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:05:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1195367420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.679 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.686 232432 DEBUG nova.compute.provider_tree [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:14.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.705 232432 DEBUG nova.scheduler.client.report [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.734 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.735 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4264389366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/435440735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/435440735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1195367420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.815 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.815 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.834 232432 INFO nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.850 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.972 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.975 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:14 compute-2 nova_compute[232428]: 2025-11-29 08:05:14.976 232432 INFO nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Creating image(s)
Nov 29 08:05:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.019 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.058 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.087 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.092 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.158 232432 DEBUG nova.policy [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fea0b5a703d4426882e6691d4313bd30', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90b6d3cc4af4471cb593f860c98e0cba', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.163 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.164 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.165 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.165 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.195 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.199 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.544 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.628 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] resizing rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.738 232432 DEBUG nova.objects.instance [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lazy-loading 'migration_context' on Instance uuid bd6a38e4-b49a-4936-8b8c-c399f5560b72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.754 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.755 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Ensure instance console log exists: /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.755 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.755 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:15 compute-2 nova_compute[232428]: 2025-11-29 08:05:15.756 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:15 compute-2 ceph-mon[77138]: pgmap v1929: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 08:05:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3624555279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3624555279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:16.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:16.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:16 compute-2 nova_compute[232428]: 2025-11-29 08:05:16.753 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Successfully created port: 146a0516-d25a-4ab5-b910-9e61df665977 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149935740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149935740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:17 compute-2 nova_compute[232428]: 2025-11-29 08:05:17.589 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Successfully updated port: 146a0516-d25a-4ab5-b910-9e61df665977 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:17 compute-2 nova_compute[232428]: 2025-11-29 08:05:17.603 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:17 compute-2 nova_compute[232428]: 2025-11-29 08:05:17.603 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquired lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:17 compute-2 nova_compute[232428]: 2025-11-29 08:05:17.604 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:17 compute-2 ceph-mon[77138]: pgmap v1930: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 08:05:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/149935740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/149935740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:18 compute-2 nova_compute[232428]: 2025-11-29 08:05:18.016 232432 DEBUG nova.compute.manager [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received event network-changed-146a0516-d25a-4ab5-b910-9e61df665977 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:18 compute-2 nova_compute[232428]: 2025-11-29 08:05:18.017 232432 DEBUG nova.compute.manager [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Refreshing instance network info cache due to event network-changed-146a0516-d25a-4ab5-b910-9e61df665977. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:18 compute-2 nova_compute[232428]: 2025-11-29 08:05:18.018 232432 DEBUG oslo_concurrency.lockutils [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:18 compute-2 nova_compute[232428]: 2025-11-29 08:05:18.098 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:18 compute-2 nova_compute[232428]: 2025-11-29 08:05:18.364 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:18.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:18.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.223 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:19.222 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:19.225 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.676 232432 DEBUG nova.network.neutron [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updating instance_info_cache with network_info: [{"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.702 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Releasing lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.703 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Instance network_info: |[{"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.704 232432 DEBUG oslo_concurrency.lockutils [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.705 232432 DEBUG nova.network.neutron [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Refreshing network info cache for port 146a0516-d25a-4ab5-b910-9e61df665977 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.709 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Start _get_guest_xml network_info=[{"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:19 compute-2 podman[265800]: 2025-11-29 08:05:19.71065206 +0000 UTC m=+0.108370970 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.719 232432 WARNING nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.726 232432 DEBUG nova.virt.libvirt.host [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.726 232432 DEBUG nova.virt.libvirt.host [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.733 232432 DEBUG nova.virt.libvirt.host [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.734 232432 DEBUG nova.virt.libvirt.host [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.735 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.736 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.737 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.737 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.737 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.737 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.737 232432 DEBUG nova.virt.hardware [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:05:19 compute-2 nova_compute[232428]: 2025-11-29 08:05:19.740 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:19 compute-2 ceph-mon[77138]: pgmap v1931: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 08:05:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1489924693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.228 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.269 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.276 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.516 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:20.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3266102366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.792 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.795 232432 DEBUG nova.virt.libvirt.vif [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=77,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxIiLkFQ1ert7nEL0v58X7WKu3xUh1mWUXmT1JlZHXz/8vh+Td85o1NdCeHiQUXwP7hD8T82BrUqMAoPcaIM+5T8OOw6A6IpooWQgzpb0pQ6VN64N1quToMciGmK+lXuA==',key_name='tempest-keypair-1903166805',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90b6d3cc4af4471cb593f860c98e0cba',ramdisk_id='',reservation_id='r-1lzlqjps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-635415268',owner_user_name='tempest-ServersV294TestFqdnHostnames-635415268-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fea0b5a703d4426882e6691d4313bd30',uuid=bd6a38e4-b49a-4936-8b8c-c399f5560b72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.796 232432 DEBUG nova.network.os_vif_util [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converting VIF {"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.797 232432 DEBUG nova.network.os_vif_util [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.800 232432 DEBUG nova.objects.instance [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lazy-loading 'pci_devices' on Instance uuid bd6a38e4-b49a-4936-8b8c-c399f5560b72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.822 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <uuid>bd6a38e4-b49a-4936-8b8c-c399f5560b72</uuid>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <name>instance-0000004d</name>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:name>guest-instance-1</nova:name>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:05:19</nova:creationTime>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:user uuid="fea0b5a703d4426882e6691d4313bd30">tempest-ServersV294TestFqdnHostnames-635415268-project-member</nova:user>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:project uuid="90b6d3cc4af4471cb593f860c98e0cba">tempest-ServersV294TestFqdnHostnames-635415268</nova:project>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <nova:port uuid="146a0516-d25a-4ab5-b910-9e61df665977">
Nov 29 08:05:20 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <system>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="serial">bd6a38e4-b49a-4936-8b8c-c399f5560b72</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="uuid">bd6a38e4-b49a-4936-8b8c-c399f5560b72</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </system>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <os>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </os>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <features>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </features>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk">
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config">
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:05:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:59:df:cf"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <target dev="tap146a0516-d2"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/console.log" append="off"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <video>
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </video>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:05:20 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:05:20 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:05:20 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:05:20 compute-2 nova_compute[232428]: </domain>
Nov 29 08:05:20 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.824 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Preparing to wait for external event network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.825 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.825 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.826 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.827 232432 DEBUG nova.virt.libvirt.vif [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=77,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxIiLkFQ1ert7nEL0v58X7WKu3xUh1mWUXmT1JlZHXz/8vh+Td85o1NdCeHiQUXwP7hD8T82BrUqMAoPcaIM+5T8OOw6A6IpooWQgzpb0pQ6VN64N1quToMciGmK+lXuA==',key_name='tempest-keypair-1903166805',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90b6d3cc4af4471cb593f860c98e0cba',ramdisk_id='',reservation_id='r-1lzlqjps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-635415268',owner_user_name='tempest-ServersV294TestFqdnHostnames-635415268-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fea0b5a703d4426882e6691d4313bd30',uuid=bd6a38e4-b49a-4936-8b8c-c399f5560b72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.828 232432 DEBUG nova.network.os_vif_util [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converting VIF {"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.829 232432 DEBUG nova.network.os_vif_util [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.829 232432 DEBUG os_vif [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.832 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.832 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.839 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap146a0516-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.840 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap146a0516-d2, col_values=(('external_ids', {'iface-id': '146a0516-d25a-4ab5-b910-9e61df665977', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:df:cf', 'vm-uuid': 'bd6a38e4-b49a-4936-8b8c-c399f5560b72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1489924693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3266102366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:20 compute-2 NetworkManager[48993]: <info>  [1764403520.8448] manager: (tap146a0516-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.860 232432 INFO os_vif [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2')
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.931 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.931 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.931 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] No VIF found with MAC fa:16:3e:59:df:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.932 232432 INFO nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Using config drive
Nov 29 08:05:20 compute-2 nova_compute[232428]: 2025-11-29 08:05:20.965 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.631 232432 INFO nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Creating config drive at /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.645 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzggw9oh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.706 232432 DEBUG nova.network.neutron [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updated VIF entry in instance network info cache for port 146a0516-d25a-4ab5-b910-9e61df665977. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.708 232432 DEBUG nova.network.neutron [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updating instance_info_cache with network_info: [{"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.741 232432 DEBUG oslo_concurrency.lockutils [req-52fc3fdd-5168-43dd-91c9-d62c5f0ffd3a req-41746685-e140-48e6-924c-5be8c602a19c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.822 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzggw9oh" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.869 232432 DEBUG nova.storage.rbd_utils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] rbd image bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:21 compute-2 nova_compute[232428]: 2025-11-29 08:05:21.874 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:21 compute-2 ceph-mon[77138]: pgmap v1932: 305 pgs: 305 active+clean; 236 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.4 MiB/s wr, 159 op/s
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.164 232432 DEBUG oslo_concurrency.processutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config bd6a38e4-b49a-4936-8b8c-c399f5560b72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.166 232432 INFO nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Deleting local config drive /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72/disk.config because it was imported into RBD.
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.227 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:22 compute-2 kernel: tap146a0516-d2: entered promiscuous mode
Nov 29 08:05:22 compute-2 ovn_controller[134375]: 2025-11-29T08:05:22Z|00316|binding|INFO|Claiming lport 146a0516-d25a-4ab5-b910-9e61df665977 for this chassis.
Nov 29 08:05:22 compute-2 ovn_controller[134375]: 2025-11-29T08:05:22Z|00317|binding|INFO|146a0516-d25a-4ab5-b910-9e61df665977: Claiming fa:16:3e:59:df:cf 10.100.0.4
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.2427] manager: (tap146a0516-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.252 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:df:cf 10.100.0.4'], port_security=['fa:16:3e:59:df:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bd6a38e4-b49a-4936-8b8c-c399f5560b72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90b6d3cc4af4471cb593f860c98e0cba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b70e5fd2-eb68-47ee-9e4c-5cc6823e04f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79c9873d-1a8f-4030-9b43-0ba3c90c94bf, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=146a0516-d25a-4ab5-b910-9e61df665977) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.254 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 146a0516-d25a-4ab5-b910-9e61df665977 in datapath b73b21a1-05be-4ae9-bc63-7177a7f45f24 bound to our chassis
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.256 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b73b21a1-05be-4ae9-bc63-7177a7f45f24
Nov 29 08:05:22 compute-2 systemd-udevd[265964]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.272 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1e025115-f6c9-4440-b538-b57c2cbbed03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.273 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb73b21a1-01 in ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.275 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb73b21a1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.275 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb930cf-f85c-44e2-b3c4-6a87cc444c19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.276 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad1cf00-824c-4d57-b87d-d431d7b93560]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 systemd-machined[194747]: New machine qemu-33-instance-0000004d.
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.2896] device (tap146a0516-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.2910] device (tap146a0516-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.295 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4aafaf-a807-4b67-bb69-b6c1f6e1a694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.322 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[00111938-9733-44e9-9bfd-4fba964bc1da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 systemd[1]: Started Virtual Machine qemu-33-instance-0000004d.
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.342 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_controller[134375]: 2025-11-29T08:05:22Z|00318|binding|INFO|Setting lport 146a0516-d25a-4ab5-b910-9e61df665977 ovn-installed in OVS
Nov 29 08:05:22 compute-2 ovn_controller[134375]: 2025-11-29T08:05:22Z|00319|binding|INFO|Setting lport 146a0516-d25a-4ab5-b910-9e61df665977 up in Southbound
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.362 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a07db5-4c19-4eee-86cb-b4ce3ababb72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.3694] manager: (tapb73b21a1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.370 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ca714cc4-1ef9-47ac-a0a3-af81bc27da1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.407 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4abac413-3910-47f7-ab2e-860587d0af0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.411 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b264b246-c662-4334-adbf-51312d0f5fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.4445] device (tapb73b21a1-00): carrier: link connected
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.450 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9858ea1d-9922-4792-8c65-8c15bfe42c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.472 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[25ca00a9-5198-415c-8d77-9c2f51ac51de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb73b21a1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:e7:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648871, 'reachable_time': 16671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265997, 'error': None, 'target': 'ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:22.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.493 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[da781139-9eb5-4199-a365-f1b25d8c7524]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:e7bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 648871, 'tstamp': 648871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265998, 'error': None, 'target': 'ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.511 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3adff5be-308d-4994-88b9-8f6fe6e9f91f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb73b21a1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:e7:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648871, 'reachable_time': 16671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265999, 'error': None, 'target': 'ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.549 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[55efe123-8944-4807-a884-4bcd82244787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.613 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df328c80-fc57-4080-81f6-eb26707172eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.614 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb73b21a1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.615 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.615 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb73b21a1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.656 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 kernel: tapb73b21a1-00: entered promiscuous mode
Nov 29 08:05:22 compute-2 NetworkManager[48993]: <info>  [1764403522.6585] manager: (tapb73b21a1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.659 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.660 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb73b21a1-00, col_values=(('external_ids', {'iface-id': '615e6a60-7afd-4450-a5c7-4b8fef11ff71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_controller[134375]: 2025-11-29T08:05:22Z|00320|binding|INFO|Releasing lport 615e6a60-7afd-4450-a5c7-4b8fef11ff71 from this chassis (sb_readonly=0)
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.692 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b73b21a1-05be-4ae9-bc63-7177a7f45f24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b73b21a1-05be-4ae9-bc63-7177a7f45f24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.693 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[66ddfd0a-eec7-4e14-a934-145503e9075e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.694 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-b73b21a1-05be-4ae9-bc63-7177a7f45f24
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/b73b21a1-05be-4ae9-bc63-7177a7f45f24.pid.haproxy
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID b73b21a1-05be-4ae9-bc63-7177a7f45f24
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:22.700 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'env', 'PROCESS_TAG=haproxy-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b73b21a1-05be-4ae9-bc63-7177a7f45f24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:05:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:22 compute-2 ceph-mon[77138]: pgmap v1933: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 234 op/s
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.917 232432 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received event network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.919 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.919 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.920 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.921 232432 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Processing event network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.921 232432 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received event network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.922 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.922 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.923 232432 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.924 232432 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] No waiting events found dispatching network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.924 232432 WARNING nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received unexpected event network-vif-plugged-146a0516-d25a-4ab5-b910-9e61df665977 for instance with vm_state building and task_state spawning.
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.989 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403522.988758, bd6a38e4-b49a-4936-8b8c-c399f5560b72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.990 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] VM Started (Lifecycle Event)
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.992 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:22 compute-2 nova_compute[232428]: 2025-11-29 08:05:22.997 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.001 232432 INFO nova.virt.libvirt.driver [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Instance spawned successfully.
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.001 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.013 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.021 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.024 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.025 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.026 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.026 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.027 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.027 232432 DEBUG nova.virt.libvirt.driver [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.050 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.051 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403522.9889069, bd6a38e4-b49a-4936-8b8c-c399f5560b72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.051 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] VM Paused (Lifecycle Event)
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.074 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.079 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403522.9959066, bd6a38e4-b49a-4936-8b8c-c399f5560b72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.079 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] VM Resumed (Lifecycle Event)
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.087 232432 INFO nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Took 8.11 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.088 232432 DEBUG nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.097 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.104 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:23 compute-2 podman[266073]: 2025-11-29 08:05:23.123495225 +0000 UTC m=+0.072287191 container create 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.136 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.148 232432 INFO nova.compute.manager [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Took 9.09 seconds to build instance.
Nov 29 08:05:23 compute-2 nova_compute[232428]: 2025-11-29 08:05:23.162 232432 DEBUG oslo_concurrency.lockutils [None req-fb70d65b-cdb3-42a1-9ffe-4127a3f04733 fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:23 compute-2 systemd[1]: Started libpod-conmon-20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a.scope.
Nov 29 08:05:23 compute-2 podman[266073]: 2025-11-29 08:05:23.085174057 +0000 UTC m=+0.033966003 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:23 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:05:23 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56842b21287f417162d48b105240beeac4e4c3c1c2f5c987aff212c9ab6933b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:23 compute-2 podman[266073]: 2025-11-29 08:05:23.224948879 +0000 UTC m=+0.173740805 container init 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:23 compute-2 podman[266073]: 2025-11-29 08:05:23.233123024 +0000 UTC m=+0.181914940 container start 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:23 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [NOTICE]   (266092) : New worker (266094) forked
Nov 29 08:05:23 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [NOTICE]   (266092) : Loading success.
Nov 29 08:05:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:25 compute-2 ceph-mon[77138]: pgmap v1934: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 156 op/s
Nov 29 08:05:25 compute-2 nova_compute[232428]: 2025-11-29 08:05:25.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:25 compute-2 nova_compute[232428]: 2025-11-29 08:05:25.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:25 compute-2 NetworkManager[48993]: <info>  [1764403525.7678] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 08:05:25 compute-2 NetworkManager[48993]: <info>  [1764403525.7687] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 08:05:25 compute-2 nova_compute[232428]: 2025-11-29 08:05:25.844 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:25 compute-2 sudo[266105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:25 compute-2 sudo[266105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:25 compute-2 sudo[266105]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:25 compute-2 nova_compute[232428]: 2025-11-29 08:05:25.943 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:25 compute-2 ovn_controller[134375]: 2025-11-29T08:05:25Z|00321|binding|INFO|Releasing lport 615e6a60-7afd-4450-a5c7-4b8fef11ff71 from this chassis (sb_readonly=0)
Nov 29 08:05:25 compute-2 nova_compute[232428]: 2025-11-29 08:05:25.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:26 compute-2 sudo[266131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:26 compute-2 sudo[266131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:26 compute-2 sudo[266131]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.052 232432 DEBUG nova.compute.manager [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received event network-changed-146a0516-d25a-4ab5-b910-9e61df665977 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.054 232432 DEBUG nova.compute.manager [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Refreshing instance network info cache due to event network-changed-146a0516-d25a-4ab5-b910-9e61df665977. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.054 232432 DEBUG oslo_concurrency.lockutils [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.055 232432 DEBUG oslo_concurrency.lockutils [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.056 232432 DEBUG nova.network.neutron [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Refreshing network info cache for port 146a0516-d25a-4ab5-b910-9e61df665977 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1282985769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:26 compute-2 ovn_controller[134375]: 2025-11-29T08:05:26Z|00322|binding|INFO|Releasing lport 615e6a60-7afd-4450-a5c7-4b8fef11ff71 from this chassis (sb_readonly=0)
Nov 29 08:05:26 compute-2 nova_compute[232428]: 2025-11-29 08:05:26.610 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:26.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:27 compute-2 ceph-mon[77138]: pgmap v1935: 305 pgs: 305 active+clean; 151 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Nov 29 08:05:27 compute-2 nova_compute[232428]: 2025-11-29 08:05:27.527 232432 DEBUG nova.network.neutron [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updated VIF entry in instance network info cache for port 146a0516-d25a-4ab5-b910-9e61df665977. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:27 compute-2 nova_compute[232428]: 2025-11-29 08:05:27.528 232432 DEBUG nova.network.neutron [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updating instance_info_cache with network_info: [{"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:27 compute-2 nova_compute[232428]: 2025-11-29 08:05:27.556 232432 DEBUG oslo_concurrency.lockutils [req-05a6ee7c-1d96-477e-a90c-cfde8e6ce36b req-cd344c5d-da44-423e-a65d-bf97d7e47677 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bd6a38e4-b49a-4936-8b8c-c399f5560b72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4123083525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4123083525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:29 compute-2 ceph-mon[77138]: pgmap v1936: 305 pgs: 305 active+clean; 151 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Nov 29 08:05:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:30 compute-2 nova_compute[232428]: 2025-11-29 08:05:30.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1014316860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:30 compute-2 nova_compute[232428]: 2025-11-29 08:05:30.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:31 compute-2 ceph-mon[77138]: pgmap v1937: 305 pgs: 305 active+clean; 134 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 232 op/s
Nov 29 08:05:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:32.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:32.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:33 compute-2 ceph-mon[77138]: pgmap v1938: 305 pgs: 305 active+clean; 151 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 29 08:05:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:34 compute-2 sudo[266160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:34 compute-2 sudo[266160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:34 compute-2 sudo[266160]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:34 compute-2 sudo[266185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:34 compute-2 sudo[266185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:34 compute-2 sudo[266185]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:34 compute-2 sudo[266210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:34 compute-2 sudo[266210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:34 compute-2 sudo[266210]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:34.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:34 compute-2 sudo[266235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:05:34 compute-2 sudo[266235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4079359461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:35 compute-2 sudo[266235]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-2 sudo[266282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:35 compute-2 sudo[266282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-2 sudo[266282]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-2 sudo[266307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:35 compute-2 sudo[266307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-2 sudo[266307]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-2 sudo[266333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:35 compute-2 sudo[266333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-2 sudo[266333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-2 sudo[266358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:05:35 compute-2 sudo[266358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-2 nova_compute[232428]: 2025-11-29 08:05:35.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:35 compute-2 ceph-mon[77138]: pgmap v1939: 305 pgs: 305 active+clean; 176 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Nov 29 08:05:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1798689061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:05:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:05:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:05:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:05:35 compute-2 nova_compute[232428]: 2025-11-29 08:05:35.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:36 compute-2 sudo[266358]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:36 compute-2 ovn_controller[134375]: 2025-11-29T08:05:36Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:df:cf 10.100.0.4
Nov 29 08:05:36 compute-2 ovn_controller[134375]: 2025-11-29T08:05:36Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:df:cf 10.100.0.4
Nov 29 08:05:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:36 compute-2 podman[266413]: 2025-11-29 08:05:36.672144865 +0000 UTC m=+0.069653079 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:36.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:36 compute-2 ceph-mon[77138]: pgmap v1940: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:05:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:37 compute-2 nova_compute[232428]: 2025-11-29 08:05:37.963 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:37 compute-2 nova_compute[232428]: 2025-11-29 08:05:37.965 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:38.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:38 compute-2 nova_compute[232428]: 2025-11-29 08:05:38.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:39 compute-2 ceph-mon[77138]: pgmap v1941: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 29 08:05:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.249 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.250 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:40.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:40 compute-2 nova_compute[232428]: 2025-11-29 08:05:40.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:41 compute-2 ceph-mon[77138]: pgmap v1942: 305 pgs: 305 active+clean; 226 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.8 MiB/s wr, 145 op/s
Nov 29 08:05:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3865215440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:41 compute-2 nova_compute[232428]: 2025-11-29 08:05:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:41 compute-2 podman[266435]: 2025-11-29 08:05:41.704864382 +0000 UTC m=+0.097192821 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:41.897 143912 DEBUG eventlet.wsgi.server [-] (143912) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:41.899 143912 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: Accept: */*
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: Connection: close
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: Content-Type: text/plain
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: Host: 169.254.169.254
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: User-Agent: curl/7.84.0
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: X-Forwarded-For: 10.100.0.4
Nov 29 08:05:41 compute-2 ovn_metadata_agent[143796]: X-Ovn-Network-Id: b73b21a1-05be-4ae9-bc63-7177a7f45f24 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 29 08:05:42 compute-2 nova_compute[232428]: 2025-11-29 08:05:42.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3971483710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2052017598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:05:42 compute-2 sudo[266457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:42 compute-2 sudo[266457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:42 compute-2 sudo[266457]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:42 compute-2 sudo[266482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:05:42 compute-2 sudo[266482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:42 compute-2 sudo[266482]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:42.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:42.885 143912 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 29 08:05:42 compute-2 haproxy-metadata-proxy-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266094]: 10.100.0.4:58364 [29/Nov/2025:08:05:41.896] listener listener/metadata 0/0/0/990/990 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 08:05:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:42.887 143912 INFO eventlet.wsgi.server [-] 10.100.0.4,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1673 time: 0.9877591
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.075 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.249 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.250 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.251 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.251 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.252 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.255 232432 INFO nova.compute.manager [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Terminating instance
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.256 232432 DEBUG nova.compute.manager [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:05:43 compute-2 ceph-mon[77138]: pgmap v1943: 305 pgs: 305 active+clean; 187 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 247 op/s
Nov 29 08:05:43 compute-2 kernel: tap146a0516-d2 (unregistering): left promiscuous mode
Nov 29 08:05:43 compute-2 NetworkManager[48993]: <info>  [1764403543.3947] device (tap146a0516-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:05:43 compute-2 ovn_controller[134375]: 2025-11-29T08:05:43Z|00323|binding|INFO|Releasing lport 146a0516-d25a-4ab5-b910-9e61df665977 from this chassis (sb_readonly=0)
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.403 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 ovn_controller[134375]: 2025-11-29T08:05:43Z|00324|binding|INFO|Setting lport 146a0516-d25a-4ab5-b910-9e61df665977 down in Southbound
Nov 29 08:05:43 compute-2 ovn_controller[134375]: 2025-11-29T08:05:43Z|00325|binding|INFO|Removing iface tap146a0516-d2 ovn-installed in OVS
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.411 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:df:cf 10.100.0.4'], port_security=['fa:16:3e:59:df:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bd6a38e4-b49a-4936-8b8c-c399f5560b72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90b6d3cc4af4471cb593f860c98e0cba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b70e5fd2-eb68-47ee-9e4c-5cc6823e04f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79c9873d-1a8f-4030-9b43-0ba3c90c94bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=146a0516-d25a-4ab5-b910-9e61df665977) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.413 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 146a0516-d25a-4ab5-b910-9e61df665977 in datapath b73b21a1-05be-4ae9-bc63-7177a7f45f24 unbound from our chassis
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.414 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b73b21a1-05be-4ae9-bc63-7177a7f45f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.415 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c9454d-c207-43a2-b389-d3e0bb1e6d59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.416 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24 namespace which is not needed anymore
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 29 08:05:43 compute-2 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004d.scope: Consumed 14.405s CPU time.
Nov 29 08:05:43 compute-2 systemd-machined[194747]: Machine qemu-33-instance-0000004d terminated.
Nov 29 08:05:43 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [NOTICE]   (266092) : haproxy version is 2.8.14-c23fe91
Nov 29 08:05:43 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [NOTICE]   (266092) : path to executable is /usr/sbin/haproxy
Nov 29 08:05:43 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [WARNING]  (266092) : Exiting Master process...
Nov 29 08:05:43 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [ALERT]    (266092) : Current worker (266094) exited with code 143 (Terminated)
Nov 29 08:05:43 compute-2 neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24[266088]: [WARNING]  (266092) : All workers exited. Exiting... (0)
Nov 29 08:05:43 compute-2 systemd[1]: libpod-20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a.scope: Deactivated successfully.
Nov 29 08:05:43 compute-2 podman[266532]: 2025-11-29 08:05:43.579524033 +0000 UTC m=+0.050674516 container died 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:05:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:05:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-56842b21287f417162d48b105240beeac4e4c3c1c2f5c987aff212c9ab6933b1-merged.mount: Deactivated successfully.
Nov 29 08:05:43 compute-2 podman[266532]: 2025-11-29 08:05:43.698644449 +0000 UTC m=+0.169794972 container cleanup 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.716 232432 INFO nova.virt.libvirt.driver [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Instance destroyed successfully.
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.718 232432 DEBUG nova.objects.instance [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lazy-loading 'resources' on Instance uuid bd6a38e4-b49a-4936-8b8c-c399f5560b72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:43 compute-2 systemd[1]: libpod-conmon-20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a.scope: Deactivated successfully.
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.739 232432 DEBUG nova.virt.libvirt.vif [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=77,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxIiLkFQ1ert7nEL0v58X7WKu3xUh1mWUXmT1JlZHXz/8vh+Td85o1NdCeHiQUXwP7hD8T82BrUqMAoPcaIM+5T8OOw6A6IpooWQgzpb0pQ6VN64N1quToMciGmK+lXuA==',key_name='tempest-keypair-1903166805',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90b6d3cc4af4471cb593f860c98e0cba',ramdisk_id='',reservation_id='r-1lzlqjps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-635415268',owner_user_name='tempest-ServersV294TestFqdnHostnames-635415268-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fea0b5a703d4426882e6691d4313bd30',uuid=bd6a38e4-b49a-4936-8b8c-c399f5560b72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.740 232432 DEBUG nova.network.os_vif_util [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converting VIF {"id": "146a0516-d25a-4ab5-b910-9e61df665977", "address": "fa:16:3e:59:df:cf", "network": {"id": "b73b21a1-05be-4ae9-bc63-7177a7f45f24", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1398348953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90b6d3cc4af4471cb593f860c98e0cba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146a0516-d2", "ovs_interfaceid": "146a0516-d25a-4ab5-b910-9e61df665977", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.741 232432 DEBUG nova.network.os_vif_util [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.741 232432 DEBUG os_vif [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.743 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.744 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap146a0516-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.745 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.747 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.750 232432 INFO os_vif [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:df:cf,bridge_name='br-int',has_traffic_filtering=True,id=146a0516-d25a-4ab5-b910-9e61df665977,network=Network(b73b21a1-05be-4ae9-bc63-7177a7f45f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146a0516-d2')
Nov 29 08:05:43 compute-2 podman[266572]: 2025-11-29 08:05:43.782526313 +0000 UTC m=+0.049698386 container remove 20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.790 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[803af587-36f5-45b9-9085-b425bd8ca5f2]: (4, ('Sat Nov 29 08:05:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24 (20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a)\n20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a\nSat Nov 29 08:05:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24 (20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a)\n20529cfc360aa92662992e8dcd4bdbb0cec4ceb3de22596d6510067ea4de190a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.792 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8313b64b-3d39-4510-982b-69bf8e00b849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.794 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb73b21a1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 kernel: tapb73b21a1-00: left promiscuous mode
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.799 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.802 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ac851939-2b34-4501-8896-193b14a789fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 nova_compute[232428]: 2025-11-29 08:05:43.814 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.822 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4f23be1d-3fd0-4fa2-9bc6-cad76955b5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.823 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[23ce4840-3eb7-4195-8c53-e776de062773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.841 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[402b1e4b-ec9b-44c8-9101-5a62a8df34e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648862, 'reachable_time': 40508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266607, 'error': None, 'target': 'ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.845 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b73b21a1-05be-4ae9-bc63-7177a7f45f24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:05:43.846 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[aebd26a4-e923-486a-866f-74e379fdd5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:43 compute-2 systemd[1]: run-netns-ovnmeta\x2db73b21a1\x2d05be\x2d4ae9\x2dbc63\x2d7177a7f45f24.mount: Deactivated successfully.
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.270 232432 INFO nova.virt.libvirt.driver [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Deleting instance files /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72_del
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.272 232432 INFO nova.virt.libvirt.driver [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Deletion of /var/lib/nova/instances/bd6a38e4-b49a-4936-8b8c-c399f5560b72_del complete
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.501 232432 INFO nova.compute.manager [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Took 1.24 seconds to destroy the instance on the hypervisor.
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.502 232432 DEBUG oslo.service.loopingcall [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.503 232432 DEBUG nova.compute.manager [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:05:44 compute-2 nova_compute[232428]: 2025-11-29 08:05:44.503 232432 DEBUG nova.network.neutron [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:05:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:44.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2542148193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:05:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:44.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:05:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.550 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:45 compute-2 ceph-mon[77138]: pgmap v1944: 305 pgs: 305 active+clean; 148 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 239 op/s
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.781 232432 DEBUG nova.network.neutron [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.810 232432 INFO nova.compute.manager [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Took 1.31 seconds to deallocate network for instance.
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.856 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.856 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.883 232432 DEBUG nova.compute.manager [req-de32ac03-c6e5-4389-a331-79f7ff487d06 req-d90736ee-dcdf-4d18-9a27-901e1b1b4ee6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Received event network-vif-deleted-146a0516-d25a-4ab5-b910-9e61df665977 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:45 compute-2 nova_compute[232428]: 2025-11-29 08:05:45.910 232432 DEBUG oslo_concurrency.processutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-2 sudo[266612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:46 compute-2 sudo[266612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:46 compute-2 sudo[266612]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:46 compute-2 sudo[266656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:46 compute-2 sudo[266656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:46 compute-2 sudo[266656]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.223 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2632986959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.415 232432 DEBUG oslo_concurrency.processutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.428 232432 DEBUG nova.compute.provider_tree [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.445 232432 DEBUG nova.scheduler.client.report [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.478 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.483 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.484 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.484 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.485 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.583 232432 INFO nova.scheduler.client.report [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Deleted allocations for instance bd6a38e4-b49a-4936-8b8c-c399f5560b72
Nov 29 08:05:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4115316929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2632986959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.864 232432 DEBUG oslo_concurrency.lockutils [None req-9520bd29-3acd-43af-8a82-f9e247910dec fea0b5a703d4426882e6691d4313bd30 90b6d3cc4af4471cb593f860c98e0cba - - default default] Lock "bd6a38e4-b49a-4936-8b8c-c399f5560b72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2205277719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:46 compute-2 nova_compute[232428]: 2025-11-29 08:05:46.978 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.148 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.149 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4538MB free_disk=20.972389221191406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.150 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.150 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.259 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.260 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.287 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:47 compute-2 ceph-mon[77138]: pgmap v1945: 305 pgs: 305 active+clean; 67 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Nov 29 08:05:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3552444880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2205277719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2449904932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1797006350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.794 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.799 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.814 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.835 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:05:47 compute-2 nova_compute[232428]: 2025-11-29 08:05:47.835 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:05:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:05:48 compute-2 nova_compute[232428]: 2025-11-29 08:05:48.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:05:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:05:48 compute-2 nova_compute[232428]: 2025-11-29 08:05:48.836 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1797006350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3851234073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:50.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:50 compute-2 nova_compute[232428]: 2025-11-29 08:05:50.552 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-2 podman[266730]: 2025-11-29 08:05:50.701454921 +0000 UTC m=+0.103873249 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:05:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:50.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:51 compute-2 ceph-mon[77138]: pgmap v1946: 305 pgs: 305 active+clean; 67 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 205 op/s
Nov 29 08:05:52 compute-2 ceph-mon[77138]: pgmap v1947: 305 pgs: 305 active+clean; 41 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Nov 29 08:05:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3136847276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1559062296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:05:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:05:53 compute-2 ceph-mon[77138]: pgmap v1948: 305 pgs: 305 active+clean; 115 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Nov 29 08:05:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3825691271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/785794356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:53 compute-2 nova_compute[232428]: 2025-11-29 08:05:53.752 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:55 compute-2 nova_compute[232428]: 2025-11-29 08:05:55.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:55 compute-2 ceph-mon[77138]: pgmap v1949: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.6 MiB/s wr, 120 op/s
Nov 29 08:05:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:56.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:57 compute-2 ceph-mon[77138]: pgmap v1950: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Nov 29 08:05:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:58 compute-2 nova_compute[232428]: 2025-11-29 08:05:58.712 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403543.7100446, bd6a38e4-b49a-4936-8b8c-c399f5560b72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:58 compute-2 nova_compute[232428]: 2025-11-29 08:05:58.712 232432 INFO nova.compute.manager [-] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] VM Stopped (Lifecycle Event)
Nov 29 08:05:58 compute-2 nova_compute[232428]: 2025-11-29 08:05:58.739 232432 DEBUG nova.compute.manager [None req-1e17640c-16c5-42f1-a116-ed5e8d123a0c - - - - - -] [instance: bd6a38e4-b49a-4936-8b8c-c399f5560b72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:58 compute-2 nova_compute[232428]: 2025-11-29 08:05:58.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:05:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:05:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:58.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:05:59 compute-2 ceph-mon[77138]: pgmap v1951: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Nov 29 08:05:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.224 232432 DEBUG nova.compute.manager [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.298 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.299 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.321 232432 DEBUG nova.objects.instance [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_requests' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.335 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.335 232432 INFO nova.compute.claims [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.336 232432 DEBUG nova.objects.instance [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.365 232432 DEBUG nova.objects.instance [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.435 232432 INFO nova.compute.resource_tracker [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating resource usage from migration c0d2572c-886f-4b30-9307-63a2558060db
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.436 232432 DEBUG nova.compute.resource_tracker [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Starting to track incoming migration c0d2572c-886f-4b30-9307-63a2558060db with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.484 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.555 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:00.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1600442826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.971 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:00 compute-2 nova_compute[232428]: 2025-11-29 08:06:00.981 232432 DEBUG nova.compute.provider_tree [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:01 compute-2 nova_compute[232428]: 2025-11-29 08:06:01.033 232432 DEBUG nova.scheduler.client.report [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:01 compute-2 nova_compute[232428]: 2025-11-29 08:06:01.087 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:01 compute-2 nova_compute[232428]: 2025-11-29 08:06:01.088 232432 INFO nova.compute.manager [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Migrating
Nov 29 08:06:01 compute-2 ceph-mon[77138]: pgmap v1952: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 29 08:06:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1600442826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:02.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:02 compute-2 ceph-mon[77138]: pgmap v1953: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 29 08:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:03.310 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:03.311 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:03.312 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:03 compute-2 nova_compute[232428]: 2025-11-29 08:06:03.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:04.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:05 compute-2 ceph-mon[77138]: pgmap v1954: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 598 KiB/s wr, 148 op/s
Nov 29 08:06:05 compute-2 nova_compute[232428]: 2025-11-29 08:06:05.558 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:06 compute-2 sudo[266788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:06 compute-2 sudo[266788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:06 compute-2 sudo[266788]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:06 compute-2 sudo[266815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:06 compute-2 sudo[266815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:06 compute-2 sudo[266815]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:06 compute-2 sshd-session[266811]: Accepted publickey for nova from 192.168.122.101 port 54762 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 08:06:06 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 08:06:06 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 08:06:06 compute-2 systemd-logind[787]: New session 54 of user nova.
Nov 29 08:06:06 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 08:06:06 compute-2 systemd[1]: Starting User Manager for UID 42436...
Nov 29 08:06:06 compute-2 systemd[266842]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:06:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:06 compute-2 systemd[266842]: Queued start job for default target Main User Target.
Nov 29 08:06:06 compute-2 systemd[266842]: Created slice User Application Slice.
Nov 29 08:06:06 compute-2 systemd[266842]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 08:06:06 compute-2 systemd[266842]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 08:06:06 compute-2 systemd[266842]: Reached target Paths.
Nov 29 08:06:06 compute-2 systemd[266842]: Reached target Timers.
Nov 29 08:06:06 compute-2 systemd[266842]: Starting D-Bus User Message Bus Socket...
Nov 29 08:06:06 compute-2 systemd[266842]: Starting Create User's Volatile Files and Directories...
Nov 29 08:06:06 compute-2 systemd[266842]: Listening on D-Bus User Message Bus Socket.
Nov 29 08:06:06 compute-2 systemd[266842]: Reached target Sockets.
Nov 29 08:06:06 compute-2 systemd[266842]: Finished Create User's Volatile Files and Directories.
Nov 29 08:06:06 compute-2 systemd[266842]: Reached target Basic System.
Nov 29 08:06:06 compute-2 systemd[266842]: Reached target Main User Target.
Nov 29 08:06:06 compute-2 systemd[266842]: Startup finished in 177ms.
Nov 29 08:06:06 compute-2 systemd[1]: Started User Manager for UID 42436.
Nov 29 08:06:06 compute-2 systemd[1]: Started Session 54 of User nova.
Nov 29 08:06:06 compute-2 sshd-session[266811]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:06:06 compute-2 sshd-session[266857]: Received disconnect from 192.168.122.101 port 54762:11: disconnected by user
Nov 29 08:06:06 compute-2 sshd-session[266857]: Disconnected from user nova 192.168.122.101 port 54762
Nov 29 08:06:06 compute-2 sshd-session[266811]: pam_unix(sshd:session): session closed for user nova
Nov 29 08:06:06 compute-2 systemd[1]: session-54.scope: Deactivated successfully.
Nov 29 08:06:06 compute-2 systemd-logind[787]: Session 54 logged out. Waiting for processes to exit.
Nov 29 08:06:06 compute-2 systemd-logind[787]: Removed session 54.
Nov 29 08:06:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:06.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:06 compute-2 podman[266859]: 2025-11-29 08:06:06.83412859 +0000 UTC m=+0.076527915 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 08:06:06 compute-2 sshd-session[266866]: Accepted publickey for nova from 192.168.122.101 port 54776 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 08:06:06 compute-2 systemd-logind[787]: New session 56 of user nova.
Nov 29 08:06:06 compute-2 systemd[1]: Started Session 56 of User nova.
Nov 29 08:06:06 compute-2 sshd-session[266866]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:06:06 compute-2 sshd-session[266881]: Received disconnect from 192.168.122.101 port 54776:11: disconnected by user
Nov 29 08:06:06 compute-2 sshd-session[266881]: Disconnected from user nova 192.168.122.101 port 54776
Nov 29 08:06:06 compute-2 sshd-session[266866]: pam_unix(sshd:session): session closed for user nova
Nov 29 08:06:06 compute-2 systemd[1]: session-56.scope: Deactivated successfully.
Nov 29 08:06:06 compute-2 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Nov 29 08:06:06 compute-2 systemd-logind[787]: Removed session 56.
Nov 29 08:06:07 compute-2 ceph-mon[77138]: pgmap v1955: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Nov 29 08:06:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:08 compute-2 nova_compute[232428]: 2025-11-29 08:06:08.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:09 compute-2 ceph-mon[77138]: pgmap v1956: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 85 B/s wr, 92 op/s
Nov 29 08:06:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:10 compute-2 nova_compute[232428]: 2025-11-29 08:06:10.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:11 compute-2 ceph-mon[77138]: pgmap v1957: 305 pgs: 305 active+clean; 137 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 191 KiB/s wr, 106 op/s
Nov 29 08:06:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:12 compute-2 podman[266886]: 2025-11-29 08:06:12.706522547 +0000 UTC m=+0.096300364 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:06:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:13 compute-2 ceph-mon[77138]: pgmap v1958: 305 pgs: 305 active+clean; 188 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 170 op/s
Nov 29 08:06:13 compute-2 nova_compute[232428]: 2025-11-29 08:06:13.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:15 compute-2 ceph-mon[77138]: pgmap v1959: 305 pgs: 305 active+clean; 191 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 532 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Nov 29 08:06:15 compute-2 nova_compute[232428]: 2025-11-29 08:06:15.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:16.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:17 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 08:06:17 compute-2 systemd[266842]: Activating special unit Exit the Session...
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped target Main User Target.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped target Basic System.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped target Paths.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped target Sockets.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped target Timers.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 08:06:17 compute-2 systemd[266842]: Closed D-Bus User Message Bus Socket.
Nov 29 08:06:17 compute-2 systemd[266842]: Stopped Create User's Volatile Files and Directories.
Nov 29 08:06:17 compute-2 systemd[266842]: Removed slice User Application Slice.
Nov 29 08:06:17 compute-2 systemd[266842]: Reached target Shutdown.
Nov 29 08:06:17 compute-2 systemd[266842]: Finished Exit the Session.
Nov 29 08:06:17 compute-2 systemd[266842]: Reached target Exit the Session.
Nov 29 08:06:17 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 08:06:17 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 08:06:17 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 08:06:17 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 08:06:17 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 08:06:17 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 08:06:17 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 08:06:17 compute-2 ceph-mon[77138]: pgmap v1960: 305 pgs: 305 active+clean; 121 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 29 08:06:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/721166876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2636843643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:18.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:18.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.820 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.820 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.852 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.993 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:18 compute-2 nova_compute[232428]: 2025-11-29 08:06:18.994 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.003 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.004 232432 INFO nova.compute.claims [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.175 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:19 compute-2 ceph-mon[77138]: pgmap v1961: 305 pgs: 305 active+clean; 121 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 29 08:06:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1116194179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.650 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.657 232432 DEBUG nova.compute.provider_tree [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.676 232432 DEBUG nova.scheduler.client.report [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.702 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.702 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.747 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.748 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.769 232432 INFO nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.787 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.978 232432 DEBUG nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.978 232432 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.979 232432 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.979 232432 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.980 232432 DEBUG nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.980 232432 WARNING nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state resize_migrating.
Nov 29 08:06:19 compute-2 nova_compute[232428]: 2025-11-29 08:06:19.987 232432 DEBUG nova.policy [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '484a7cf7f6cc49de97903a4efa4db0a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:06:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.051 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.053 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.054 232432 INFO nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Creating image(s)
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.090 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.124 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.158 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.163 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.248 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.249 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.250 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.250 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.278 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.282 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1116194179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:20.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.564 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.823 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.891 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] resizing rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:06:20 compute-2 nova_compute[232428]: 2025-11-29 08:06:20.989 232432 DEBUG nova.objects.instance [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'migration_context' on Instance uuid 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.003 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.004 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Ensure instance console log exists: /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.004 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.004 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.004 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.164 232432 INFO nova.network.neutron [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 08:06:21 compute-2 ceph-mon[77138]: pgmap v1962: 305 pgs: 305 active+clean; 138 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Nov 29 08:06:21 compute-2 podman[267101]: 2025-11-29 08:06:21.69599097 +0000 UTC m=+0.098064187 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:06:21 compute-2 nova_compute[232428]: 2025-11-29 08:06:21.878 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully created port: 61d720ff-b465-41b0-a524-639d10a26a68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.070 232432 DEBUG nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.070 232432 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.071 232432 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.071 232432 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.071 232432 DEBUG nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.072 232432 WARNING nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state resize_migrated.
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.172 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.173 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.173 232432 DEBUG nova.network.neutron [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.294 232432 DEBUG nova.compute.manager [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.295 232432 DEBUG nova.compute.manager [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing instance network info cache due to event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.295 232432 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:22.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:22 compute-2 nova_compute[232428]: 2025-11-29 08:06:22.880 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully created port: 76c3d8fe-9739-4a69-9e68-39abbf4ff51e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:23 compute-2 ceph-mon[77138]: pgmap v1963: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.9 MiB/s wr, 229 op/s
Nov 29 08:06:23 compute-2 nova_compute[232428]: 2025-11-29 08:06:23.809 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:23 compute-2 nova_compute[232428]: 2025-11-29 08:06:23.917 232432 DEBUG nova.network.neutron [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:23 compute-2 nova_compute[232428]: 2025-11-29 08:06:23.949 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:23 compute-2 nova_compute[232428]: 2025-11-29 08:06:23.952 232432 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:23 compute-2 nova_compute[232428]: 2025-11-29 08:06:23.953 232432 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.042 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.043 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.044 232432 INFO nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Creating image(s)
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.081 232432 DEBUG nova.storage.rbd_utils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] creating snapshot(nova-resize) on rbd image(8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:06:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:24.205 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:24.206 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.206 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.479 232432 DEBUG nova.objects.instance [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:24.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.605 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.606 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Ensure instance console log exists: /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.607 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.607 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.607 232432 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.609 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start _get_guest_xml network_info=[{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.613 232432 WARNING nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.617 232432 DEBUG nova.virt.libvirt.host [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.618 232432 DEBUG nova.virt.libvirt.host [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.620 232432 DEBUG nova.virt.libvirt.host [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.621 232432 DEBUG nova.virt.libvirt.host [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.622 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.622 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.622 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.622 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.623 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.623 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.623 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.623 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.624 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.624 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.624 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.624 232432 DEBUG nova.virt.hardware [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.625 232432 DEBUG nova.objects.instance [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.649 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:24 compute-2 nova_compute[232428]: 2025-11-29 08:06:24.683 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully created port: 3d75a706-b904-4825-8922-462a43bc5d07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:24.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:24 compute-2 ovn_controller[134375]: 2025-11-29T08:06:24Z|00326|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:06:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1203079137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.076 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.117 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:25 compute-2 ceph-mon[77138]: pgmap v1964: 305 pgs: 305 active+clean; 171 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Nov 29 08:06:25 compute-2 ceph-mon[77138]: osdmap e257: 3 total, 3 up, 3 in
Nov 29 08:06:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1245065104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1203079137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1915432355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.558 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully updated port: 61d720ff-b465-41b0-a524-639d10a26a68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.567 232432 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.569 232432 DEBUG nova.virt.libvirt.vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:20Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.570 232432 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.570 232432 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.573 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <uuid>8f9bb224-0119-4a96-9859-d3afda2ab1ce</uuid>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <name>instance-00000050</name>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:name>tempest-DeleteServersTestJSON-server-373302301</nova:name>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:06:24</nova:creationTime>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <nova:port uuid="8b53507f-acc1-4e75-a82d-55c4a7d7abd7">
Nov 29 08:06:25 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <system>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="serial">8f9bb224-0119-4a96-9859-d3afda2ab1ce</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="uuid">8f9bb224-0119-4a96-9859-d3afda2ab1ce</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </system>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <os>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </os>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <features>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </features>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk">
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config">
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:06:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:6c:22:bc"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <target dev="tap8b53507f-ac"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/console.log" append="off"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <video>
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </video>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:06:25 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:06:25 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:06:25 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:06:25 compute-2 nova_compute[232428]: </domain>
Nov 29 08:06:25 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.575 232432 DEBUG nova.virt.libvirt.vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:20Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.576 232432 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.576 232432 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.577 232432 DEBUG os_vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.578 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.579 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.580 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.584 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b53507f-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.584 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b53507f-ac, col_values=(('external_ids', {'iface-id': '8b53507f-acc1-4e75-a82d-55c4a7d7abd7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:22:bc', 'vm-uuid': '8f9bb224-0119-4a96-9859-d3afda2ab1ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.5867] manager: (tap8b53507f-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.593 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.594 232432 INFO os_vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac')
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.643 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.643 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.644 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:6c:22:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.644 232432 INFO nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Using config drive
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.686 232432 DEBUG nova.compute.manager [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-changed-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.687 232432 DEBUG nova.compute.manager [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing instance network info cache due to event network-changed-61d720ff-b465-41b0-a524-639d10a26a68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.687 232432 DEBUG oslo_concurrency.lockutils [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.688 232432 DEBUG oslo_concurrency.lockutils [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.688 232432 DEBUG nova.network.neutron [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing network info cache for port 61d720ff-b465-41b0-a524-639d10a26a68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.721 232432 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updated VIF entry in instance network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.722 232432 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.736 232432 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:25 compute-2 kernel: tap8b53507f-ac: entered promiscuous mode
Nov 29 08:06:25 compute-2 ovn_controller[134375]: 2025-11-29T08:06:25Z|00327|binding|INFO|Claiming lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for this chassis.
Nov 29 08:06:25 compute-2 ovn_controller[134375]: 2025-11-29T08:06:25Z|00328|binding|INFO|8b53507f-acc1-4e75-a82d-55c4a7d7abd7: Claiming fa:16:3e:6c:22:bc 10.100.0.7
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.741 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.7436] manager: (tap8b53507f-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.745 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.750 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.760 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:22:bc 10.100.0.7'], port_security=['fa:16:3e:6c:22:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8f9bb224-0119-4a96-9859-d3afda2ab1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=8b53507f-acc1-4e75-a82d-55c4a7d7abd7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.762 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.763 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:06:25 compute-2 systemd-machined[194747]: New machine qemu-34-instance-00000050.
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.777 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[64c9a61d-4dfa-470b-8820-42e6708699d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.779 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.780 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.781 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc74e18-9410-4a2b-8fe7-e1cb948789b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.781 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10b894e5-a858-44be-aa7d-1e8bfddd46aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.795 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b02205e3-3702-4e4c-a2f5-669211688a85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 systemd[1]: Started Virtual Machine qemu-34-instance-00000050.
Nov 29 08:06:25 compute-2 systemd-udevd[267298]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.823 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0e351753-1ec0-4947-8988-97969089b500]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.8293] device (tap8b53507f-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.8307] device (tap8b53507f-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.832 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 ovn_controller[134375]: 2025-11-29T08:06:25Z|00329|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 ovn-installed in OVS
Nov 29 08:06:25 compute-2 ovn_controller[134375]: 2025-11-29T08:06:25Z|00330|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 up in Southbound
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.837 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.859 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bc8a4d-04df-4d9c-9c94-52b78add6967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.864 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[033a955d-a7f4-43cc-91b5-e60da7f73573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.8658] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.895 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b24684aa-bbeb-4a13-9076-edd04f579352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.897 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7d9fd-9e56-4f6c-9afb-bc709cf51afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 nova_compute[232428]: 2025-11-29 08:06:25.915 232432 DEBUG nova.network.neutron [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:25 compute-2 NetworkManager[48993]: <info>  [1764403585.9219] device (tapd5e42602-d0): carrier: link connected
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.928 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[09d2f896-7ff5-4358-957a-243684858cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.945 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5776670-e4cf-42ca-a076-63ac4726faff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655218, 'reachable_time': 37266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267328, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.964 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6552a8-a0f9-47a2-921e-f71880e309ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655218, 'tstamp': 655218}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267329, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:25.986 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[096e4430-bde3-40bf-89b7-824f106186ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655218, 'reachable_time': 37266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267330, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.018 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37a150d9-ecd8-4199-b93a-6a1ddc6ef567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.086 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b591e40-489c-43b1-a942-b319a12346fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.087 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.087 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.088 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.089 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:26 compute-2 NetworkManager[48993]: <info>  [1764403586.0905] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 08:06:26 compute-2 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.093 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:26 compute-2 ovn_controller[134375]: 2025-11-29T08:06:26Z|00331|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.095 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.096 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.096 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e58ee824-f018-4711-bc73-d7836ecd0f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.098 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.098 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.109 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:26.209 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.284 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403586.2832932, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.285 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Resumed (Lifecycle Event)
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.288 232432 DEBUG nova.compute.manager [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.294 232432 INFO nova.virt.libvirt.driver [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance running successfully.
Nov 29 08:06:26 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.297 232432 DEBUG nova.virt.libvirt.guest [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.297 232432 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.342 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.345 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.381 232432 DEBUG nova.network.neutron [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.387 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.387 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403586.2852263, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.388 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Started (Lifecycle Event)
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.434 232432 DEBUG oslo_concurrency.lockutils [req-e2495899-48d2-443e-9527-1a4c9ee05b9c req-9be3a1b5-6cb9-4a30-b053-c1b291684a84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.435 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.441 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:26 compute-2 sudo[267396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:26 compute-2 sudo[267396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1915432355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3664529898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:26 compute-2 sudo[267396]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.473 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 08:06:26 compute-2 podman[267422]: 2025-11-29 08:06:26.492877493 +0000 UTC m=+0.059488482 container create 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:06:26 compute-2 sudo[267438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:26 compute-2 sudo[267438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:26 compute-2 sudo[267438]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:26 compute-2 systemd[1]: Started libpod-conmon-58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912.scope.
Nov 29 08:06:26 compute-2 podman[267422]: 2025-11-29 08:06:26.463972758 +0000 UTC m=+0.030583777 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:06:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:26 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:06:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cd2e9f3d08b16b9780413dcafea67e7749f95af5146abcec093cce2ede4458/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:26 compute-2 podman[267422]: 2025-11-29 08:06:26.599661251 +0000 UTC m=+0.166272260 container init 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:06:26 compute-2 podman[267422]: 2025-11-29 08:06:26.605091582 +0000 UTC m=+0.171702571 container start 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:06:26 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [NOTICE]   (267470) : New worker (267472) forked
Nov 29 08:06:26 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [NOTICE]   (267470) : Loading success.
Nov 29 08:06:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:26 compute-2 nova_compute[232428]: 2025-11-29 08:06:26.992 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully updated port: 76c3d8fe-9739-4a69-9e68-39abbf4ff51e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.340 232432 DEBUG nova.compute.manager [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-changed-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.341 232432 DEBUG nova.compute.manager [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing instance network info cache due to event network-changed-76c3d8fe-9739-4a69-9e68-39abbf4ff51e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.341 232432 DEBUG oslo_concurrency.lockutils [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.341 232432 DEBUG oslo_concurrency.lockutils [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.341 232432 DEBUG nova.network.neutron [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing network info cache for port 76c3d8fe-9739-4a69-9e68-39abbf4ff51e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:27 compute-2 ceph-mon[77138]: pgmap v1966: 305 pgs: 305 active+clean; 213 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 259 op/s
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.762 232432 DEBUG nova.network.neutron [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.846 232432 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.847 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.847 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.848 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.848 232432 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.848 232432 WARNING nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state resized and task_state deleting.
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.848 232432 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.849 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.849 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.849 232432 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.849 232432 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:27 compute-2 nova_compute[232428]: 2025-11-29 08:06:27.849 232432 WARNING nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state resized and task_state deleting.
Nov 29 08:06:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/831930585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/831930585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.246 232432 DEBUG nova.network.neutron [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.280 232432 DEBUG oslo_concurrency.lockutils [req-1ae8290a-32a8-4223-b738-0821a8d3ffff req-62c42648-63cc-4955-8a74-7db4aef6c15e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.423 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Successfully updated port: 3d75a706-b904-4825-8922-462a43bc5d07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.480 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.480 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquired lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.480 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/831930585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/831930585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:28 compute-2 nova_compute[232428]: 2025-11-29 08:06:28.723 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:28.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:29 compute-2 ceph-mon[77138]: pgmap v1967: 305 pgs: 305 active+clean; 213 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 259 op/s
Nov 29 08:06:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:30 compute-2 nova_compute[232428]: 2025-11-29 08:06:30.245 232432 DEBUG nova.compute.manager [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-changed-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:30 compute-2 nova_compute[232428]: 2025-11-29 08:06:30.245 232432 DEBUG nova.compute.manager [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing instance network info cache due to event network-changed-3d75a706-b904-4825-8922-462a43bc5d07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:30 compute-2 nova_compute[232428]: 2025-11-29 08:06:30.245 232432 DEBUG oslo_concurrency.lockutils [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:30 compute-2 nova_compute[232428]: 2025-11-29 08:06:30.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 08:06:30 compute-2 nova_compute[232428]: 2025-11-29 08:06:30.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.513953) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591514035, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2367, "num_deletes": 259, "total_data_size": 5389298, "memory_usage": 5468696, "flush_reason": "Manual Compaction"}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591540924, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3530624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37575, "largest_seqno": 39937, "table_properties": {"data_size": 3521047, "index_size": 5943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20532, "raw_average_key_size": 20, "raw_value_size": 3501591, "raw_average_value_size": 3501, "num_data_blocks": 258, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403397, "oldest_key_time": 1764403397, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 27091 microseconds, and 15780 cpu microseconds.
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.541029) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3530624 bytes OK
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.541077) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.543473) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.543498) EVENT_LOG_v1 {"time_micros": 1764403591543490, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.543523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5378842, prev total WAL file size 5378842, number of live WAL files 2.
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.546259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303131' seq:72057594037927935, type:22 .. '6C6F676D0031323634' seq:0, type:0; will stop at (end)
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3447KB)], [69(9361KB)]
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591546356, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 13116417, "oldest_snapshot_seqno": -1}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7026 keys, 12955176 bytes, temperature: kUnknown
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591649277, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12955176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12905298, "index_size": 31206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180740, "raw_average_key_size": 25, "raw_value_size": 12776642, "raw_average_value_size": 1818, "num_data_blocks": 1250, "num_entries": 7026, "num_filter_entries": 7026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.649639) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12955176 bytes
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.651071) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.1 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.7) OK, records in: 7563, records dropped: 537 output_compression: NoCompression
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.651093) EVENT_LOG_v1 {"time_micros": 1764403591651082, "job": 42, "event": "compaction_finished", "compaction_time_micros": 103060, "compaction_time_cpu_micros": 29760, "output_level": 6, "num_output_files": 1, "total_output_size": 12955176, "num_input_records": 7563, "num_output_records": 7026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591651925, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591654353, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.546187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.654407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.654415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.654418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.654421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:31.654424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:31 compute-2 ceph-mon[77138]: pgmap v1968: 305 pgs: 305 active+clean; 221 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 270 op/s
Nov 29 08:06:31 compute-2 ceph-mon[77138]: osdmap e258: 3 total, 3 up, 3 in
Nov 29 08:06:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/395441916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.010 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.011 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.012 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.012 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.013 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.014 232432 INFO nova.compute.manager [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Terminating instance
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.016 232432 DEBUG nova.compute.manager [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:06:32 compute-2 kernel: tap8b53507f-ac (unregistering): left promiscuous mode
Nov 29 08:06:32 compute-2 NetworkManager[48993]: <info>  [1764403592.0793] device (tap8b53507f-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:06:32 compute-2 ovn_controller[134375]: 2025-11-29T08:06:32Z|00332|binding|INFO|Releasing lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 from this chassis (sb_readonly=0)
Nov 29 08:06:32 compute-2 ovn_controller[134375]: 2025-11-29T08:06:32Z|00333|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 down in Southbound
Nov 29 08:06:32 compute-2 ovn_controller[134375]: 2025-11-29T08:06:32Z|00334|binding|INFO|Removing iface tap8b53507f-ac ovn-installed in OVS
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.094 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.107 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:22:bc 10.100.0.7'], port_security=['fa:16:3e:6c:22:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8f9bb224-0119-4a96-9859-d3afda2ab1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=8b53507f-acc1-4e75-a82d-55c4a7d7abd7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.108 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.110 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.111 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3223a5-8717-4138-a02e-e220de169a76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.112 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.112 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 29 08:06:32 compute-2 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Consumed 6.440s CPU time.
Nov 29 08:06:32 compute-2 systemd-machined[194747]: Machine qemu-34-instance-00000050 terminated.
Nov 29 08:06:32 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [NOTICE]   (267470) : haproxy version is 2.8.14-c23fe91
Nov 29 08:06:32 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [NOTICE]   (267470) : path to executable is /usr/sbin/haproxy
Nov 29 08:06:32 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [WARNING]  (267470) : Exiting Master process...
Nov 29 08:06:32 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [ALERT]    (267470) : Current worker (267472) exited with code 143 (Terminated)
Nov 29 08:06:32 compute-2 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[267466]: [WARNING]  (267470) : All workers exited. Exiting... (0)
Nov 29 08:06:32 compute-2 systemd[1]: libpod-58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912.scope: Deactivated successfully.
Nov 29 08:06:32 compute-2 podman[267509]: 2025-11-29 08:06:32.242647685 +0000 UTC m=+0.047855508 container died 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.255 232432 INFO nova.virt.libvirt.driver [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance destroyed successfully.
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.256 232432 DEBUG nova.objects.instance [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:32 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912-userdata-shm.mount: Deactivated successfully.
Nov 29 08:06:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-63cd2e9f3d08b16b9780413dcafea67e7749f95af5146abcec093cce2ede4458-merged.mount: Deactivated successfully.
Nov 29 08:06:32 compute-2 podman[267509]: 2025-11-29 08:06:32.289969915 +0000 UTC m=+0.095177738 container cleanup 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.296 232432 DEBUG nova.virt.libvirt.vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:26Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.297 232432 DEBUG nova.network.os_vif_util [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.297 232432 DEBUG nova.network.os_vif_util [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.298 232432 DEBUG os_vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.300 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.300 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b53507f-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.305 232432 INFO os_vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac')
Nov 29 08:06:32 compute-2 systemd[1]: libpod-conmon-58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912.scope: Deactivated successfully.
Nov 29 08:06:32 compute-2 podman[267545]: 2025-11-29 08:06:32.35922583 +0000 UTC m=+0.047000970 container remove 58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.365 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[21592632-7b89-4754-b57a-d8f269b2ce55]: (4, ('Sat Nov 29 08:06:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912)\n58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912\nSat Nov 29 08:06:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912)\n58848559d6eded18f3a87b131def60f6616de75a01916791ba1494539b78e912\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.367 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0d317a0e-2cb3-4f18-82f0-602fa0bd2c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.368 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.370 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.383 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.386 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2c27f88b-1422-42a3-af2d-32419426eff7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.404 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e21637-cc64-450d-9012-84bab2bbfd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.406 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6cf57d-c2ef-406c-9fda-e8cd23fb2978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.427 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d639ea87-f5ee-4faf-b918-47895b6f80b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655212, 'reachable_time': 18427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267577, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.431 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:06:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:32.431 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c849a7bd-40ac-481f-bc0b-586374f0635d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:32.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.811 232432 INFO nova.virt.libvirt.driver [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Deleting instance files /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce_del
Nov 29 08:06:32 compute-2 nova_compute[232428]: 2025-11-29 08:06:32.812 232432 INFO nova.virt.libvirt.driver [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Deletion of /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce_del complete
Nov 29 08:06:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:32.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.002 232432 DEBUG nova.compute.manager [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.002 232432 DEBUG oslo_concurrency.lockutils [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.003 232432 DEBUG oslo_concurrency.lockutils [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.003 232432 DEBUG oslo_concurrency.lockutils [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.003 232432 DEBUG nova.compute.manager [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.004 232432 WARNING nova.compute.manager [req-35badcd1-dc6f-4fce-91b3-f691e3ad67dc req-d11066f5-3247-4647-99b1-01f29a048edc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state None.
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.029 232432 INFO nova.compute.manager [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.029 232432 DEBUG oslo.service.loopingcall [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.030 232432 DEBUG nova.compute.manager [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:06:33 compute-2 nova_compute[232428]: 2025-11-29 08:06:33.030 232432 DEBUG nova.network.neutron [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:06:33 compute-2 ceph-mon[77138]: pgmap v1970: 305 pgs: 305 active+clean; 259 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.2 MiB/s wr, 311 op/s
Nov 29 08:06:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1086621362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:34.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/234292158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.265 232432 DEBUG nova.compute.manager [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.266 232432 DEBUG oslo_concurrency.lockutils [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.267 232432 DEBUG oslo_concurrency.lockutils [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.268 232432 DEBUG oslo_concurrency.lockutils [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.268 232432 DEBUG nova.compute.manager [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.269 232432 WARNING nova.compute.manager [req-7091b665-fab6-4f94-aa67-54e6bfebaf1d req-e3a93060-9989-4659-809f-d60248e2232c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state None.
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.526 232432 DEBUG nova.network.neutron [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.557 232432 INFO nova.compute.manager [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Took 2.53 seconds to deallocate network for instance.
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.571 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.703 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.703 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.707 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.778 232432 INFO nova.scheduler.client.report [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance 8f9bb224-0119-4a96-9859-d3afda2ab1ce
Nov 29 08:06:35 compute-2 ceph-mon[77138]: pgmap v1971: 305 pgs: 305 active+clean; 235 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 261 op/s
Nov 29 08:06:35 compute-2 nova_compute[232428]: 2025-11-29 08:06:35.937 232432 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:36 compute-2 nova_compute[232428]: 2025-11-29 08:06:36.206 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:36 compute-2 nova_compute[232428]: 2025-11-29 08:06:36.657 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.310 232432 DEBUG nova.network.neutron [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [{"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.407 232432 DEBUG nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-deleted-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.436 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Releasing lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.436 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance network_info: |[{"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.438 232432 DEBUG oslo_concurrency.lockutils [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.438 232432 DEBUG nova.network.neutron [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Refreshing network info cache for port 3d75a706-b904-4825-8922-462a43bc5d07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.446 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Start _get_guest_xml network_info=[{"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.454 232432 WARNING nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.459 232432 DEBUG nova.virt.libvirt.host [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.460 232432 DEBUG nova.virt.libvirt.host [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.463 232432 DEBUG nova.virt.libvirt.host [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.464 232432 DEBUG nova.virt.libvirt.host [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.465 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.465 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.466 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.466 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.466 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.467 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.467 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.467 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.467 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.468 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.468 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.468 232432 DEBUG nova.virt.hardware [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.472 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-2 podman[267584]: 2025-11-29 08:06:37.651187666 +0000 UTC m=+0.055356412 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:06:37 compute-2 ceph-mon[77138]: pgmap v1972: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.843571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597843643, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 325, "num_deletes": 251, "total_data_size": 174985, "memory_usage": 182312, "flush_reason": "Manual Compaction"}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597847504, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 114908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39942, "largest_seqno": 40262, "table_properties": {"data_size": 112839, "index_size": 233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5269, "raw_average_key_size": 18, "raw_value_size": 108801, "raw_average_value_size": 383, "num_data_blocks": 10, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403591, "oldest_key_time": 1764403591, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4024 microseconds, and 2040 cpu microseconds.
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.847595) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 114908 bytes OK
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.847628) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.849106) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.849138) EVENT_LOG_v1 {"time_micros": 1764403597849127, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.849169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 172687, prev total WAL file size 172687, number of live WAL files 2.
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.849862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(112KB)], [72(12MB)]
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597849951, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 13070084, "oldest_snapshot_seqno": -1}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6800 keys, 11021082 bytes, temperature: kUnknown
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597927537, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 11021082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10974549, "index_size": 28437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 176755, "raw_average_key_size": 25, "raw_value_size": 10851445, "raw_average_value_size": 1595, "num_data_blocks": 1126, "num_entries": 6800, "num_filter_entries": 6800, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.927984) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11021082 bytes
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.929230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.9 rd, 141.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.4 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(209.7) write-amplify(95.9) OK, records in: 7310, records dropped: 510 output_compression: NoCompression
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.929253) EVENT_LOG_v1 {"time_micros": 1764403597929240, "job": 44, "event": "compaction_finished", "compaction_time_micros": 77840, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 11021082, "num_input_records": 7310, "num_output_records": 6800, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597929799, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597931929, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.849735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.932101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.932110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.932112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.932113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:06:37.932115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/17896842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:37 compute-2 nova_compute[232428]: 2025-11-29 08:06:37.969 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.006 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.011 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.194 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2771809006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.457 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.460 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.461 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.462 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.464 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.464 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.465 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.466 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.466 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.467 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.469 232432 DEBUG nova.objects.instance [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.514 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <uuid>91cc5e5b-8962-4843-bcd2-c3e0f4c0428d</uuid>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <name>instance-00000051</name>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestMultiNic-server-538600890</nova:name>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:06:37</nova:creationTime>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:user uuid="484a7cf7f6cc49de97903a4efa4db0a5">tempest-ServersTestMultiNic-1863571577-project-member</nova:user>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:project uuid="3fc18ed0bcfe45d99b2965a6745bb628">tempest-ServersTestMultiNic-1863571577</nova:project>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:port uuid="61d720ff-b465-41b0-a524-639d10a26a68">
Nov 29 08:06:38 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.182" ipVersion="4"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:port uuid="76c3d8fe-9739-4a69-9e68-39abbf4ff51e">
Nov 29 08:06:38 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.1.187" ipVersion="4"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <nova:port uuid="3d75a706-b904-4825-8922-462a43bc5d07">
Nov 29 08:06:38 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.106" ipVersion="4"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <system>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="serial">91cc5e5b-8962-4843-bcd2-c3e0f4c0428d</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="uuid">91cc5e5b-8962-4843-bcd2-c3e0f4c0428d</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </system>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <os>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </os>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <features>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </features>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk">
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config">
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:06:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:b1:54:2b"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <target dev="tap61d720ff-b4"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:c1:96:36"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <target dev="tap76c3d8fe-97"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:72:1b:1f"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <target dev="tap3d75a706-b9"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/console.log" append="off"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <video>
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </video>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:06:38 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:06:38 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:06:38 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:06:38 compute-2 nova_compute[232428]: </domain>
Nov 29 08:06:38 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.515 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Preparing to wait for external event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.515 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.516 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.516 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.516 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Preparing to wait for external event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.516 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.517 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.517 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.517 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Preparing to wait for external event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.517 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.518 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.518 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.519 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.519 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.519 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.520 232432 DEBUG os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.521 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.522 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.525 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.525 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61d720ff-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.526 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61d720ff-b4, col_values=(('external_ids', {'iface-id': '61d720ff-b465-41b0-a524-639d10a26a68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:54:2b', 'vm-uuid': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.528 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 NetworkManager[48993]: <info>  [1764403598.5291] manager: (tap61d720ff-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.535 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.536 232432 INFO os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4')
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.538 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-186
3571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.538 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.539 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.540 232432 DEBUG os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.541 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.541 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.542 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.546 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.547 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76c3d8fe-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.548 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76c3d8fe-97, col_values=(('external_ids', {'iface-id': '76c3d8fe-9739-4a69-9e68-39abbf4ff51e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:96:36', 'vm-uuid': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.551 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 NetworkManager[48993]: <info>  [1764403598.5521] manager: (tap76c3d8fe-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.561 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.562 232432 INFO os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97')
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.563 232432 DEBUG nova.virt.libvirt.vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-186
3571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.564 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.565 232432 DEBUG nova.network.os_vif_util [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.566 232432 DEBUG os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.567 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.567 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.571 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d75a706-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.572 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d75a706-b9, col_values=(('external_ids', {'iface-id': '3d75a706-b904-4825-8922-462a43bc5d07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:1b:1f', 'vm-uuid': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.573 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 NetworkManager[48993]: <info>  [1764403598.5746] manager: (tap3d75a706-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 08:06:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.577 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.585 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.587 232432 INFO os_vif [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9')
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.687 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.687 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.688 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No VIF found with MAC fa:16:3e:b1:54:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.688 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No VIF found with MAC fa:16:3e:c1:96:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.688 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No VIF found with MAC fa:16:3e:72:1b:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.689 232432 INFO nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Using config drive
Nov 29 08:06:38 compute-2 nova_compute[232428]: 2025-11-29 08:06:38.719 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:38.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/17896842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2771809006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:39 compute-2 nova_compute[232428]: 2025-11-29 08:06:39.759 232432 INFO nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Creating config drive at /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config
Nov 29 08:06:39 compute-2 nova_compute[232428]: 2025-11-29 08:06:39.764 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrisbpb4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:39 compute-2 ceph-mon[77138]: pgmap v1973: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 29 08:06:39 compute-2 nova_compute[232428]: 2025-11-29 08:06:39.900 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrisbpb4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:39 compute-2 nova_compute[232428]: 2025-11-29 08:06:39.930 232432 DEBUG nova.storage.rbd_utils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:39 compute-2 nova_compute[232428]: 2025-11-29 08:06:39.935 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.096 232432 DEBUG oslo_concurrency.processutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.097 232432 INFO nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Deleting local config drive /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d/disk.config because it was imported into RBD.
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.1491] manager: (tap61d720ff-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 08:06:40 compute-2 kernel: tap61d720ff-b4: entered promiscuous mode
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00335|binding|INFO|Claiming lport 61d720ff-b465-41b0-a524-639d10a26a68 for this chassis.
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00336|binding|INFO|61d720ff-b465-41b0-a524-639d10a26a68: Claiming fa:16:3e:b1:54:2b 10.100.0.182
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.202 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2114] manager: (tap76c3d8fe-97): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.218 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:54:2b 10.100.0.182'], port_security=['fa:16:3e:b1:54:2b 10.100.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.182/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f84e4-c295-4679-89a6-56b8decf7949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e28b5873-678d-422b-a542-c23a88433a5d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=61d720ff-b465-41b0-a524-639d10a26a68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.219 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 61d720ff-b465-41b0-a524-639d10a26a68 in datapath a02f84e4-c295-4679-89a6-56b8decf7949 bound to our chassis
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.221 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a02f84e4-c295-4679-89a6-56b8decf7949
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2331] manager: (tap3d75a706-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 08:06:40 compute-2 systemd-udevd[267750]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:40 compute-2 systemd-udevd[267749]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:40 compute-2 systemd-udevd[267751]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.237 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[663f7157-7f64-420f-8155-2530814ca8a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.238 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa02f84e4-c1 in ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.242 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa02f84e4-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.242 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbc3675-9122-47c6-9f39-9b392eb3c67e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.243 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d3bb5d95-c04b-472f-990c-e2a2eb92379e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2459] device (tap61d720ff-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2471] device (tap61d720ff-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2490] device (tap76c3d8fe-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:40 compute-2 kernel: tap3d75a706-b9: entered promiscuous mode
Nov 29 08:06:40 compute-2 kernel: tap76c3d8fe-97: entered promiscuous mode
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2500] device (tap76c3d8fe-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00337|binding|INFO|Claiming lport 3d75a706-b904-4825-8922-462a43bc5d07 for this chassis.
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00338|binding|INFO|3d75a706-b904-4825-8922-462a43bc5d07: Claiming fa:16:3e:72:1b:1f 10.100.0.106
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00339|binding|INFO|Claiming lport 76c3d8fe-9739-4a69-9e68-39abbf4ff51e for this chassis.
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00340|binding|INFO|76c3d8fe-9739-4a69-9e68-39abbf4ff51e: Claiming fa:16:3e:c1:96:36 10.100.1.187
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.252 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.257 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.257 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[7b24f496-43ba-4d83-b9c7-7ad15b4a74ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00341|binding|INFO|Setting lport 61d720ff-b465-41b0-a524-639d10a26a68 ovn-installed in OVS
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00342|binding|INFO|Setting lport 61d720ff-b465-41b0-a524-639d10a26a68 up in Southbound
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.262 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:96:36 10.100.1.187'], port_security=['fa:16:3e:c1:96:36 10.100.1.187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.187/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25a6412b-a34e-41d2-b367-cc006c085f4a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=76c3d8fe-9739-4a69-9e68-39abbf4ff51e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.263 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:1b:1f 10.100.0.106'], port_security=['fa:16:3e:72:1b:1f 10.100.0.106'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.106/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f84e4-c295-4679-89a6-56b8decf7949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e28b5873-678d-422b-a542-c23a88433a5d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3d75a706-b904-4825-8922-462a43bc5d07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2677] device (tap3d75a706-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.2691] device (tap3d75a706-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:40 compute-2 systemd-machined[194747]: New machine qemu-35-instance-00000051.
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.287 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[44688bdc-9179-4659-9c44-5c73bea22664]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 systemd[1]: Started Virtual Machine qemu-35-instance-00000051.
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00343|binding|INFO|Setting lport 76c3d8fe-9739-4a69-9e68-39abbf4ff51e ovn-installed in OVS
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00344|binding|INFO|Setting lport 76c3d8fe-9739-4a69-9e68-39abbf4ff51e up in Southbound
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00345|binding|INFO|Setting lport 3d75a706-b904-4825-8922-462a43bc5d07 ovn-installed in OVS
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00346|binding|INFO|Setting lport 3d75a706-b904-4825-8922-462a43bc5d07 up in Southbound
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.321 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9943bdd1-0d0a-45d3-81b1-1ca7e8d1070e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.326 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96035889-3de5-4a54-a4c6-05cae5b77655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.3280] manager: (tapa02f84e4-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.362 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a4710e4a-f2da-476f-a834-3e1a5d7b2ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.365 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[aad7e407-8eed-4d11-8d3f-a0a6aabf2497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.3937] device (tapa02f84e4-c0): carrier: link connected
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.399 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17a036-1fb5-44ca-86c6-be289f07278c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.417 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[233b9379-85e9-4bb5-b589-e35ea4b7ba82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f84e4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:34:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656666, 'reachable_time': 15275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267787, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.434 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[66ca1783-2957-40eb-88d3-0eb595032848]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:346b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656666, 'tstamp': 656666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267788, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.452 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6406743d-dd8c-44d2-968a-a11103a67073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f84e4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:34:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656666, 'reachable_time': 15275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267789, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ff79279f-da3b-4e55-a48f-3540fcc512ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.554 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[875ea590-92d6-45d9-8079-c2cd9ff9decd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.556 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f84e4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.556 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.556 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02f84e4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 NetworkManager[48993]: <info>  [1764403600.5597] manager: (tapa02f84e4-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 08:06:40 compute-2 kernel: tapa02f84e4-c0: entered promiscuous mode
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.561 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.561 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa02f84e4-c0, col_values=(('external_ids', {'iface-id': '53d7d9d6-e039-487f-8712-449552e0d9e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.563 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 ovn_controller[134375]: 2025-11-29T08:06:40Z|00347|binding|INFO|Releasing lport 53d7d9d6-e039-487f-8712-449552e0d9e1 from this chassis (sb_readonly=0)
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.573 232432 DEBUG nova.compute.manager [req-e11bd234-ebc6-43b6-8e22-16d3dcd3e89b req-50ef57ef-a6c3-44d7-b6e2-4911fccb1ec2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.573 232432 DEBUG oslo_concurrency.lockutils [req-e11bd234-ebc6-43b6-8e22-16d3dcd3e89b req-50ef57ef-a6c3-44d7-b6e2-4911fccb1ec2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.574 232432 DEBUG oslo_concurrency.lockutils [req-e11bd234-ebc6-43b6-8e22-16d3dcd3e89b req-50ef57ef-a6c3-44d7-b6e2-4911fccb1ec2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.574 232432 DEBUG oslo_concurrency.lockutils [req-e11bd234-ebc6-43b6-8e22-16d3dcd3e89b req-50ef57ef-a6c3-44d7-b6e2-4911fccb1ec2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.574 232432 DEBUG nova.compute.manager [req-e11bd234-ebc6-43b6-8e22-16d3dcd3e89b req-50ef57ef-a6c3-44d7-b6e2-4911fccb1ec2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Processing event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.581 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a02f84e4-c295-4679-89a6-56b8decf7949.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a02f84e4-c295-4679-89a6-56b8decf7949.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.582 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[892eba59-7464-48f2-871c-cf076b71bd1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.584 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-a02f84e4-c295-4679-89a6-56b8decf7949
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/a02f84e4-c295-4679-89a6-56b8decf7949.pid.haproxy
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID a02f84e4-c295-4679-89a6-56b8decf7949
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:06:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:40.584 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'env', 'PROCESS_TAG=haproxy-a02f84e4-c295-4679-89a6-56b8decf7949', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a02f84e4-c295-4679-89a6-56b8decf7949.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.760 232432 DEBUG nova.compute.manager [req-e6ce5b6b-0d9a-4cb2-83b2-65d47d4563cf req-cce920f1-ef08-4279-82f3-c7df6c096c74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.762 232432 DEBUG oslo_concurrency.lockutils [req-e6ce5b6b-0d9a-4cb2-83b2-65d47d4563cf req-cce920f1-ef08-4279-82f3-c7df6c096c74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.762 232432 DEBUG oslo_concurrency.lockutils [req-e6ce5b6b-0d9a-4cb2-83b2-65d47d4563cf req-cce920f1-ef08-4279-82f3-c7df6c096c74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.762 232432 DEBUG oslo_concurrency.lockutils [req-e6ce5b6b-0d9a-4cb2-83b2-65d47d4563cf req-cce920f1-ef08-4279-82f3-c7df6c096c74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:40 compute-2 nova_compute[232428]: 2025-11-29 08:06:40.763 232432 DEBUG nova.compute.manager [req-e6ce5b6b-0d9a-4cb2-83b2-65d47d4563cf req-cce920f1-ef08-4279-82f3-c7df6c096c74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Processing event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:40.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:40 compute-2 ceph-mon[77138]: pgmap v1974: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 141 op/s
Nov 29 08:06:40 compute-2 podman[267821]: 2025-11-29 08:06:40.961763952 +0000 UTC m=+0.054002349 container create d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:06:41 compute-2 systemd[1]: Started libpod-conmon-d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708.scope.
Nov 29 08:06:41 compute-2 podman[267821]: 2025-11-29 08:06:40.930911107 +0000 UTC m=+0.023149504 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:06:41 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:06:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af3ccdc49cafe186bad3c1801af75c1b8851635ff794d58ac365e09df1c2cf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:41 compute-2 podman[267821]: 2025-11-29 08:06:41.065251938 +0000 UTC m=+0.157490355 container init d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:41 compute-2 podman[267821]: 2025-11-29 08:06:41.073005102 +0000 UTC m=+0.165243499 container start d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:06:41 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [NOTICE]   (267841) : New worker (267851) forked
Nov 29 08:06:41 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [NOTICE]   (267841) : Loading success.
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.180 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 76c3d8fe-9739-4a69-9e68-39abbf4ff51e in datapath a370d423-b1ed-4c6b-95ee-1887ae6cfe0c unbound from our chassis
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.184 232432 DEBUG nova.network.neutron [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updated VIF entry in instance network info cache for port 3d75a706-b904-4825-8922-462a43bc5d07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.184 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a370d423-b1ed-4c6b-95ee-1887ae6cfe0c
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.185 232432 DEBUG nova.network.neutron [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [{"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.206 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[933a08a1-28d2-46fd-a731-9a9b2422a162]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.208 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa370d423-b1 in ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.210 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa370d423-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.211 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[320da264-b254-47da-ba04-a262f805c0c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4a09a93c-aae4-4efd-b475-8a8ca527ddcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.225 232432 DEBUG oslo_concurrency.lockutils [req-7e84894c-5c44-457f-ac0d-62a4cb192a88 req-45a9734f-2ecd-41c0-b59b-cee053c95e5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.231 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4f095b-2fd0-4c92-bac9-1755df220535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.268 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7a83a0-fc9d-481b-906c-8e311882d0f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.317 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2709e13c-d125-43c9-814d-5cf8f34df887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.322 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[af183023-9b53-42dd-9dd4-59cda6893523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 systemd-udevd[267777]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:41 compute-2 NetworkManager[48993]: <info>  [1764403601.3272] manager: (tapa370d423-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.359 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403601.35912, 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.360 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] VM Started (Lifecycle Event)
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.364 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7b9328-197e-418a-8b5f-243868a3881a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.368 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc64db-6331-40eb-9fec-46512e09ddab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.390 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.397 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403601.3593504, 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.397 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] VM Paused (Lifecycle Event)
Nov 29 08:06:41 compute-2 NetworkManager[48993]: <info>  [1764403601.4020] device (tapa370d423-b0): carrier: link connected
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.410 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[56bc4bc1-dc17-49d9-9eac-b10c8e751a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.417 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.421 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.432 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7cfe17-cb3c-447b-86d6-73b936e41ffa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa370d423-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:8f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656767, 'reachable_time': 24606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267907, 'error': None, 'target': 'ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.447 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.453 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[67610f30-14cb-4d0d-8119-4fdb17d932e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:8f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656767, 'tstamp': 656767}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267908, 'error': None, 'target': 'ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.475 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aff99e04-46a5-450a-9f59-27698cfa5f86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa370d423-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:8f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656767, 'reachable_time': 24606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267909, 'error': None, 'target': 'ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.519 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[38194250-2d75-48ce-946e-aa239e42ce03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.611 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96138eb5-8250-4329-8b5f-5f153ade5244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.613 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa370d423-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.613 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.614 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa370d423-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:41 compute-2 kernel: tapa370d423-b0: entered promiscuous mode
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.616 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:41 compute-2 NetworkManager[48993]: <info>  [1764403601.6180] manager: (tapa370d423-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.621 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa370d423-b0, col_values=(('external_ids', {'iface-id': '14811327-9ccb-4165-bde9-a96d648b8b5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:41 compute-2 ovn_controller[134375]: 2025-11-29T08:06:41Z|00348|binding|INFO|Releasing lport 14811327-9ccb-4165-bde9-a96d648b8b5b from this chassis (sb_readonly=0)
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.626 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a370d423-b1ed-4c6b-95ee-1887ae6cfe0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a370d423-b1ed-4c6b-95ee-1887ae6cfe0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.628 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7b887d90-b47f-4c91-a21f-37eabc691c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.629 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/a370d423-b1ed-4c6b-95ee-1887ae6cfe0c.pid.haproxy
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID a370d423-b1ed-4c6b-95ee-1887ae6cfe0c
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:06:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:41.631 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'env', 'PROCESS_TAG=haproxy-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a370d423-b1ed-4c6b-95ee-1887ae6cfe0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:06:41 compute-2 nova_compute[232428]: 2025-11-29 08:06:41.636 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2209813400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2209813400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:41 compute-2 ceph-mon[77138]: osdmap e259: 3 total, 3 up, 3 in
Nov 29 08:06:42 compute-2 podman[267940]: 2025-11-29 08:06:42.054608581 +0000 UTC m=+0.077175274 container create 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:06:42 compute-2 systemd[1]: Started libpod-conmon-89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5.scope.
Nov 29 08:06:42 compute-2 podman[267940]: 2025-11-29 08:06:42.019458152 +0000 UTC m=+0.042024825 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:06:42 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:06:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0b1bb436f5530a0343331c17441d70ffff560dc4a5d024959ff095746d507a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:42 compute-2 podman[267940]: 2025-11-29 08:06:42.165127438 +0000 UTC m=+0.187694131 container init 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:06:42 compute-2 podman[267940]: 2025-11-29 08:06:42.177835465 +0000 UTC m=+0.200402158 container start 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:06:42 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [NOTICE]   (267959) : New worker (267961) forked
Nov 29 08:06:42 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [NOTICE]   (267959) : Loading success.
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.240 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.242 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.249 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3d75a706-b904-4825-8922-462a43bc5d07 in datapath a02f84e4-c295-4679-89a6-56b8decf7949 unbound from our chassis
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.252 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a02f84e4-c295-4679-89a6-56b8decf7949
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.277 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2b26a8e2-c697-473f-b8a9-672d3846b400]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.322 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[86f6d842-fff2-4337-bfbd-7f46fee82139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.326 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8d517f2c-f17d-4743-a673-7bb8a3d717cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.380 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f633a99a-8021-4fb0-a74f-8148e5c4bbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.414 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8bb2ff-c417-46f3-b42a-7ef38b3c264c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f84e4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:34:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656666, 'reachable_time': 15275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267975, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.442 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1b17c8e3-0d5b-46c7-af82-0175477b392f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa02f84e4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656678, 'tstamp': 656678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267976, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapa02f84e4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656681, 'tstamp': 656681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267976, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.444 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f84e4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.446 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.449 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02f84e4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.449 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.449 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa02f84e4-c0, col_values=(('external_ids', {'iface-id': '53d7d9d6-e039-487f-8712-449552e0d9e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.449 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:42.449 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:42 compute-2 sudo[267977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:42 compute-2 sudo[267977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:42 compute-2 sudo[267977]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:42 compute-2 sudo[268002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:42 compute-2 sudo[268002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:42 compute-2 sudo[268002]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.812 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.813 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.813 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.813 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.813 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No event matching network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 in dict_keys([('network-vif-plugged', '76c3d8fe-9739-4a69-9e68-39abbf4ff51e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.814 232432 WARNING nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 for instance with vm_state building and task_state spawning.
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.814 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.814 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.814 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.815 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.815 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Processing event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.815 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.815 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.815 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.816 232432 DEBUG oslo_concurrency.lockutils [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.816 232432 DEBUG nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.816 232432 WARNING nova.compute.manager [req-760b537e-c450-4f75-9442-cb7cc4c719b5 req-e0790d1d-0a39-43f2-9101-58cd1dc9b616 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e for instance with vm_state building and task_state spawning.
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.817 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.820 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403602.820774, 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.821 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] VM Resumed (Lifecycle Event)
Nov 29 08:06:42 compute-2 sudo[268028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:42 compute-2 sudo[268028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.825 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:42 compute-2 sudo[268028]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.829 232432 INFO nova.virt.libvirt.driver [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance spawned successfully.
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.829 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:06:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:42.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:42 compute-2 podman[268026]: 2025-11-29 08:06:42.861455354 +0000 UTC m=+0.082173771 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.871 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.877 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.880 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.880 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.880 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.881 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.881 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.882 232432 DEBUG nova.virt.libvirt.driver [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:42 compute-2 sudo[268067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:06:42 compute-2 sudo[268067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:42 compute-2 nova_compute[232428]: 2025-11-29 08:06:42.963 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:43 compute-2 ceph-mon[77138]: pgmap v1976: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 31 KiB/s wr, 125 op/s
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.012 232432 DEBUG nova.compute.manager [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.013 232432 DEBUG oslo_concurrency.lockutils [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.013 232432 DEBUG oslo_concurrency.lockutils [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.014 232432 DEBUG oslo_concurrency.lockutils [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.015 232432 DEBUG nova.compute.manager [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.016 232432 WARNING nova.compute.manager [req-ff660f5f-25fa-4535-823f-3536d09b2ec7 req-2cc26f19-5732-49ae-ab04-da751aad8f30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 for instance with vm_state building and task_state spawning.
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.091 232432 INFO nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Took 23.04 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.092 232432 DEBUG nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.165 232432 INFO nova.compute.manager [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Took 24.21 seconds to build instance.
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.190 232432 DEBUG oslo_concurrency.lockutils [None req-dad00ddf-d924-4c5e-b1a8-57ad3bba97a1 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:43 compute-2 sudo[268067]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:43 compute-2 nova_compute[232428]: 2025-11-29 08:06:43.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/364908919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:06:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:45 compute-2 nova_compute[232428]: 2025-11-29 08:06:45.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:45 compute-2 ceph-mon[77138]: pgmap v1977: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 138 op/s
Nov 29 08:06:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1429887533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.203 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.204 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.204 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.204 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.205 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.206 232432 INFO nova.compute.manager [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Terminating instance
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.207 232432 DEBUG nova.compute.manager [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:06:46 compute-2 kernel: tap61d720ff-b4 (unregistering): left promiscuous mode
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.3177] device (tap61d720ff-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00349|binding|INFO|Releasing lport 61d720ff-b465-41b0-a524-639d10a26a68 from this chassis (sb_readonly=0)
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00350|binding|INFO|Setting lport 61d720ff-b465-41b0-a524-639d10a26a68 down in Southbound
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00351|binding|INFO|Removing iface tap61d720ff-b4 ovn-installed in OVS
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.333 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:54:2b 10.100.0.182'], port_security=['fa:16:3e:b1:54:2b 10.100.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.182/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f84e4-c295-4679-89a6-56b8decf7949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e28b5873-678d-422b-a542-c23a88433a5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=61d720ff-b465-41b0-a524-639d10a26a68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.335 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 61d720ff-b465-41b0-a524-639d10a26a68 in datapath a02f84e4-c295-4679-89a6-56b8decf7949 unbound from our chassis
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.337 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a02f84e4-c295-4679-89a6-56b8decf7949
Nov 29 08:06:46 compute-2 kernel: tap76c3d8fe-97 (unregistering): left promiscuous mode
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.3440] device (tap76c3d8fe-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00352|binding|INFO|Releasing lport 76c3d8fe-9739-4a69-9e68-39abbf4ff51e from this chassis (sb_readonly=0)
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00353|binding|INFO|Setting lport 76c3d8fe-9739-4a69-9e68-39abbf4ff51e down in Southbound
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00354|binding|INFO|Removing iface tap76c3d8fe-97 ovn-installed in OVS
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.357 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf42b1a-e2be-485d-aa93-b33a6d07c7cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.359 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.363 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:96:36 10.100.1.187'], port_security=['fa:16:3e:c1:96:36 10.100.1.187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.187/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25a6412b-a34e-41d2-b367-cc006c085f4a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=76c3d8fe-9739-4a69-9e68-39abbf4ff51e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:46 compute-2 kernel: tap3d75a706-b9 (unregistering): left promiscuous mode
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.3750] device (tap3d75a706-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.376 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00355|binding|INFO|Releasing lport 3d75a706-b904-4825-8922-462a43bc5d07 from this chassis (sb_readonly=0)
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00356|binding|INFO|Setting lport 3d75a706-b904-4825-8922-462a43bc5d07 down in Southbound
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.385 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_controller[134375]: 2025-11-29T08:06:46Z|00357|binding|INFO|Removing iface tap3d75a706-b9 ovn-installed in OVS
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.387 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.392 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:1b:1f 10.100.0.106'], port_security=['fa:16:3e:72:1b:1f 10.100.0.106'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.106/24', 'neutron:device_id': '91cc5e5b-8962-4843-bcd2-c3e0f4c0428d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f84e4-c295-4679-89a6-56b8decf7949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e28b5873-678d-422b-a542-c23a88433a5d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3d75a706-b904-4825-8922-462a43bc5d07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.396 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b067ceb7-8428-4732-a1a6-d4260aac7cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.399 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3395f7ec-1d5a-43bd-934c-5bb5eb380105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.425 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d839a517-b581-481d-8ed0-30fea3d931b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 29 08:06:46 compute-2 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Consumed 4.473s CPU time.
Nov 29 08:06:46 compute-2 systemd-machined[194747]: Machine qemu-35-instance-00000051 terminated.
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.440 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6818ae8c-e05c-4280-9616-979037654505]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f84e4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:34:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656666, 'reachable_time': 15275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268152, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.459 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[70275953-e4b6-45fa-ad89-38f29dd55542]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa02f84e4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656678, 'tstamp': 656678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268153, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapa02f84e4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656681, 'tstamp': 656681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268153, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.461 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f84e4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.462 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02f84e4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.472 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa02f84e4-c0, col_values=(('external_ids', {'iface-id': '53d7d9d6-e039-487f-8712-449552e0d9e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.472 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.473 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 76c3d8fe-9739-4a69-9e68-39abbf4ff51e in datapath a370d423-b1ed-4c6b-95ee-1887ae6cfe0c unbound from our chassis
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.474 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.475 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a66930d6-236a-4c1b-a30f-83698fb72455]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.475 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c namespace which is not needed anymore
Nov 29 08:06:46 compute-2 sudo[268173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:46 compute-2 sudo[268173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:46 compute-2 sudo[268173]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.6261] manager: (tap61d720ff-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [NOTICE]   (267959) : haproxy version is 2.8.14-c23fe91
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [NOTICE]   (267959) : path to executable is /usr/sbin/haproxy
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [WARNING]  (267959) : Exiting Master process...
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [WARNING]  (267959) : Exiting Master process...
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [ALERT]    (267959) : Current worker (267961) exited with code 143 (Terminated)
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c[267955]: [WARNING]  (267959) : All workers exited. Exiting... (0)
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.6349] manager: (tap76c3d8fe-97): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 08:06:46 compute-2 systemd[1]: libpod-89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5.scope: Deactivated successfully.
Nov 29 08:06:46 compute-2 NetworkManager[48993]: <info>  [1764403606.6422] manager: (tap3d75a706-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 08:06:46 compute-2 podman[268182]: 2025-11-29 08:06:46.643104266 +0000 UTC m=+0.068463856 container died 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:06:46 compute-2 sudo[268209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.656 232432 INFO nova.virt.libvirt.driver [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Instance destroyed successfully.
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.657 232432 DEBUG nova.objects.instance [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'resources' on Instance uuid 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:46 compute-2 sudo[268209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:46 compute-2 sudo[268209]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:46 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5-userdata-shm.mount: Deactivated successfully.
Nov 29 08:06:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-4d0b1bb436f5530a0343331c17441d70ffff560dc4a5d024959ff095746d507a-merged.mount: Deactivated successfully.
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.688 232432 DEBUG nova.virt.libvirt.vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:43Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.689 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "61d720ff-b465-41b0-a524-639d10a26a68", "address": "fa:16:3e:b1:54:2b", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61d720ff-b4", "ovs_interfaceid": "61d720ff-b465-41b0-a524-639d10a26a68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.689 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.690 232432 DEBUG os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.692 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61d720ff-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 podman[268182]: 2025-11-29 08:06:46.694700304 +0000 UTC m=+0.120059874 container cleanup 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.696 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.701 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.703 232432 INFO os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:54:2b,bridge_name='br-int',has_traffic_filtering=True,id=61d720ff-b465-41b0-a524-639d10a26a68,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61d720ff-b4')
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.704 232432 DEBUG nova.virt.libvirt.vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:43Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.705 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.706 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.706 232432 DEBUG os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.708 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.708 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c3d8fe-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.710 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:46 compute-2 systemd[1]: libpod-conmon-89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5.scope: Deactivated successfully.
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.714 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.716 232432 INFO os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:96:36,bridge_name='br-int',has_traffic_filtering=True,id=76c3d8fe-9739-4a69-9e68-39abbf4ff51e,network=Network(a370d423-b1ed-4c6b-95ee-1887ae6cfe0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76c3d8fe-97')
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.716 232432 DEBUG nova.virt.libvirt.vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-538600890',display_name='tempest-ServersTestMultiNic-server-538600890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-538600890',id=81,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-gwyuw54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:43Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=91cc5e5b-8962-4843-bcd2-c3e0f4c0428d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.717 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.717 232432 DEBUG nova.network.os_vif_util [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.718 232432 DEBUG os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.719 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.719 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d75a706-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.720 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.723 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.725 232432 INFO os_vif [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:1b:1f,bridge_name='br-int',has_traffic_filtering=True,id=3d75a706-b904-4825-8922-462a43bc5d07,network=Network(a02f84e4-c295-4679-89a6-56b8decf7949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d75a706-b9')
Nov 29 08:06:46 compute-2 podman[268285]: 2025-11-29 08:06:46.767158723 +0000 UTC m=+0.048346569 container remove 89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.776 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf48ba4-11f8-49ed-9cca-ec8f8e428f93]: (4, ('Sat Nov 29 08:06:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c (89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5)\n89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5\nSat Nov 29 08:06:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c (89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5)\n89e2e1a0f4a8ce8969ea15f32a404f711ddec414f1fe4916bfcf290adcf292d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.779 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b17b3f-36f6-43c8-89c7-271dcc1e3bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.780 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa370d423-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-2 kernel: tapa370d423-b0: left promiscuous mode
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.782 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.798 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[292ffcc4-b9cb-4038-9b3d-6c6d3f4d002c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.814 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f3b04a-ba74-43f0-8944-4deb85ababe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.816 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0be0a280-3b7f-42de-b481-bf3998c143cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.836 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[153176d2-53bd-454d-983a-9e57d2fa65d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656757, 'reachable_time': 31747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268319, 'error': None, 'target': 'ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:46.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.839 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a370d423-b1ed-4c6b-95ee-1887ae6cfe0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.839 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4f60cf02-4ca4-4ea0-919a-f55c94e63a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.840 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3d75a706-b904-4825-8922-462a43bc5d07 in datapath a02f84e4-c295-4679-89a6-56b8decf7949 unbound from our chassis
Nov 29 08:06:46 compute-2 systemd[1]: run-netns-ovnmeta\x2da370d423\x2db1ed\x2d4c6b\x2d95ee\x2d1887ae6cfe0c.mount: Deactivated successfully.
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.843 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a02f84e4-c295-4679-89a6-56b8decf7949, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.844 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f365014-b18c-4b21-93f8-cd4f2fe0727b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:46.844 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949 namespace which is not needed anymore
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.920 232432 DEBUG nova.compute.manager [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.921 232432 DEBUG oslo_concurrency.lockutils [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.922 232432 DEBUG oslo_concurrency.lockutils [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.922 232432 DEBUG oslo_concurrency.lockutils [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.923 232432 DEBUG nova.compute.manager [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-unplugged-3d75a706-b904-4825-8922-462a43bc5d07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.923 232432 DEBUG nova.compute.manager [req-2619f567-63f1-4f09-918b-221d1e171f72 req-ac42d714-8518-423d-8e88-7d1ec9a942a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-3d75a706-b904-4825-8922-462a43bc5d07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.948 232432 DEBUG nova.compute.manager [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.949 232432 DEBUG oslo_concurrency.lockutils [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.949 232432 DEBUG oslo_concurrency.lockutils [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.950 232432 DEBUG oslo_concurrency.lockutils [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.950 232432 DEBUG nova.compute.manager [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-unplugged-61d720ff-b465-41b0-a524-639d10a26a68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:46 compute-2 nova_compute[232428]: 2025-11-29 08:06:46.951 232432 DEBUG nova.compute.manager [req-e824af74-b22a-4c5c-a7b8-f6ae8bb784a3 req-a577de21-198c-4801-9bf3-19b902f01a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-61d720ff-b465-41b0-a524-639d10a26a68 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [NOTICE]   (267841) : haproxy version is 2.8.14-c23fe91
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [NOTICE]   (267841) : path to executable is /usr/sbin/haproxy
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [WARNING]  (267841) : Exiting Master process...
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [ALERT]    (267841) : Current worker (267851) exited with code 143 (Terminated)
Nov 29 08:06:46 compute-2 neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949[267837]: [WARNING]  (267841) : All workers exited. Exiting... (0)
Nov 29 08:06:46 compute-2 systemd[1]: libpod-d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708.scope: Deactivated successfully.
Nov 29 08:06:46 compute-2 podman[268338]: 2025-11-29 08:06:46.992246178 +0000 UTC m=+0.050437913 container died d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:06:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708-userdata-shm.mount: Deactivated successfully.
Nov 29 08:06:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-8af3ccdc49cafe186bad3c1801af75c1b8851635ff794d58ac365e09df1c2cf0-merged.mount: Deactivated successfully.
Nov 29 08:06:47 compute-2 podman[268338]: 2025-11-29 08:06:47.029975184 +0000 UTC m=+0.088166909 container cleanup d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:06:47 compute-2 systemd[1]: libpod-conmon-d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708.scope: Deactivated successfully.
Nov 29 08:06:47 compute-2 podman[268366]: 2025-11-29 08:06:47.130187788 +0000 UTC m=+0.067516146 container remove d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.140 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cbae4408-756a-46e1-9380-855361844d32]: (4, ('Sat Nov 29 08:06:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949 (d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708)\nd1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708\nSat Nov 29 08:06:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949 (d1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708)\nd1574e62aaef386c5093bae2af4759340ff61e1deb1f65f113a218230c594708\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.142 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a227aca1-88b4-427b-bf8d-b1bacaa2c332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.143 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f84e4-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-2 kernel: tapa02f84e4-c0: left promiscuous mode
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.160 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3891655d-65b4-43a4-aed3-4c3784ee9293]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.176 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4d17ea81-6729-4039-80b4-a92c443a568d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.177 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[765d5de9-ee9e-4e0e-9d93-f0891e2ac83d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.201 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50293498-0889-494b-b8d0-85a1e9d59421]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656658, 'reachable_time': 43216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268381, 'error': None, 'target': 'ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.204 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a02f84e4-c295-4679-89a6-56b8decf7949 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:06:47.204 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f362f27b-1706-40ce-b7fc-b41cc515e523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.253 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403592.2529974, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.254 232432 INFO nova.compute.manager [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Stopped (Lifecycle Event)
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.256 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.256 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.257 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.257 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.257 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:47 compute-2 ceph-mon[77138]: pgmap v1978: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 30 KiB/s wr, 177 op/s
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.346 232432 DEBUG nova.compute.manager [None req-2132bd33-c9de-4297-bc05-6a5d88063450 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-2 systemd[1]: run-netns-ovnmeta\x2da02f84e4\x2dc295\x2d4679\x2d89a6\x2d56b8decf7949.mount: Deactivated successfully.
Nov 29 08:06:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2151527856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.741 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.825 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.825 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.918 232432 INFO nova.virt.libvirt.driver [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Deleting instance files /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_del
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.920 232432 INFO nova.virt.libvirt.driver [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Deletion of /var/lib/nova/instances/91cc5e5b-8962-4843-bcd2-c3e0f4c0428d_del complete
Nov 29 08:06:47 compute-2 nova_compute[232428]: 2025-11-29 08:06:47.946 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.002 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.003 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4533MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.004 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.004 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.109 232432 INFO nova.compute.manager [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Took 1.90 seconds to destroy the instance on the hypervisor.
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.110 232432 DEBUG oslo.service.loopingcall [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.111 232432 DEBUG nova.compute.manager [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.112 232432 DEBUG nova.network.neutron [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.232 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.287 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2151527856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3859663900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1563873870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.762 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.770 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.788 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.820 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.821 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.822 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.823 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:06:48 compute-2 nova_compute[232428]: 2025-11-29 08:06:48.837 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:06:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:48.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.073 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.074 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.075 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.075 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.076 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.076 232432 WARNING nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-61d720ff-b465-41b0-a524-639d10a26a68 for instance with vm_state active and task_state deleting.
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.076 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.077 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.077 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.078 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.078 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-unplugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.079 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-unplugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.079 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.080 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.080 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.080 232432 DEBUG oslo_concurrency.lockutils [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.081 232432 DEBUG nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.081 232432 WARNING nova.compute.manager [req-ee626870-0d16-47c9-9d45-21da25e5680b req-3b81be12-874a-44e0-806a-5a39f836728b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-76c3d8fe-9739-4a69-9e68-39abbf4ff51e for instance with vm_state active and task_state deleting.
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.168 232432 DEBUG nova.compute.manager [req-5ea8742b-25f0-4fa1-b9e6-1ff092bccfcc req-2a5d12a7-d89d-4ff3-9299-ecbebca96334 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-deleted-61d720ff-b465-41b0-a524-639d10a26a68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.169 232432 INFO nova.compute.manager [req-5ea8742b-25f0-4fa1-b9e6-1ff092bccfcc req-2a5d12a7-d89d-4ff3-9299-ecbebca96334 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Neutron deleted interface 61d720ff-b465-41b0-a524-639d10a26a68; detaching it from the instance and deleting it from the info cache
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.169 232432 DEBUG nova.network.neutron [req-5ea8742b-25f0-4fa1-b9e6-1ff092bccfcc req-2a5d12a7-d89d-4ff3-9299-ecbebca96334 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [{"id": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "address": "fa:16:3e:c1:96:36", "network": {"id": "a370d423-b1ed-4c6b-95ee-1887ae6cfe0c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-4541786", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.187", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76c3d8fe-97", "ovs_interfaceid": "76c3d8fe-9739-4a69-9e68-39abbf4ff51e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d75a706-b904-4825-8922-462a43bc5d07", "address": "fa:16:3e:72:1b:1f", "network": {"id": "a02f84e4-c295-4679-89a6-56b8decf7949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-598987002", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.106", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d75a706-b9", "ovs_interfaceid": "3d75a706-b904-4825-8922-462a43bc5d07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.202 232432 DEBUG nova.compute.manager [req-5ea8742b-25f0-4fa1-b9e6-1ff092bccfcc req-2a5d12a7-d89d-4ff3-9299-ecbebca96334 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Detach interface failed, port_id=61d720ff-b465-41b0-a524-639d10a26a68, reason: Instance 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.256 232432 DEBUG nova.compute.manager [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.257 232432 DEBUG oslo_concurrency.lockutils [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.257 232432 DEBUG oslo_concurrency.lockutils [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.258 232432 DEBUG oslo_concurrency.lockutils [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.258 232432 DEBUG nova.compute.manager [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] No waiting events found dispatching network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-2 nova_compute[232428]: 2025-11-29 08:06:49.259 232432 WARNING nova.compute.manager [req-cc9026db-c98b-49f9-b01f-059971d9f09d req-bde22075-71c0-41f4-b995-5d589200a3b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received unexpected event network-vif-plugged-3d75a706-b904-4825-8922-462a43bc5d07 for instance with vm_state active and task_state deleting.
Nov 29 08:06:49 compute-2 ceph-mon[77138]: pgmap v1979: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 30 KiB/s wr, 177 op/s
Nov 29 08:06:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1563873870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2208697596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.138 232432 DEBUG nova.network.neutron [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.161 232432 INFO nova.compute.manager [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Took 2.05 seconds to deallocate network for instance.
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.231 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.232 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.475 232432 DEBUG oslo_concurrency.processutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.839 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1766016329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.938 232432 DEBUG oslo_concurrency.processutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.945 232432 DEBUG nova.compute.provider_tree [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:50 compute-2 nova_compute[232428]: 2025-11-29 08:06:50.972 232432 DEBUG nova.scheduler.client.report [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.027 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.106 232432 INFO nova.scheduler.client.report [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Deleted allocations for instance 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.187 232432 DEBUG oslo_concurrency.lockutils [None req-96f15ba4-665d-4685-8adb-e86242a88ac9 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "91cc5e5b-8962-4843-bcd2-c3e0f4c0428d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:51 compute-2 sudo[268452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:51 compute-2 sudo[268452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-2 sudo[268452]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-2 sudo[268477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:06:51 compute-2 sudo[268477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-2 sudo[268477]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.444 232432 DEBUG nova.compute.manager [req-20239408-a5d8-48f7-a8a9-4fa8ac5a3861 req-bb2a6006-a822-4ae1-858a-35c4959a73e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-deleted-76c3d8fe-9739-4a69-9e68-39abbf4ff51e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.444 232432 DEBUG nova.compute.manager [req-20239408-a5d8-48f7-a8a9-4fa8ac5a3861 req-bb2a6006-a822-4ae1-858a-35c4959a73e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Received event network-vif-deleted-3d75a706-b904-4825-8922-462a43bc5d07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:51 compute-2 nova_compute[232428]: 2025-11-29 08:06:51.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:51 compute-2 ceph-mon[77138]: pgmap v1980: 305 pgs: 305 active+clean; 173 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 31 KiB/s wr, 205 op/s
Nov 29 08:06:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:06:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:06:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1766016329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:52 compute-2 podman[268504]: 2025-11-29 08:06:52.704277895 +0000 UTC m=+0.108437020 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:52 compute-2 ceph-mon[77138]: pgmap v1981: 305 pgs: 305 active+clean; 152 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Nov 29 08:06:54 compute-2 nova_compute[232428]: 2025-11-29 08:06:54.578 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:55 compute-2 ceph-mon[77138]: pgmap v1982: 305 pgs: 305 active+clean; 158 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Nov 29 08:06:55 compute-2 nova_compute[232428]: 2025-11-29 08:06:55.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:56.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:56 compute-2 nova_compute[232428]: 2025-11-29 08:06:56.799 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:06:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:06:57 compute-2 ceph-mon[77138]: pgmap v1983: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.545 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.546 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.564 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:06:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:58.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.677 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.678 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.685 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.686 232432 INFO nova.compute.claims [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:06:58 compute-2 nova_compute[232428]: 2025-11-29 08:06:58.835 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:06:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:06:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:06:59 compute-2 ceph-mon[77138]: pgmap v1984: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 761 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:06:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/130860882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.278 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.285 232432 DEBUG nova.compute.provider_tree [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.303 232432 DEBUG nova.scheduler.client.report [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.325 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.326 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.373 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.373 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.400 232432 INFO nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.474 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.533 232432 INFO nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Booting with volume a58e04ca-51a8-451c-bfd9-cb5b176d93d9 at /dev/vda
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.764 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.766 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.777 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.777 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[47caed0a-1e51-4ee1-92f6-e07fcadfaf9d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.778 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.785 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.785 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff63df6-0919-459e-a9c3-1ef5e1173241]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.787 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.796 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.797 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[e771f620-bc94-4e5f-9f39-c3ce4cdbe567]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.800 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[2a077f9a-d8c8-494c-bbf1-cdb268dfa201]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.800 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.832 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.835 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.835 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.836 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.836 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:06:59 compute-2 nova_compute[232428]: 2025-11-29 08:06:59.836 232432 DEBUG nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating existing volume attachment record: 1395d964-f1da-4bec-a9a0-103e1e9e49fe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/130860882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:00 compute-2 nova_compute[232428]: 2025-11-29 08:07:00.528 232432 DEBUG nova.policy [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f5c9a929d4b248288b84a67f96ca500d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e61a0774e90545289bd82e4a71650bde', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:07:00 compute-2 nova_compute[232428]: 2025-11-29 08:07:00.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:00.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3248990344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:00.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:01 compute-2 ceph-mon[77138]: pgmap v1985: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 761 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 29 08:07:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3248990344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.376 232432 INFO nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Booting with volume 919ec135-16e6-4c87-bbf5-1726533a5182 at /dev/vdb
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.654 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403606.6538508, 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.654 232432 INFO nova.compute.manager [-] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] VM Stopped (Lifecycle Event)
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.666 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.667 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.680 232432 DEBUG nova.compute.manager [None req-b515f671-25be-4ce0-ab41-827deb522716 - - - - - -] [instance: 91cc5e5b-8962-4843-bcd2-c3e0f4c0428d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.679 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.679 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[f79701fe-1898-4f1c-8641-e5e8c8e3e06c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.682 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.693 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.694 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[d481ca96-cbe5-4c41-a305-4d576e0b3b4d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.696 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.705 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.705 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[0e126e26-c551-4859-a287-b042e80d7e8b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.707 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[811d5c0f-f7e5-4a34-be1c-f749ce1bb25e]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.707 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.741 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.743 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.744 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.744 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.744 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.745 232432 DEBUG nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating existing volume attachment record: 258684a8-57b8-4cfe-882e-0e47ea7bc6f7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:01 compute-2 nova_compute[232428]: 2025-11-29 08:07:01.800 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3542637573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4141054727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:02 compute-2 nova_compute[232428]: 2025-11-29 08:07:02.275 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully created port: 7a9fe153-f72b-4621-aee3-66b486bacae5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761497058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:02.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.005 232432 INFO nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Booting with volume 9a1b020a-829d-495d-928e-9016a22fe737 at /dev/vdc
Nov 29 08:07:03 compute-2 ceph-mon[77138]: pgmap v1986: 305 pgs: 305 active+clean; 101 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 29 08:07:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3761497058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.193 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.195 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.219 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.219 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[c8753d0f-8d94-4db0-b05f-527152d226d0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.222 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.235 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.236 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[f027ad52-224d-4f69-98d5-df67f4128ccf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.238 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.255 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.256 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb7af7f-1999-4c7a-88f9-acaf264e1255]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.257 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e70ab3-3861-4f58-9a39-4b1313c9571f]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.258 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.311 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "nvme version" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:03.312 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:03.314 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:03.314 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.315 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.315 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.316 232432 DEBUG os_brick.initiator.connectors.lightos [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.316 232432 DEBUG os_brick.utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] <== get_connector_properties: return (122ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:07:03 compute-2 nova_compute[232428]: 2025-11-29 08:07:03.317 232432 DEBUG nova.virt.block_device [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating existing volume attachment record: 042726ed-a817-43fa-92f5-9f21f384a7e0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.004 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully created port: 0ff1cfac-1292-47db-befc-e4a968bd8d13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/289195649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:04.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.711 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.712 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.713 232432 INFO nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Creating image(s)
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.713 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.713 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Ensure instance console log exists: /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.714 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.714 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:04 compute-2 nova_compute[232428]: 2025-11-29 08:07:04.714 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:05 compute-2 ceph-mon[77138]: pgmap v1987: 305 pgs: 305 active+clean; 106 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 1.1 MiB/s wr, 65 op/s
Nov 29 08:07:05 compute-2 nova_compute[232428]: 2025-11-29 08:07:05.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:05 compute-2 nova_compute[232428]: 2025-11-29 08:07:05.806 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully created port: d82f4054-c2c2-4966-9ef6-c7ac320cd065 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:06 compute-2 sudo[268581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:06 compute-2 sudo[268581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:06 compute-2 sudo[268581]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:06 compute-2 nova_compute[232428]: 2025-11-29 08:07:06.819 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:06 compute-2 sudo[268606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:06 compute-2 sudo[268606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:06 compute-2 sudo[268606]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:06.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:07 compute-2 nova_compute[232428]: 2025-11-29 08:07:07.038 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully created port: 9bb68ec9-77ee-431c-b89c-3384da9fa365 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:07 compute-2 ceph-mon[77138]: pgmap v1988: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Nov 29 08:07:07 compute-2 nova_compute[232428]: 2025-11-29 08:07:07.974 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully created port: ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:08 compute-2 podman[268634]: 2025-11-29 08:07:08.670780775 +0000 UTC m=+0.067240656 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:07:08 compute-2 sshd-session[268632]: Invalid user sol from 45.148.10.240 port 57296
Nov 29 08:07:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:08.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:08 compute-2 sshd-session[268632]: Connection closed by invalid user sol 45.148.10.240 port 57296 [preauth]
Nov 29 08:07:09 compute-2 ceph-mon[77138]: pgmap v1989: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 08:07:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4227337823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.456 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: 7a9fe153-f72b-4621-aee3-66b486bacae5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.666 232432 DEBUG nova.compute.manager [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.666 232432 DEBUG nova.compute.manager [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-7a9fe153-f72b-4621-aee3-66b486bacae5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.667 232432 DEBUG oslo_concurrency.lockutils [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.668 232432 DEBUG oslo_concurrency.lockutils [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:09 compute-2 nova_compute[232428]: 2025-11-29 08:07:09.668 232432 DEBUG nova.network.neutron [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port 7a9fe153-f72b-4621-aee3-66b486bacae5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:10 compute-2 nova_compute[232428]: 2025-11-29 08:07:10.047 232432 DEBUG nova.network.neutron [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:10 compute-2 nova_compute[232428]: 2025-11-29 08:07:10.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:10 compute-2 nova_compute[232428]: 2025-11-29 08:07:10.725 232432 DEBUG nova.network.neutron [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:10 compute-2 nova_compute[232428]: 2025-11-29 08:07:10.738 232432 DEBUG oslo_concurrency.lockutils [req-a9c31ea1-b9ef-49a4-ae1e-3a2661367be8 req-623ade30-913f-4268-a73c-b3816bcb969b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.274 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: b15948c7-35a3-4201-bceb-593c2b1c8704 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.775 232432 DEBUG nova.compute.manager [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-b15948c7-35a3-4201-bceb-593c2b1c8704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.776 232432 DEBUG nova.compute.manager [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-b15948c7-35a3-4201-bceb-593c2b1c8704. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.776 232432 DEBUG oslo_concurrency.lockutils [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.777 232432 DEBUG oslo_concurrency.lockutils [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.777 232432 DEBUG nova.network.neutron [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port b15948c7-35a3-4201-bceb-593c2b1c8704 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:11 compute-2 ceph-mon[77138]: pgmap v1990: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 771 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 29 08:07:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/858972445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:11 compute-2 nova_compute[232428]: 2025-11-29 08:07:11.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:12 compute-2 nova_compute[232428]: 2025-11-29 08:07:12.177 232432 DEBUG nova.network.neutron [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:12.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:13 compute-2 podman[268657]: 2025-11-29 08:07:13.67722144 +0000 UTC m=+0.072325255 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:07:13 compute-2 ceph-mon[77138]: pgmap v1991: 305 pgs: 305 active+clean; 135 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 171 op/s
Nov 29 08:07:13 compute-2 nova_compute[232428]: 2025-11-29 08:07:13.741 232432 DEBUG nova.network.neutron [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:14 compute-2 nova_compute[232428]: 2025-11-29 08:07:14.191 232432 DEBUG oslo_concurrency.lockutils [req-2e023be7-697a-4af3-bd53-bfe2435f0f5a req-8e357df4-d555-46ad-8956-3f76e0d8d1f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:14.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1548749727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:14.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.593 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:15 compute-2 ceph-mon[77138]: pgmap v1992: 305 pgs: 305 active+clean; 128 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.703 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: d0984314-851d-451b-9277-1f0fc38d3c41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.823 232432 DEBUG nova.compute.manager [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-d0984314-851d-451b-9277-1f0fc38d3c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.824 232432 DEBUG nova.compute.manager [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-d0984314-851d-451b-9277-1f0fc38d3c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.824 232432 DEBUG oslo_concurrency.lockutils [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.824 232432 DEBUG oslo_concurrency.lockutils [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:15 compute-2 nova_compute[232428]: 2025-11-29 08:07:15.824 232432 DEBUG nova.network.neutron [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port d0984314-851d-451b-9277-1f0fc38d3c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:16 compute-2 nova_compute[232428]: 2025-11-29 08:07:16.167 232432 DEBUG nova.network.neutron [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:16 compute-2 nova_compute[232428]: 2025-11-29 08:07:16.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:16 compute-2 nova_compute[232428]: 2025-11-29 08:07:16.982 232432 DEBUG nova.network.neutron [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:17 compute-2 nova_compute[232428]: 2025-11-29 08:07:17.015 232432 DEBUG oslo_concurrency.lockutils [req-da039a9e-a877-443e-b37c-91fad1ea73b0 req-deb01027-b0ca-4087-8524-5e35cf8d8408 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:17 compute-2 ceph-mon[77138]: pgmap v1993: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Nov 29 08:07:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:18 compute-2 nova_compute[232428]: 2025-11-29 08:07:18.997 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: 0ff1cfac-1292-47db-befc-e4a968bd8d13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:19 compute-2 ceph-mon[77138]: pgmap v1994: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 29 08:07:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.380 232432 DEBUG nova.compute.manager [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.381 232432 DEBUG nova.compute.manager [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-0ff1cfac-1292-47db-befc-e4a968bd8d13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.382 232432 DEBUG oslo_concurrency.lockutils [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.383 232432 DEBUG oslo_concurrency.lockutils [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.383 232432 DEBUG nova.network.neutron [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port 0ff1cfac-1292-47db-befc-e4a968bd8d13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.596 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:20 compute-2 nova_compute[232428]: 2025-11-29 08:07:20.702 232432 DEBUG nova.network.neutron [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:21 compute-2 ceph-mon[77138]: pgmap v1995: 305 pgs: 305 active+clean; 149 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 29 08:07:21 compute-2 nova_compute[232428]: 2025-11-29 08:07:21.131 232432 DEBUG nova.network.neutron [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:21 compute-2 nova_compute[232428]: 2025-11-29 08:07:21.154 232432 DEBUG oslo_concurrency.lockutils [req-02d9e4e6-a1d0-4b8b-94a3-b547af2735ca req-8e1839c9-46f6-42ab-9e2b-76511c89465f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:21 compute-2 nova_compute[232428]: 2025-11-29 08:07:21.696 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: d82f4054-c2c2-4966-9ef6-c7ac320cd065 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:21 compute-2 nova_compute[232428]: 2025-11-29 08:07:21.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.509 232432 DEBUG nova.compute.manager [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.510 232432 DEBUG nova.compute.manager [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-d82f4054-c2c2-4966-9ef6-c7ac320cd065. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.510 232432 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.510 232432 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.511 232432 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port d82f4054-c2c2-4966-9ef6-c7ac320cd065 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:22 compute-2 nova_compute[232428]: 2025-11-29 08:07:22.727 232432 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:22.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:23 compute-2 ceph-mon[77138]: pgmap v1996: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 121 op/s
Nov 29 08:07:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1407093463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1425703125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:23 compute-2 nova_compute[232428]: 2025-11-29 08:07:23.163 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: 9bb68ec9-77ee-431c-b89c-3384da9fa365 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:23 compute-2 nova_compute[232428]: 2025-11-29 08:07:23.169 232432 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:23 compute-2 nova_compute[232428]: 2025-11-29 08:07:23.196 232432 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:23 compute-2 podman[268682]: 2025-11-29 08:07:23.719174256 +0000 UTC m=+0.119358712 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:07:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:24.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:24 compute-2 nova_compute[232428]: 2025-11-29 08:07:24.974 232432 DEBUG nova.compute.manager [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:24 compute-2 nova_compute[232428]: 2025-11-29 08:07:24.975 232432 DEBUG nova.compute.manager [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-9bb68ec9-77ee-431c-b89c-3384da9fa365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:24 compute-2 nova_compute[232428]: 2025-11-29 08:07:24.975 232432 DEBUG oslo_concurrency.lockutils [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:24 compute-2 nova_compute[232428]: 2025-11-29 08:07:24.976 232432 DEBUG oslo_concurrency.lockutils [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:24 compute-2 nova_compute[232428]: 2025-11-29 08:07:24.976 232432 DEBUG nova.network.neutron [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port 9bb68ec9-77ee-431c-b89c-3384da9fa365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.219 232432 DEBUG nova.network.neutron [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.242 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Successfully updated port: ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.257 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.409 232432 DEBUG nova.compute.manager [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.410 232432 DEBUG nova.compute.manager [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.410 232432 DEBUG oslo_concurrency.lockutils [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.704 232432 DEBUG nova.network.neutron [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.718 232432 DEBUG oslo_concurrency.lockutils [req-2e0b611c-415b-4c58-bc9b-081e31f5ab37 req-a44fd0f5-5011-46d9-b5f4-b2796b381c7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.719 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.719 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:07:25 compute-2 nova_compute[232428]: 2025-11-29 08:07:25.886 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:26 compute-2 ceph-mon[77138]: pgmap v1997: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 39 op/s
Nov 29 08:07:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3806903345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/243536570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:26.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:26 compute-2 sudo[268710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:26 compute-2 sudo[268710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:26 compute-2 sudo[268710]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:26 compute-2 nova_compute[232428]: 2025-11-29 08:07:26.984 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:27 compute-2 sudo[268735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:27 compute-2 sudo[268735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:27 compute-2 sudo[268735]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:27 compute-2 ceph-mon[77138]: pgmap v1998: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 41 op/s
Nov 29 08:07:27 compute-2 nova_compute[232428]: 2025-11-29 08:07:27.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:27.980 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:27.982 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:07:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:28.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:28.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3621953641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:28 compute-2 ceph-mon[77138]: pgmap v1999: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 08:07:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3621953641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:30 compute-2 nova_compute[232428]: 2025-11-29 08:07:30.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:30.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:30.984 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:31 compute-2 ceph-mon[77138]: pgmap v2000: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 08:07:31 compute-2 nova_compute[232428]: 2025-11-29 08:07:31.986 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-2 ovn_controller[134375]: 2025-11-29T08:07:32Z|00358|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 08:07:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1174842904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:32.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:32.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.000 232432 DEBUG nova.network.neutron [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.022 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.022 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance network_info: |[{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.023 232432 DEBUG oslo_concurrency.lockutils [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.023 232432 DEBUG nova.network.neutron [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.033 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Start _get_guest_xml network_info=[{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_di
Nov 29 08:07:34 compute-2 nova_compute[232428]: sk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a58e04ca-51a8-451c-bfd9-cb5b176d93d9', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a58e04ca-51a8-451c-bfd9-cb5b176d93d9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'attached_at': '', 'detached_at': '', 'volume_id': 'a58e04ca-51a8-451c-bfd9-cb5b176d93d9', 'serial': 'a58e04ca-51a8-451c-bfd9-cb5b176d93d9'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '1395d964-f1da-4bec-a9a0-103e1e9e49fe', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-919ec135-16e6-4c87-bbf5-1726533a5182', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '919ec135-16e6-4c87-bbf5-1726533a5182', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'attached_at': '', 'detached_at': '', 'volume_id': 
'919ec135-16e6-4c87-bbf5-1726533a5182', 'serial': '919ec135-16e6-4c87-bbf5-1726533a5182'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 1, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': '258684a8-57b8-4cfe-882e-0e47ea7bc6f7', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9a1b020a-829d-495d-928e-9016a22fe737', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9a1b020a-829d-495d-928e-9016a22fe737', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'attached_at': '', 'detached_at': '', 'volume_id': '9a1b020a-829d-495d-928e-9016a22fe737', 'serial': '9a1b020a-829d-495d-928e-9016a22fe737'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 2, 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'delete_on_termination': False, 'attachment_id': '042726ed-a817-43fa-92f5-9f21f384a7e0', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.037 232432 WARNING nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.044 232432 DEBUG nova.virt.libvirt.host [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.045 232432 DEBUG nova.virt.libvirt.host [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.053 232432 DEBUG nova.virt.libvirt.host [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.053 232432 DEBUG nova.virt.libvirt.host [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.054 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.055 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.055 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.056 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.056 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.056 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.056 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.057 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.057 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.057 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.058 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.058 232432 DEBUG nova.virt.hardware [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.090 232432 DEBUG nova.storage.rbd_utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] rbd image 9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.093 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:34 compute-2 rsyslogd[1008]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 08:07:34.033 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 08:07:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3639936185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:34.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:34 compute-2 ceph-mon[77138]: pgmap v2001: 305 pgs: 305 active+clean; 146 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 156 op/s
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.879 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.785s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:34.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.986 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',im
age_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.987 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.988 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.988 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',im
age_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.989 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.989 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.989 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',im
age_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.990 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.990 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.991 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.991 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.991 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.992 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.992 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.992 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.993 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.993 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.993 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.994 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.994 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.995 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:34 compute-2 nova_compute[232428]: 2025-11-29 08:07:34.996 232432 DEBUG nova.objects.instance [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f70b4d6-e1a7-4709-8816-a19fb6569d7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.039 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <uuid>9f70b4d6-e1a7-4709-8816-a19fb6569d7c</uuid>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <name>instance-00000053</name>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:name>tempest-device-tagging-server-1544099134</nova:name>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:07:34</nova:creationTime>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:user uuid="f5c9a929d4b248288b84a67f96ca500d">tempest-TaggedBootDevicesTest_v242-307299721-project-member</nova:user>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:project uuid="e61a0774e90545289bd82e4a71650bde">tempest-TaggedBootDevicesTest_v242-307299721</nova:project>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="7a9fe153-f72b-4621-aee3-66b486bacae5">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="b15948c7-35a3-4201-bceb-593c2b1c8704">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.112" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="d0984314-851d-451b-9277-1f0fc38d3c41">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.68" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="0ff1cfac-1292-47db-befc-e4a968bd8d13">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.54" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="d82f4054-c2c2-4966-9ef6-c7ac320cd065">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.1.1.85" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="9bb68ec9-77ee-431c-b89c-3384da9fa365">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <nova:port uuid="ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb">
Nov 29 08:07:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <system>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="serial">9f70b4d6-e1a7-4709-8816-a19fb6569d7c</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="uuid">9f70b4d6-e1a7-4709-8816-a19fb6569d7c</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </system>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <os>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </os>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <features>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </features>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-a58e04ca-51a8-451c-bfd9-cb5b176d93d9">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <serial>a58e04ca-51a8-451c-bfd9-cb5b176d93d9</serial>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-919ec135-16e6-4c87-bbf5-1726533a5182">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <serial>919ec135-16e6-4c87-bbf5-1726533a5182</serial>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-9a1b020a-829d-495d-928e-9016a22fe737">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:07:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="vdc" bus="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <serial>9a1b020a-829d-495d-928e-9016a22fe737</serial>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:2b:13:82"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tap7a9fe153-f7"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:55:95:2d"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tapb15948c7-35"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:09:62:71"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tapd0984314-85"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:cb:e8:12"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tap0ff1cfac-12"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:45:6a:16"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tapd82f4054-c2"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:69:58:a8"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="tap9bb68ec9-77"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:1c:9b:c3"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <target dev="taped4379ad-b1"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/console.log" append="off"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <video>
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </video>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:07:35 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:07:35 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:07:35 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:07:35 compute-2 nova_compute[232428]: </domain>
Nov 29 08:07:35 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.040 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.040 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.040 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.040 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.041 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.041 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.041 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.041 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.042 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.042 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.042 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.042 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.042 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.043 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.043 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.043 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.043 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.044 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.044 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.044 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.044 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.044 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.045 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.045 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.045 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Preparing to wait for external event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.045 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.045 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.046 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.046 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.047 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.047 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.048 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.049 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.050 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.053 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a9fe153-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.054 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a9fe153-f7, col_values=(('external_ids', {'iface-id': '7a9fe153-f72b-4621-aee3-66b486bacae5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:13:82', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.055 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.0562] manager: (tap7a9fe153-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.058 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.062 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.063 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.064 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.064 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.064 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.065 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.065 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.065 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.065 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.067 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb15948c7-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.067 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb15948c7-35, col_values=(('external_ids', {'iface-id': 'b15948c7-35a3-4201-bceb-593c2b1c8704', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:95:2d', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.069 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.0697] manager: (tapb15948c7-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.073 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.075 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.076 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.077 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.077 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.077 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.078 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.078 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.078 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.078 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.080 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.080 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0984314-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.080 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0984314-85, col_values=(('external_ids', {'iface-id': 'd0984314-851d-451b-9277-1f0fc38d3c41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:62:71', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.081 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.0820] manager: (tapd0984314-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.082 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.092 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.093 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.094 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.094 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.095 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.095 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.095 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.096 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.096 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.098 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.098 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ff1cfac-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.099 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ff1cfac-12, col_values=(('external_ids', {'iface-id': '0ff1cfac-1292-47db-befc-e4a968bd8d13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:e8:12', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.100 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.1013] manager: (tap0ff1cfac-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.102 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.114 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.116 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.116 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.117 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.117 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.118 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.118 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.119 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.121 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd82f4054-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.121 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd82f4054-c2, col_values=(('external_ids', {'iface-id': 'd82f4054-c2c2-4966-9ef6-c7ac320cd065', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:6a:16', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.1239] manager: (tapd82f4054-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.125 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.139 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.140 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.141 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.141 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.142 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.142 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.143 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.143 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.145 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bb68ec9-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.146 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9bb68ec9-77, col_values=(('external_ids', {'iface-id': '9bb68ec9-77ee-431c-b89c-3384da9fa365', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:58:a8', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.147 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.1481] manager: (tap9bb68ec9-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.149 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.168 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.169 232432 DEBUG nova.virt.libvirt.vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.169 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.170 232432 DEBUG nova.network.os_vif_util [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.170 232432 DEBUG os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.171 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.171 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.171 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.173 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.173 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped4379ad-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.174 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped4379ad-b1, col_values=(('external_ids', {'iface-id': 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:9b:c3', 'vm-uuid': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:35 compute-2 NetworkManager[48993]: <info>  [1764403655.1761] manager: (taped4379ad-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.177 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.196 232432 INFO os_vif [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1')
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.450 232432 DEBUG nova.network.neutron [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updated VIF entry in instance network info cache for port ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.450 232432 DEBUG nova.network.neutron [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.601 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.654 232432 DEBUG oslo_concurrency.lockutils [req-7b257f83-c9e2-4f70-aab5-7e69bf2e9a2d req-86d9a370-f964-4d26-9d6c-2bc8d0774713 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.658 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.659 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.659 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] No VIF found with MAC fa:16:3e:2b:13:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.659 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] No VIF found with MAC fa:16:3e:45:6a:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.660 232432 INFO nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Using config drive
Nov 29 08:07:35 compute-2 nova_compute[232428]: 2025-11-29 08:07:35.689 232432 DEBUG nova.storage.rbd_utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] rbd image 9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:35 compute-2 ceph-mon[77138]: pgmap v2002: 305 pgs: 305 active+clean; 134 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 26 KiB/s wr, 167 op/s
Nov 29 08:07:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3639936185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/49730514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1009189897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:36 compute-2 nova_compute[232428]: 2025-11-29 08:07:36.106 232432 INFO nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Creating config drive at /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config
Nov 29 08:07:36 compute-2 nova_compute[232428]: 2025-11-29 08:07:36.113 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvwv5zcz4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:36 compute-2 nova_compute[232428]: 2025-11-29 08:07:36.258 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvwv5zcz4" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:36 compute-2 nova_compute[232428]: 2025-11-29 08:07:36.292 232432 DEBUG nova.storage.rbd_utils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] rbd image 9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:36 compute-2 nova_compute[232428]: 2025-11-29 08:07:36.296 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config 9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:36.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.040 232432 DEBUG oslo_concurrency.processutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config 9f70b4d6-e1a7-4709-8816-a19fb6569d7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.041 232432 INFO nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Deleting local config drive /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c/disk.config because it was imported into RBD.
Nov 29 08:07:37 compute-2 kernel: tap7a9fe153-f7: entered promiscuous mode
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1133] manager: (tap7a9fe153-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1305] manager: (tapb15948c7-35): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00359|binding|INFO|Claiming lport 7a9fe153-f72b-4621-aee3-66b486bacae5 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00360|binding|INFO|7a9fe153-f72b-4621-aee3-66b486bacae5: Claiming fa:16:3e:2b:13:82 10.100.0.5
Nov 29 08:07:37 compute-2 kernel: tapb15948c7-35: entered promiscuous mode
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1456] manager: (tapd0984314-85): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.152 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:13:82 10.100.0.5'], port_security=['fa:16:3e:2b:13:82 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69307759-c430-40fb-9425-950cfd24a6e4, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7a9fe153-f72b-4621-aee3-66b486bacae5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 kernel: tapd0984314-85: entered promiscuous mode
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.153 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7a9fe153-f72b-4621-aee3-66b486bacae5 in datapath e4f17807-9d16-4b74-9bf3-d79f60746fbd bound to our chassis
Nov 29 08:07:37 compute-2 systemd-udevd[268918]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:37 compute-2 systemd-udevd[268917]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:37 compute-2 systemd-udevd[268919]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.156 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4f17807-9d16-4b74-9bf3-d79f60746fbd
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.172 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e82a7f8-3a9c-4bea-80a1-ad47472eddd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1736] device (tapb15948c7-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1746] device (tap7a9fe153-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.174 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4f17807-91 in ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1759] device (tapd0984314-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.176 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4f17807-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.176 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1da2474-a00a-499d-b58d-b804d6ba1448]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1777] manager: (tap0ff1cfac-12): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.177 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a71f8b18-5b1b-4a63-bc02-06367f76a62f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1781] device (tapb15948c7-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1786] device (tap7a9fe153-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1790] device (tapd0984314-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.194 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[48c20d40-969d-4f34-a606-73210fec92ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.1984] manager: (tapd82f4054-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 08:07:37 compute-2 systemd-udevd[268927]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2174] manager: (tap9bb68ec9-77): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00361|binding|INFO|Claiming lport b15948c7-35a3-4201-bceb-593c2b1c8704 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00362|binding|INFO|b15948c7-35a3-4201-bceb-593c2b1c8704: Claiming fa:16:3e:55:95:2d 10.1.1.112
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00363|binding|INFO|Claiming lport d0984314-851d-451b-9277-1f0fc38d3c41 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00364|binding|INFO|d0984314-851d-451b-9277-1f0fc38d3c41: Claiming fa:16:3e:09:62:71 10.1.1.68
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.218 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 kernel: tap0ff1cfac-12: entered promiscuous mode
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2222] device (tap0ff1cfac-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2232] device (tap0ff1cfac-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.226 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:95:2d 10.1.1.112'], port_security=['fa:16:3e:55:95:2d 10.1.1.112'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-2054854358', 'neutron:cidrs': '10.1.1.112/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-2054854358', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd283247-d149-4bbd-bedb-6a3a3aca31ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=b15948c7-35a3-4201-bceb-593c2b1c8704) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2271] device (tapd82f4054-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 kernel: tapd82f4054-c2: entered promiscuous mode
Nov 29 08:07:37 compute-2 kernel: tap9bb68ec9-77: entered promiscuous mode
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.228 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:62:71 10.1.1.68'], port_security=['fa:16:3e:09:62:71 10.1.1.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1258835387', 'neutron:cidrs': '10.1.1.68/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1258835387', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd283247-d149-4bbd-bedb-6a3a3aca31ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d0984314-851d-451b-9277-1f0fc38d3c41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2286] device (tapd82f4054-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.225 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[757a5129-a34a-4b78-9aae-7705caf7911d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2328] device (tap9bb68ec9-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2342] device (tap9bb68ec9-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2380] manager: (taped4379ad-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 08:07:37 compute-2 kernel: taped4379ad-b1: entered promiscuous mode
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00365|binding|INFO|Claiming lport d82f4054-c2c2-4966-9ef6-c7ac320cd065 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00366|binding|INFO|d82f4054-c2c2-4966-9ef6-c7ac320cd065: Claiming fa:16:3e:45:6a:16 10.1.1.85
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00367|binding|INFO|Claiming lport 0ff1cfac-1292-47db-befc-e4a968bd8d13 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00368|binding|INFO|0ff1cfac-1292-47db-befc-e4a968bd8d13: Claiming fa:16:3e:cb:e8:12 10.1.1.54
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00369|binding|INFO|Claiming lport 9bb68ec9-77ee-431c-b89c-3384da9fa365 for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00370|binding|INFO|9bb68ec9-77ee-431c-b89c-3384da9fa365: Claiming fa:16:3e:69:58:a8 10.2.2.100
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00371|if_status|INFO|Dropped 4 log messages in last 211 seconds (most recently, 211 seconds ago) due to excessive rate
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00372|if_status|INFO|Not updating pb chassis for ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb now as sb is readonly
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00373|binding|INFO|Setting lport 7a9fe153-f72b-4621-aee3-66b486bacae5 ovn-installed in OVS
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.251 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00374|binding|INFO|Claiming lport ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb for this chassis.
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00375|binding|INFO|ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb: Claiming fa:16:3e:1c:9b:c3 10.2.2.200
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.253 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:58:a8 10.2.2.100'], port_security=['fa:16:3e:69:58:a8 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-375dc49d-ec99-4657-9ba6-74087890a298', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfd2343a-4417-4955-ae4f-9d95292d94dd, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9bb68ec9-77ee-431c-b89c-3384da9fa365) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2556] device (taped4379ad-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.255 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:e8:12 10.1.1.54'], port_security=['fa:16:3e:cb:e8:12 10.1.1.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.54/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0ff1cfac-1292-47db-befc-e4a968bd8d13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2562] device (taped4379ad-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00376|binding|INFO|Setting lport 7a9fe153-f72b-4621-aee3-66b486bacae5 up in Southbound
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.257 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:6a:16 10.1.1.85'], port_security=['fa:16:3e:45:6a:16 10.1.1.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.85/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d82f4054-c2c2-4966-9ef6-c7ac320cd065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.262 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:9b:c3 10.2.2.200'], port_security=['fa:16:3e:1c:9b:c3 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-375dc49d-ec99-4657-9ba6-74087890a298', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfd2343a-4417-4955-ae4f-9d95292d94dd, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.268 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[74cfd209-3044-41f9-a024-a33fc7b3d630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.2781] manager: (tape4f17807-90): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.277 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ff1dbe-42de-488f-be42-18b1f72b4df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 systemd-machined[194747]: New machine qemu-36-instance-00000053.
Nov 29 08:07:37 compute-2 systemd[1]: Started Virtual Machine qemu-36-instance-00000053.
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.314 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d97bc8b6-83db-458c-8db9-2c7a7d7a9fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.317 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[207b27d6-dddb-417a-9389-ea4c4114aa78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.3462] device (tape4f17807-90): carrier: link connected
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.354 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1067a30c-7524-4413-ba01-fdce3c247bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00377|binding|INFO|Setting lport b15948c7-35a3-4201-bceb-593c2b1c8704 ovn-installed in OVS
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00378|binding|INFO|Setting lport b15948c7-35a3-4201-bceb-593c2b1c8704 up in Southbound
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00379|binding|INFO|Setting lport d0984314-851d-451b-9277-1f0fc38d3c41 ovn-installed in OVS
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00380|binding|INFO|Setting lport d0984314-851d-451b-9277-1f0fc38d3c41 up in Southbound
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00381|binding|INFO|Setting lport 9bb68ec9-77ee-431c-b89c-3384da9fa365 ovn-installed in OVS
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00382|binding|INFO|Setting lport 9bb68ec9-77ee-431c-b89c-3384da9fa365 up in Southbound
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00383|binding|INFO|Setting lport 0ff1cfac-1292-47db-befc-e4a968bd8d13 ovn-installed in OVS
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00384|binding|INFO|Setting lport 0ff1cfac-1292-47db-befc-e4a968bd8d13 up in Southbound
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00385|binding|INFO|Setting lport d82f4054-c2c2-4966-9ef6-c7ac320cd065 ovn-installed in OVS
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00386|binding|INFO|Setting lport d82f4054-c2c2-4966-9ef6-c7ac320cd065 up in Southbound
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.356 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00387|binding|INFO|Setting lport ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb ovn-installed in OVS
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.375 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.375 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2a518f-f6f1-41a5-bf5c-885f26193fa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4f17807-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:96:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662361, 'reachable_time': 24616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268974, 'error': None, 'target': 'ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00388|binding|INFO|Setting lport ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb up in Southbound
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.392 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6a8db8-fa25-4ab7-b105-4afaf25741ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:9600'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662361, 'tstamp': 662361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268976, 'error': None, 'target': 'ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.411 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a5acfeb1-b316-45ba-bcea-2e54130e92d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4f17807-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:96:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662361, 'reachable_time': 24616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268977, 'error': None, 'target': 'ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.446 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[478667aa-eb3c-4bf8-8228-fea74f694a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.507 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1c18a415-9375-48b3-bf63-5b56a4c509bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.509 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4f17807-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.509 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.509 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4f17807-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.511 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 NetworkManager[48993]: <info>  [1764403657.5125] manager: (tape4f17807-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 08:07:37 compute-2 kernel: tape4f17807-90: entered promiscuous mode
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.515 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4f17807-90, col_values=(('external_ids', {'iface-id': '1d661dac-16f7-44cf-8e8a-b78e677ba4a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.516 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_controller[134375]: 2025-11-29T08:07:37Z|00389|binding|INFO|Releasing lport 1d661dac-16f7-44cf-8e8a-b78e677ba4a2 from this chassis (sb_readonly=0)
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.583 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.584 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4f17807-9d16-4b74-9bf3-d79f60746fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4f17807-9d16-4b74-9bf3-d79f60746fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.585 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de31a553-3127-4306-b761-25f2f2d76fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.586 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-e4f17807-9d16-4b74-9bf3-d79f60746fbd
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/e4f17807-9d16-4b74-9bf3-d79f60746fbd.pid.haproxy
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID e4f17807-9d16-4b74-9bf3-d79f60746fbd
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:07:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:37.586 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'env', 'PROCESS_TAG=haproxy-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4f17807-9d16-4b74-9bf3-d79f60746fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.651 232432 DEBUG nova.compute.manager [req-c3df416b-c4d9-461d-94f2-a5fc5df69d24 req-eb8c7e9a-25d3-4526-b24b-d5d1641c3167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.652 232432 DEBUG oslo_concurrency.lockutils [req-c3df416b-c4d9-461d-94f2-a5fc5df69d24 req-eb8c7e9a-25d3-4526-b24b-d5d1641c3167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.652 232432 DEBUG oslo_concurrency.lockutils [req-c3df416b-c4d9-461d-94f2-a5fc5df69d24 req-eb8c7e9a-25d3-4526-b24b-d5d1641c3167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.652 232432 DEBUG oslo_concurrency.lockutils [req-c3df416b-c4d9-461d-94f2-a5fc5df69d24 req-eb8c7e9a-25d3-4526-b24b-d5d1641c3167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.653 232432 DEBUG nova.compute.manager [req-c3df416b-c4d9-461d-94f2-a5fc5df69d24 req-eb8c7e9a-25d3-4526-b24b-d5d1641c3167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.653 232432 DEBUG nova.compute.manager [req-6287190b-bcdb-4364-8ec4-98070060e0d9 req-97a9540e-f3f2-49af-86d3-0d269c37d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.654 232432 DEBUG oslo_concurrency.lockutils [req-6287190b-bcdb-4364-8ec4-98070060e0d9 req-97a9540e-f3f2-49af-86d3-0d269c37d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.654 232432 DEBUG oslo_concurrency.lockutils [req-6287190b-bcdb-4364-8ec4-98070060e0d9 req-97a9540e-f3f2-49af-86d3-0d269c37d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.654 232432 DEBUG oslo_concurrency.lockutils [req-6287190b-bcdb-4364-8ec4-98070060e0d9 req-97a9540e-f3f2-49af-86d3-0d269c37d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.654 232432 DEBUG nova.compute.manager [req-6287190b-bcdb-4364-8ec4-98070060e0d9 req-97a9540e-f3f2-49af-86d3-0d269c37d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:37 compute-2 ceph-mon[77138]: pgmap v2003: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 26 KiB/s wr, 168 op/s
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.855 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403657.8544955, 9f70b4d6-e1a7-4709-8816-a19fb6569d7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.855 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] VM Started (Lifecycle Event)
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.887 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.892 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403657.8546484, 9f70b4d6-e1a7-4709-8816-a19fb6569d7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.892 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] VM Paused (Lifecycle Event)
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.918 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.921 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:37 compute-2 nova_compute[232428]: 2025-11-29 08:07:37.940 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:07:38 compute-2 podman[269093]: 2025-11-29 08:07:38.01142001 +0000 UTC m=+0.059035651 container create 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:07:38 compute-2 systemd[1]: Started libpod-conmon-63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a.scope.
Nov 29 08:07:38 compute-2 podman[269093]: 2025-11-29 08:07:37.976730709 +0000 UTC m=+0.024346390 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:07:38 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fc6e6623b629fb022973799fe5bde61df969782d1ea27131c13047f276f34d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:38 compute-2 podman[269093]: 2025-11-29 08:07:38.09805002 +0000 UTC m=+0.145665671 container init 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:07:38 compute-2 podman[269093]: 2025-11-29 08:07:38.103589282 +0000 UTC m=+0.151204923 container start 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:07:38 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [NOTICE]   (269113) : New worker (269115) forked
Nov 29 08:07:38 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [NOTICE]   (269113) : Loading success.
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.157 143801 INFO neutron.agent.ovn.metadata.agent [-] Port b15948c7-35a3-4201-bceb-593c2b1c8704 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.160 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.174 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cfced90f-02a8-4416-a08c-b2ba9ed08904]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.175 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74ca2dd9-b1 in ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.177 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74ca2dd9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.178 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4812e7aa-b5eb-4ebc-84de-71384b7e8859]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.178 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5212483e-d53d-488c-8c7d-78de89e004e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.192 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d91f98-067b-47ea-8c94-846707c6647d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.207 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe6d816-20a8-4fd5-8a61-6edc23b58cc9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.224 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.246 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[955fbe30-b037-41a0-b6b5-1cd21134b918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.252 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[85aa134f-2f5c-4cbc-b506-58ebf3cda189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 NetworkManager[48993]: <info>  [1764403658.2531] manager: (tap74ca2dd9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 08:07:38 compute-2 systemd-udevd[268963]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.278 232432 DEBUG nova.compute.manager [req-9d79f1fa-cf6a-4e56-bbff-aec43eb5c1da req-b39c8eaa-4be9-4820-913b-6a53084bf676 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.278 232432 DEBUG oslo_concurrency.lockutils [req-9d79f1fa-cf6a-4e56-bbff-aec43eb5c1da req-b39c8eaa-4be9-4820-913b-6a53084bf676 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.279 232432 DEBUG oslo_concurrency.lockutils [req-9d79f1fa-cf6a-4e56-bbff-aec43eb5c1da req-b39c8eaa-4be9-4820-913b-6a53084bf676 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.279 232432 DEBUG oslo_concurrency.lockutils [req-9d79f1fa-cf6a-4e56-bbff-aec43eb5c1da req-b39c8eaa-4be9-4820-913b-6a53084bf676 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.279 232432 DEBUG nova.compute.manager [req-9d79f1fa-cf6a-4e56-bbff-aec43eb5c1da req-b39c8eaa-4be9-4820-913b-6a53084bf676 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.287 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7edad4ba-21ae-4fff-8401-12f915b699ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.289 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[195a2efe-0366-4626-8eef-62b438440c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 NetworkManager[48993]: <info>  [1764403658.3160] device (tap74ca2dd9-b0): carrier: link connected
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.322 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2a95f30b-d0f5-46d9-91c4-8bd39ee13196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.339 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f24ffe2b-b167-4cf2-bebb-63e684a045c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74ca2dd9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:7e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662458, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269134, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.355 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f55d26f-1a5b-49f6-b76e-ba79dd8d83e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:7eda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662458, 'tstamp': 662458}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269135, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.374 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2734c4cc-e267-4f1d-af03-1a81a34ab65f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74ca2dd9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:7e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662458, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269136, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.407 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[70e2607b-992a-458f-b293-84c2c1533556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f079b50-5e09-495f-bef9-d5aacf52601b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.469 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74ca2dd9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.469 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.469 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74ca2dd9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:38 compute-2 NetworkManager[48993]: <info>  [1764403658.4721] manager: (tap74ca2dd9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 08:07:38 compute-2 kernel: tap74ca2dd9-b0: entered promiscuous mode
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.475 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74ca2dd9-b0, col_values=(('external_ids', {'iface-id': 'da34963c-5ff6-4f59-968f-e4a05d938684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.476 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:38 compute-2 ovn_controller[134375]: 2025-11-29T08:07:38Z|00390|binding|INFO|Releasing lport da34963c-5ff6-4f59-968f-e4a05d938684 from this chassis (sb_readonly=0)
Nov 29 08:07:38 compute-2 nova_compute[232428]: 2025-11-29 08:07:38.490 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.491 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74ca2dd9-be23-4b0d-bbc1-976490587a78.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74ca2dd9-be23-4b0d-bbc1-976490587a78.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.492 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a12c32e0-0312-4494-9bba-eaa718a2fdc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.493 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/74ca2dd9-be23-4b0d-bbc1-976490587a78.pid.haproxy
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:07:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:38.494 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'env', 'PROCESS_TAG=haproxy-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74ca2dd9-be23-4b0d-bbc1-976490587a78.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:07:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:38.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:38 compute-2 podman[269168]: 2025-11-29 08:07:38.848402538 +0000 UTC m=+0.047514952 container create 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:07:38 compute-2 systemd[1]: Started libpod-conmon-613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2.scope.
Nov 29 08:07:38 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4cdc3afba38f552c060190d7c9e6dd32dcb5df0aa9fc14eed95235ae2f6a21a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:38.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:38 compute-2 podman[269168]: 2025-11-29 08:07:38.826759913 +0000 UTC m=+0.025872347 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:07:38 compute-2 podman[269168]: 2025-11-29 08:07:38.942884813 +0000 UTC m=+0.141997247 container init 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:07:38 compute-2 podman[269168]: 2025-11-29 08:07:38.949173569 +0000 UTC m=+0.148285973 container start 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:07:38 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [NOTICE]   (269196) : New worker (269206) forked
Nov 29 08:07:38 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [NOTICE]   (269196) : Loading success.
Nov 29 08:07:38 compute-2 podman[269183]: 2025-11-29 08:07:38.990243879 +0000 UTC m=+0.098304535 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.014 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d0984314-851d-451b-9277-1f0fc38d3c41 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.016 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.035 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d4749627-ed67-4b95-a9b6-14c6e00d292b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.065 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5d31c03f-b02f-47a5-9931-496d5269c132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.069 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2a245a84-f491-4660-80e5-b9e990c4307f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.099 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[83ba004f-a4ff-4a19-91b4-329c38cdecc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.120 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43ec551b-615e-4e64-a314-7cc72723e893]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74ca2dd9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:7e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662458, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269223, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.136 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a9494bb9-30a3-4048-ade5-00d9743d30f1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662469, 'tstamp': 662469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269224, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662472, 'tstamp': 662472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269224, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.138 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74ca2dd9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.141 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.141 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74ca2dd9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.141 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74ca2dd9-b0, col_values=(('external_ids', {'iface-id': 'da34963c-5ff6-4f59-968f-e4a05d938684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.143 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9bb68ec9-77ee-431c-b89c-3384da9fa365 in datapath 375dc49d-ec99-4657-9ba6-74087890a298 unbound from our chassis
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.145 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 375dc49d-ec99-4657-9ba6-74087890a298
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.157 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[04040cb5-95e1-4384-9ac7-9351c0e00dd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.159 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap375dc49d-e1 in ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.160 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap375dc49d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.160 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb96d2b-911b-4aaa-87c9-1f196b9110a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.161 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[63a95f68-9c61-47eb-a5e8-6a4c11788966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.174 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a203c746-32e4-49ac-9d3a-0127bfaeb07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.188 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12248aee-f52f-4842-be80-97bc7f82e125]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.217 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[67c64c8c-fd70-4928-9ce9-cf6337c2126c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 NetworkManager[48993]: <info>  [1764403659.2243] manager: (tap375dc49d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.224 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7620db-25c9-4dc9-b889-33d46c7304db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.261 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b93257ee-f15e-4ff7-b0f3-d9c88311b6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.264 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[872f26d5-7218-4017-8021-ec9cef05b20c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 NetworkManager[48993]: <info>  [1764403659.2908] device (tap375dc49d-e0): carrier: link connected
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.297 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9d386c74-8b60-4a0f-b0ce-baf3fb008302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.320 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[20eaf3e1-802f-48a9-a0bd-50c2baf1174b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap375dc49d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:63:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662555, 'reachable_time': 42700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269235, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.336 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfc51e7-30b7-4fee-b8b0-d9938ab916a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:6374'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662555, 'tstamp': 662555}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269236, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.356 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7569e2c4-5c15-4066-b83c-5c176eaae195]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap375dc49d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:63:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662555, 'reachable_time': 42700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269238, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.393 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a6233286-9b99-4192-afd0-260484cc4cb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.468 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[99e7b30d-aebb-4497-9f13-0d5fde58a4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.470 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap375dc49d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.470 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap375dc49d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.473 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 kernel: tap375dc49d-e0: entered promiscuous mode
Nov 29 08:07:39 compute-2 NetworkManager[48993]: <info>  [1764403659.4739] manager: (tap375dc49d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.476 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.477 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap375dc49d-e0, col_values=(('external_ids', {'iface-id': 'ad607a37-3d31-4486-8419-ee1e1c296435'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 ovn_controller[134375]: 2025-11-29T08:07:39Z|00391|binding|INFO|Releasing lport ad607a37-3d31-4486-8419-ee1e1c296435 from this chassis (sb_readonly=0)
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.508 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/375dc49d-ec99-4657-9ba6-74087890a298.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/375dc49d-ec99-4657-9ba6-74087890a298.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.510 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d66a75-09fb-44f2-bbdb-d754580751d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.511 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-375dc49d-ec99-4657-9ba6-74087890a298
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/375dc49d-ec99-4657-9ba6-74087890a298.pid.haproxy
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 375dc49d-ec99-4657-9ba6-74087890a298
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:07:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:39.512 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'env', 'PROCESS_TAG=haproxy-375dc49d-ec99-4657-9ba6-74087890a298', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/375dc49d-ec99-4657-9ba6-74087890a298.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.824 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 ceph-mon[77138]: pgmap v2004: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 164 op/s
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.825 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.826 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.826 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.827 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 in dict_keys([('network-vif-plugged', 'b15948c7-35a3-4201-bceb-593c2b1c8704'), ('network-vif-plugged', '0ff1cfac-1292-47db-befc-e4a968bd8d13'), ('network-vif-plugged', 'd82f4054-c2c2-4966-9ef6-c7ac320cd065'), ('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.827 232432 WARNING nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 for instance with vm_state building and task_state spawning.
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.828 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.828 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.829 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.829 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.832 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.832 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.834 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.834 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.836 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.836 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 in dict_keys([('network-vif-plugged', 'b15948c7-35a3-4201-bceb-593c2b1c8704'), ('network-vif-plugged', '0ff1cfac-1292-47db-befc-e4a968bd8d13'), ('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.838 232432 WARNING nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 for instance with vm_state building and task_state spawning.
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.839 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.840 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.841 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.842 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.842 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.842 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.842 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.843 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.843 232432 DEBUG oslo_concurrency.lockutils [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.843 232432 DEBUG nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 in dict_keys([('network-vif-plugged', '0ff1cfac-1292-47db-befc-e4a968bd8d13'), ('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.844 232432 WARNING nova.compute.manager [req-f7fbc078-46af-4520-8595-c84adb18585e req-78f64bb3-0eb6-47c1-b377-cd8a54c5237c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 for instance with vm_state building and task_state spawning.
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.848 232432 DEBUG nova.compute.manager [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.848 232432 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.848 232432 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.849 232432 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.849 232432 DEBUG nova.compute.manager [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 in dict_keys([('network-vif-plugged', '0ff1cfac-1292-47db-befc-e4a968bd8d13'), ('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:39 compute-2 nova_compute[232428]: 2025-11-29 08:07:39.849 232432 WARNING nova.compute.manager [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 for instance with vm_state building and task_state spawning.
Nov 29 08:07:39 compute-2 podman[269271]: 2025-11-29 08:07:39.848548461 +0000 UTC m=+0.032758642 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:07:39 compute-2 podman[269271]: 2025-11-29 08:07:39.957307871 +0000 UTC m=+0.141518012 container create 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:07:39 compute-2 systemd[1]: Started libpod-conmon-0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b.scope.
Nov 29 08:07:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:07:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a8c9db6d1c4a81ee7bf3f8ae7119ef75a19cda28d84ad38bcda58fde953aee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.175 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 podman[269271]: 2025-11-29 08:07:40.188193498 +0000 UTC m=+0.372403699 container init 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:07:40 compute-2 podman[269271]: 2025-11-29 08:07:40.202566736 +0000 UTC m=+0.386776857 container start 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:07:40 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [NOTICE]   (269290) : New worker (269292) forked
Nov 29 08:07:40 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [NOTICE]   (269290) : Loading success.
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.322 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0ff1cfac-1292-47db-befc-e4a968bd8d13 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.325 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.352 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4c5ffa-9e1b-4848-96c1-a5fc42e762be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.392 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a565c6d3-de06-4a90-aa0c-739ab3979866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.396 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dc29602b-26b1-4c94-ba2d-e6102815d2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.429 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.429 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.430 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.430 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.431 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 in dict_keys([('network-vif-plugged', '0ff1cfac-1292-47db-befc-e4a968bd8d13'), ('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.431 232432 WARNING nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 for instance with vm_state building and task_state spawning.
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.432 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.432 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.433 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.432 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7fc3cf-8563-467c-b799-b5341a2a4ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.433 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.433 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.434 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.434 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.435 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.435 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.435 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No event matching network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 in dict_keys([('network-vif-plugged', 'ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.436 232432 WARNING nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 for instance with vm_state building and task_state spawning.
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.436 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.436 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.436 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.437 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.437 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Processing event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.437 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.438 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.438 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.438 232432 DEBUG oslo_concurrency.lockutils [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.439 232432 DEBUG nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.439 232432 WARNING nova.compute.manager [req-6883f946-0515-45b0-b621-9bc4a93845e5 req-a6097b1d-08f2-4c8c-a9a1-bac0876a7d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb for instance with vm_state building and task_state spawning.
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.440 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.445 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403660.4453716, 9f70b4d6-e1a7-4709-8816-a19fb6569d7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.446 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] VM Resumed (Lifecycle Event)
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.447 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.451 232432 INFO nova.virt.libvirt.driver [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance spawned successfully.
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.452 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.454 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b7259240-c5f4-4df5-9244-341af494b7a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74ca2dd9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:7e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662458, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269306, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.479 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.482 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ab563068-7429-42d9-b981-0bb0163ed9c5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662469, 'tstamp': 662469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269307, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662472, 'tstamp': 662472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269307, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.484 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74ca2dd9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.484 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.485 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.485 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.486 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.486 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.487 232432 DEBUG nova.virt.libvirt.driver [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.487 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74ca2dd9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.487 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.488 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74ca2dd9-b0, col_values=(('external_ids', {'iface-id': 'da34963c-5ff6-4f59-968f-e4a05d938684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.488 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.489 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d82f4054-c2c2-4966-9ef6-c7ac320cd065 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.491 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74ca2dd9-be23-4b0d-bbc1-976490587a78
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.493 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.512 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d670b333-d220-47a1-8265-7bcfc669ff53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.532 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.558 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[efceb95a-ef36-4fb8-bf97-dce6a5719164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.560 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b073d758-024c-4441-867a-dffcdf3ddaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.570 232432 INFO nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Took 35.86 seconds to spawn the instance on the hypervisor.
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.570 232432 DEBUG nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.594 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1676f8ea-f8b2-4bfd-bd72-40a0a0efde83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.627 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c068a135-39e9-4b82-92c1-0e13d6b3df4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74ca2dd9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:7e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662458, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269313, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.654 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4ddbfb01-2b30-4e20-a743-4cd36becbd96]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662469, 'tstamp': 662469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269314, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap74ca2dd9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662472, 'tstamp': 662472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269314, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.655 232432 INFO nova.compute.manager [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Took 42.02 seconds to build instance.
Nov 29 08:07:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:40.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.656 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74ca2dd9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.659 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.659 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74ca2dd9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.659 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.660 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74ca2dd9-b0, col_values=(('external_ids', {'iface-id': 'da34963c-5ff6-4f59-968f-e4a05d938684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.660 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.661 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb in datapath 375dc49d-ec99-4657-9ba6-74087890a298 unbound from our chassis
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.663 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 375dc49d-ec99-4657-9ba6-74087890a298
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.674 232432 DEBUG oslo_concurrency.lockutils [None req-9b19209e-6c2b-40de-a623-c33aa4c5c969 f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 42.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.689 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf82cca-9b2c-448d-9949-6385fd3702eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.740 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef4f8c1-6042-42f7-94fa-3e24c0dbf803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.745 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[91608be6-ff71-461e-af66-08ab29a38ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.785 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad27a45-521c-485c-bdc2-d2701ab656bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.805 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ae718f-010b-4782-a782-b4e16ba87211]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap375dc49d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:63:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662555, 'reachable_time': 42700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269320, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.832 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96f148e4-0c55-401e-84b9-9d1ac485b117]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap375dc49d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662569, 'tstamp': 662569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269321, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap375dc49d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662572, 'tstamp': 662572}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269321, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.835 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap375dc49d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.837 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 nova_compute[232428]: 2025-11-29 08:07:40.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.840 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap375dc49d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.841 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.841 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap375dc49d-e0, col_values=(('external_ids', {'iface-id': 'ad607a37-3d31-4486-8419-ee1e1c296435'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:07:40.842 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/108798779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:40.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:41 compute-2 nova_compute[232428]: 2025-11-29 08:07:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:41 compute-2 nova_compute[232428]: 2025-11-29 08:07:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:42 compute-2 ceph-mon[77138]: pgmap v2005: 305 pgs: 305 active+clean; 145 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 351 KiB/s wr, 181 op/s
Nov 29 08:07:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2229134424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:42.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:07:43 compute-2 ceph-mon[77138]: pgmap v2006: 305 pgs: 305 active+clean; 190 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 211 op/s
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.526 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.527 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.527 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.528 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9f70b4d6-e1a7-4709-8816-a19fb6569d7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:43 compute-2 nova_compute[232428]: 2025-11-29 08:07:43.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:43 compute-2 NetworkManager[48993]: <info>  [1764403663.8888] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 08:07:43 compute-2 NetworkManager[48993]: <info>  [1764403663.8901] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 08:07:44 compute-2 nova_compute[232428]: 2025-11-29 08:07:44.130 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:44 compute-2 ovn_controller[134375]: 2025-11-29T08:07:44Z|00392|binding|INFO|Releasing lport ad607a37-3d31-4486-8419-ee1e1c296435 from this chassis (sb_readonly=0)
Nov 29 08:07:44 compute-2 ovn_controller[134375]: 2025-11-29T08:07:44Z|00393|binding|INFO|Releasing lport 1d661dac-16f7-44cf-8e8a-b78e677ba4a2 from this chassis (sb_readonly=0)
Nov 29 08:07:44 compute-2 ovn_controller[134375]: 2025-11-29T08:07:44Z|00394|binding|INFO|Releasing lport da34963c-5ff6-4f59-968f-e4a05d938684 from this chassis (sb_readonly=0)
Nov 29 08:07:44 compute-2 nova_compute[232428]: 2025-11-29 08:07:44.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:44 compute-2 nova_compute[232428]: 2025-11-29 08:07:44.527 232432 DEBUG nova.compute.manager [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-changed-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:44 compute-2 nova_compute[232428]: 2025-11-29 08:07:44.528 232432 DEBUG nova.compute.manager [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing instance network info cache due to event network-changed-7a9fe153-f72b-4621-aee3-66b486bacae5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:44 compute-2 nova_compute[232428]: 2025-11-29 08:07:44.528 232432 DEBUG oslo_concurrency.lockutils [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:44.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:44 compute-2 podman[269325]: 2025-11-29 08:07:44.691419428 +0000 UTC m=+0.091422000 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:07:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:44.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:45 compute-2 nova_compute[232428]: 2025-11-29 08:07:45.180 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:45 compute-2 ceph-mon[77138]: pgmap v2007: 305 pgs: 305 active+clean; 206 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 125 op/s
Nov 29 08:07:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1395507505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:45 compute-2 nova_compute[232428]: 2025-11-29 08:07:45.607 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/123183881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:46.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:47 compute-2 sudo[269347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:47 compute-2 sudo[269347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:47 compute-2 sudo[269347]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:47 compute-2 sudo[269372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:47 compute-2 sudo[269372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:47 compute-2 sudo[269372]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:47 compute-2 ceph-mon[77138]: pgmap v2008: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 08:07:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:07:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:07:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:48.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:49 compute-2 ceph-mon[77138]: pgmap v2009: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Nov 29 08:07:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2090791071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:50 compute-2 nova_compute[232428]: 2025-11-29 08:07:50.184 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:50 compute-2 nova_compute[232428]: 2025-11-29 08:07:50.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2442679403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:50.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:51 compute-2 sudo[269399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:51 compute-2 sudo[269399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-2 sudo[269399]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:51 compute-2 sudo[269425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:07:51 compute-2 sudo[269425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-2 sudo[269425]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:51 compute-2 sudo[269450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:51 compute-2 sudo[269450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-2 sudo[269450]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:51 compute-2 sudo[269475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:07:51 compute-2 sudo[269475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-2 ceph-mon[77138]: pgmap v2010: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 29 08:07:52 compute-2 sudo[269475]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:52 compute-2 nova_compute[232428]: 2025-11-29 08:07:52.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:07:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:07:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:52.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 08:07:53 compute-2 ceph-mon[77138]: pgmap v2011: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 231 op/s
Nov 29 08:07:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:54.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:54 compute-2 podman[269534]: 2025-11-29 08:07:54.715428227 +0000 UTC m=+0.109130563 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 08:07:54 compute-2 ceph-mon[77138]: osdmap e260: 3 total, 3 up, 3 in
Nov 29 08:07:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3084548911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:55 compute-2 nova_compute[232428]: 2025-11-29 08:07:55.187 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:55 compute-2 nova_compute[232428]: 2025-11-29 08:07:55.613 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:55 compute-2 ceph-mon[77138]: pgmap v2013: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 152 KiB/s wr, 177 op/s
Nov 29 08:07:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/243775416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:56 compute-2 ovn_controller[134375]: 2025-11-29T08:07:56Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:9b:c3 10.2.2.200
Nov 29 08:07:56 compute-2 ovn_controller[134375]: 2025-11-29T08:07:56Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:9b:c3 10.2.2.200
Nov 29 08:07:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:07:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:56.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:07:56 compute-2 ovn_controller[134375]: 2025-11-29T08:07:56Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:13:82 10.100.0.5
Nov 29 08:07:56 compute-2 ovn_controller[134375]: 2025-11-29T08:07:56Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:13:82 10.100.0.5
Nov 29 08:07:56 compute-2 ceph-mon[77138]: pgmap v2014: 305 pgs: 305 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 118 op/s
Nov 29 08:07:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:62:71 10.1.1.68
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:62:71 10.1.1.68
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:6a:16 10.1.1.85
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:6a:16 10.1.1.85
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:58:a8 10.2.2.100
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:58:a8 10.2.2.100
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:95:2d 10.1.1.112
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:95:2d 10.1.1.112
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:e8:12 10.1.1.54
Nov 29 08:07:57 compute-2 ovn_controller[134375]: 2025-11-29T08:07:57Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:e8:12 10.1.1.54
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.866 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.945 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.945 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.946 232432 DEBUG oslo_concurrency.lockutils [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.946 232432 DEBUG nova.network.neutron [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Refreshing network info cache for port 7a9fe153-f72b-4621-aee3-66b486bacae5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.947 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.948 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.948 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.948 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.949 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.998 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.999 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.999 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:57 compute-2 nova_compute[232428]: 2025-11-29 08:07:57.999 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.000 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1524966499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.438 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.504 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.505 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.505 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.506 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:58.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.755 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.757 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4262MB free_disk=20.92166519165039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.757 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.758 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.849 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9f70b4d6-e1a7-4709-8816-a19fb6569d7c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.850 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.850 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.920 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:07:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:07:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:58.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:07:58 compute-2 nova_compute[232428]: 2025-11-29 08:07:58.954 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:07:58 compute-2 sudo[269585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:58 compute-2 sudo[269585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:59 compute-2 sudo[269585]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.019 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.021 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.047 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:07:59 compute-2 sudo[269610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:07:59 compute-2 sudo[269610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:59 compute-2 sudo[269610]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:59 compute-2 ceph-mon[77138]: pgmap v2015: 305 pgs: 305 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 118 op/s
Nov 29 08:07:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1524966499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.102 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.170 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/821591847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.645 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.651 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.674 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.697 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:07:59 compute-2 nova_compute[232428]: 2025-11-29 08:07:59.697 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/821591847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:00 compute-2 nova_compute[232428]: 2025-11-29 08:08:00.155 232432 DEBUG nova.network.neutron [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updated VIF entry in instance network info cache for port 7a9fe153-f72b-4621-aee3-66b486bacae5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:00 compute-2 nova_compute[232428]: 2025-11-29 08:08:00.156 232432 DEBUG nova.network.neutron [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:00 compute-2 nova_compute[232428]: 2025-11-29 08:08:00.171 232432 DEBUG oslo_concurrency.lockutils [req-76f9447a-137c-468d-bc8b-d47906b5f83c req-a16fe129-7724-4c42-8f9c-e5e2467fe67d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9f70b4d6-e1a7-4709-8816-a19fb6569d7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:00 compute-2 nova_compute[232428]: 2025-11-29 08:08:00.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:00 compute-2 nova_compute[232428]: 2025-11-29 08:08:00.617 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 08:08:01 compute-2 ceph-mon[77138]: pgmap v2016: 305 pgs: 305 active+clean; 247 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 147 op/s
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.126 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.126 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.141 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:08:02 compute-2 ceph-mon[77138]: osdmap e261: 3 total, 3 up, 3 in
Nov 29 08:08:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3964345231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.225 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.225 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.230 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.231 232432 INFO nova.compute.claims [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.395 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3250502199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.885 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.894 232432 DEBUG nova.compute.provider_tree [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.919 232432 DEBUG nova.scheduler.client.report [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.941 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:02 compute-2 nova_compute[232428]: 2025-11-29 08:08:02.942 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:08:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.015 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.015 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.040 232432 INFO nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.068 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:08:03 compute-2 ceph-mon[77138]: pgmap v2018: 305 pgs: 305 active+clean; 279 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Nov 29 08:08:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3250502199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.196 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.197 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.198 232432 INFO nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Creating image(s)
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.223 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.249 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.278 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.282 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:03.313 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:03.315 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.315 232432 DEBUG nova.policy [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '29543b27de044f598a4f01771690222b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bd1d90fea8c547aea033c26b5dd1ccc2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:03.316 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.351 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.352 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.352 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.352 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.382 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.387 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.753 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:03 compute-2 nova_compute[232428]: 2025-11-29 08:08:03.865 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] resizing rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.024 232432 DEBUG nova.objects.instance [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.059 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.060 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Ensure instance console log exists: /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.061 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.062 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.062 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:04 compute-2 nova_compute[232428]: 2025-11-29 08:08:04.218 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Successfully created port: 97402309-c430-4daf-89ba-2236d0f5e144 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:08:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:04.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.191 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:05 compute-2 ceph-mon[77138]: pgmap v2019: 305 pgs: 305 active+clean; 280 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 269 op/s
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.621 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.710 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Successfully updated port: 97402309-c430-4daf-89ba-2236d0f5e144 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.764 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.764 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquired lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.765 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.799 232432 DEBUG nova.compute.manager [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-changed-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.800 232432 DEBUG nova.compute.manager [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Refreshing instance network info cache due to event network-changed-97402309-c430-4daf-89ba-2236d0f5e144. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:05 compute-2 nova_compute[232428]: 2025-11-29 08:08:05.800 232432 DEBUG oslo_concurrency.lockutils [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 08:08:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:06 compute-2 nova_compute[232428]: 2025-11-29 08:08:06.777 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:08:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:06.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:07 compute-2 sudo[269850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:07 compute-2 sudo[269850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:07 compute-2 sudo[269850]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:07 compute-2 sudo[269875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:07 compute-2 sudo[269875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:07 compute-2 sudo[269875]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:07 compute-2 ceph-mon[77138]: pgmap v2020: 305 pgs: 305 active+clean; 315 MiB data, 870 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 254 op/s
Nov 29 08:08:07 compute-2 ceph-mon[77138]: osdmap e262: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.238 232432 DEBUG nova.network.neutron [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Updating instance_info_cache with network_info: [{"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.365 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Releasing lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.366 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Instance network_info: |[{"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.368 232432 DEBUG oslo_concurrency.lockutils [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.369 232432 DEBUG nova.network.neutron [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Refreshing network info cache for port 97402309-c430-4daf-89ba-2236d0f5e144 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.372 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Start _get_guest_xml network_info=[{"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.376 232432 WARNING nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.380 232432 DEBUG nova.virt.libvirt.host [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.381 232432 DEBUG nova.virt.libvirt.host [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.384 232432 DEBUG nova.virt.libvirt.host [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.385 232432 DEBUG nova.virt.libvirt.host [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.386 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.386 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.387 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.387 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.387 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.388 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.388 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.388 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.388 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.388 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.389 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.389 232432 DEBUG nova.virt.hardware [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.393 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1217670934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.859 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.891 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:08 compute-2 nova_compute[232428]: 2025-11-29 08:08:08.895 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:08:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:08.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:08:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1530692804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.358 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.361 232432 DEBUG nova.virt.libvirt.vif [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-532174622',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-532174622',id=87,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd1d90fea8c547aea033c26b5dd1ccc2',ramdisk_id='',reservation_id='r-n79z4ec8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-806176304',owner_user_name='tempest-InstanceActionsV221TestJSON-806176304-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:03Z,user_data=None,user_id='29543b27de044f598a4f01771690222b',uuid=5d5aff6c-84c4-4140-9d05-42bfc4dfab9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.362 232432 DEBUG nova.network.os_vif_util [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converting VIF {"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.363 232432 DEBUG nova.network.os_vif_util [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.365 232432 DEBUG nova.objects.instance [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.387 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <uuid>5d5aff6c-84c4-4140-9d05-42bfc4dfab9b</uuid>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <name>instance-00000057</name>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-532174622</nova:name>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:08:08</nova:creationTime>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:user uuid="29543b27de044f598a4f01771690222b">tempest-InstanceActionsV221TestJSON-806176304-project-member</nova:user>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:project uuid="bd1d90fea8c547aea033c26b5dd1ccc2">tempest-InstanceActionsV221TestJSON-806176304</nova:project>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <nova:port uuid="97402309-c430-4daf-89ba-2236d0f5e144">
Nov 29 08:08:09 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <system>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="serial">5d5aff6c-84c4-4140-9d05-42bfc4dfab9b</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="uuid">5d5aff6c-84c4-4140-9d05-42bfc4dfab9b</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </system>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <os>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </os>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <features>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </features>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk">
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config">
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:25:39:26"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <target dev="tap97402309-c4"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/console.log" append="off"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <video>
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </video>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:08:09 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:08:09 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:08:09 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:08:09 compute-2 nova_compute[232428]: </domain>
Nov 29 08:08:09 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.389 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Preparing to wait for external event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.389 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.390 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.390 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.391 232432 DEBUG nova.virt.libvirt.vif [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-532174622',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-532174622',id=87,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd1d90fea8c547aea033c26b5dd1ccc2',ramdisk_id='',reservation_id='r-n79z4ec8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-806176304',owner_user_name='tempest-InstanceActionsV221TestJSON-806176304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:03Z,user_data=None,user_id='29543b27de044f598a4f01771690222b',uuid=5d5aff6c-84c4-4140-9d05-42bfc4dfab9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.391 232432 DEBUG nova.network.os_vif_util [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converting VIF {"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.392 232432 DEBUG nova.network.os_vif_util [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.392 232432 DEBUG os_vif [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.393 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.393 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.394 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.398 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.398 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97402309-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.398 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97402309-c4, col_values=(('external_ids', {'iface-id': '97402309-c430-4daf-89ba-2236d0f5e144', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:39:26', 'vm-uuid': '5d5aff6c-84c4-4140-9d05-42bfc4dfab9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.400 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-2 NetworkManager[48993]: <info>  [1764403689.4020] manager: (tap97402309-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.402 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.409 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.410 232432 INFO os_vif [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4')
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.547 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.548 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.548 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] No VIF found with MAC fa:16:3e:25:39:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.549 232432 INFO nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Using config drive
Nov 29 08:08:09 compute-2 nova_compute[232428]: 2025-11-29 08:08:09.573 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:09 compute-2 ceph-mon[77138]: pgmap v2022: 305 pgs: 305 active+clean; 315 MiB data, 870 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 253 op/s
Nov 29 08:08:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1217670934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1530692804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:09 compute-2 podman[269984]: 2025-11-29 08:08:09.725807574 +0000 UTC m=+0.116535263 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 08:08:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.511 232432 INFO nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Creating config drive at /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.523 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1t53m32 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.674 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1t53m32" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:10.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.738 232432 DEBUG nova.storage.rbd_utils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] rbd image 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:10 compute-2 nova_compute[232428]: 2025-11-29 08:08:10.744 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.058 232432 DEBUG nova.network.neutron [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Updated VIF entry in instance network info cache for port 97402309-c430-4daf-89ba-2236d0f5e144. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.060 232432 DEBUG nova.network.neutron [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Updating instance_info_cache with network_info: [{"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.106 232432 DEBUG oslo_concurrency.processutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.106 232432 INFO nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Deleting local config drive /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b/disk.config because it was imported into RBD.
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.108 232432 DEBUG oslo_concurrency.lockutils [req-b3ddb71c-d816-4e6e-ad76-e80013974218 req-8ddfa82c-dc27-4a17-9a13-2f90b22d5c8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:11 compute-2 kernel: tap97402309-c4: entered promiscuous mode
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.1923] manager: (tap97402309-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 08:08:11 compute-2 ovn_controller[134375]: 2025-11-29T08:08:11Z|00395|binding|INFO|Claiming lport 97402309-c430-4daf-89ba-2236d0f5e144 for this chassis.
Nov 29 08:08:11 compute-2 ovn_controller[134375]: 2025-11-29T08:08:11Z|00396|binding|INFO|97402309-c430-4daf-89ba-2236d0f5e144: Claiming fa:16:3e:25:39:26 10.100.0.10
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.193 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.204 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:39:26 10.100.0.10'], port_security=['fa:16:3e:25:39:26 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5d5aff6c-84c4-4140-9d05-42bfc4dfab9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd1d90fea8c547aea033c26b5dd1ccc2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8aa1d51c-f34f-4e6a-beb9-efc146bbe5e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fde98610-53b6-4d73-aa65-51956d2be981, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=97402309-c430-4daf-89ba-2236d0f5e144) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.207 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 97402309-c430-4daf-89ba-2236d0f5e144 in datapath a76e0d7d-b563-4701-a0eb-f1be04e4d936 bound to our chassis
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.212 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a76e0d7d-b563-4701-a0eb-f1be04e4d936
Nov 29 08:08:11 compute-2 ovn_controller[134375]: 2025-11-29T08:08:11Z|00397|binding|INFO|Setting lport 97402309-c430-4daf-89ba-2236d0f5e144 ovn-installed in OVS
Nov 29 08:08:11 compute-2 ovn_controller[134375]: 2025-11-29T08:08:11Z|00398|binding|INFO|Setting lport 97402309-c430-4daf-89ba-2236d0f5e144 up in Southbound
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.223 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.226 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.232 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2e779306-1b00-49b3-807c-f1669bc83404]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.234 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa76e0d7d-b1 in ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.238 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa76e0d7d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.238 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[daf1b59c-b3c1-4c59-bd74-d839c29d73fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 systemd-udevd[270056]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.240 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[050bd24d-d55d-4f42-92d3-3091cee9d4a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.2604] device (tap97402309-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.2634] device (tap97402309-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:08:11 compute-2 systemd-machined[194747]: New machine qemu-37-instance-00000057.
Nov 29 08:08:11 compute-2 systemd[1]: Started Virtual Machine qemu-37-instance-00000057.
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.268 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[736b251f-9902-4376-ae3e-e22d68688dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.297 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96952529-e1c6-46e0-ab99-61a1f9914ab3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.343 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b38dd2c6-f90f-45f9-a443-c581357deaab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.3523] manager: (tapa76e0d7d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.352 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[caa26c66-33b4-4fa7-95da-65401e623ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.402 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[76ba5b45-073b-4214-aeb0-3d1282dfb99d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.407 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9f605a26-67fb-4038-846c-ef2b7598bfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.4340] device (tapa76e0d7d-b0): carrier: link connected
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.440 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[19ae670c-4f50-4e5e-9c32-46a779cad65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.457 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[47e60b60-0713-466a-a650-dcd5a3c53e86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa76e0d7d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665770, 'reachable_time': 30497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270090, 'error': None, 'target': 'ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.473 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[adc904cb-ed9d-436d-96d9-5d8744f1092e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:b108'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665770, 'tstamp': 665770}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270091, 'error': None, 'target': 'ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4a728b71-79e8-4344-9209-1fc70f6a5cad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa76e0d7d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665770, 'reachable_time': 30497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270092, 'error': None, 'target': 'ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.520 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c476e74-a6db-43a1-ab72-4022e0333821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.579 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[56cd05cc-587c-4c3e-8ae1-649c4171d7da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.581 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa76e0d7d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.581 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.582 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa76e0d7d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 kernel: tapa76e0d7d-b0: entered promiscuous mode
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 NetworkManager[48993]: <info>  [1764403691.6310] manager: (tapa76e0d7d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.631 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa76e0d7d-b0, col_values=(('external_ids', {'iface-id': '16e7deea-684c-496e-a30e-f806dea5fa9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 ovn_controller[134375]: 2025-11-29T08:08:11Z|00399|binding|INFO|Releasing lport 16e7deea-684c-496e-a30e-f806dea5fa9d from this chassis (sb_readonly=0)
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.635 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a76e0d7d-b563-4701-a0eb-f1be04e4d936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a76e0d7d-b563-4701-a0eb-f1be04e4d936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.636 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50b30397-fe8d-4a28-a989-63116b7b67cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.637 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-a76e0d7d-b563-4701-a0eb-f1be04e4d936
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/a76e0d7d-b563-4701-a0eb-f1be04e4d936.pid.haproxy
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID a76e0d7d-b563-4701-a0eb-f1be04e4d936
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:08:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:11.637 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'env', 'PROCESS_TAG=haproxy-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a76e0d7d-b563-4701-a0eb-f1be04e4d936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:08:11 compute-2 nova_compute[232428]: 2025-11-29 08:08:11.649 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:11 compute-2 ceph-mon[77138]: pgmap v2023: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 53 op/s
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.062 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403692.0621922, 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.064 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] VM Started (Lifecycle Event)
Nov 29 08:08:12 compute-2 podman[270166]: 2025-11-29 08:08:12.054803357 +0000 UTC m=+0.020927043 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.169 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.176 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403692.0624182, 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.177 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] VM Paused (Lifecycle Event)
Nov 29 08:08:12 compute-2 podman[270166]: 2025-11-29 08:08:12.197724711 +0000 UTC m=+0.163848417 container create 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.209 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.213 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:12 compute-2 nova_compute[232428]: 2025-11-29 08:08:12.229 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:08:12 compute-2 systemd[1]: Started libpod-conmon-370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4.scope.
Nov 29 08:08:12 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:08:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128003bbfc3caea0d31013d8b3fe12d26060aef0e309a334b04b19b12b70c777/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:12 compute-2 podman[270166]: 2025-11-29 08:08:12.361194517 +0000 UTC m=+0.327318263 container init 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:12 compute-2 podman[270166]: 2025-11-29 08:08:12.370276369 +0000 UTC m=+0.336400065 container start 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:08:12 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [NOTICE]   (270186) : New worker (270188) forked
Nov 29 08:08:12 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [NOTICE]   (270186) : Loading success.
Nov 29 08:08:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.233 232432 DEBUG nova.compute.manager [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.233 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.234 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.234 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.235 232432 DEBUG nova.compute.manager [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Processing event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.235 232432 DEBUG nova.compute.manager [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.235 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.236 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.236 232432 DEBUG oslo_concurrency.lockutils [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.236 232432 DEBUG nova.compute.manager [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] No waiting events found dispatching network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.237 232432 WARNING nova.compute.manager [req-0164fde8-fd4b-4871-9ab5-855a64a321c4 req-09fc0dd6-3015-44d4-9b57-a97260a2e5f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received unexpected event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 for instance with vm_state building and task_state spawning.
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.237 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.242 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403693.2421288, 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.242 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] VM Resumed (Lifecycle Event)
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.245 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.248 232432 INFO nova.virt.libvirt.driver [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Instance spawned successfully.
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.249 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.272 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.276 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.285 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.285 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.285 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.286 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.286 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.287 232432 DEBUG nova.virt.libvirt.driver [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.348 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.493 232432 INFO nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Took 10.30 seconds to spawn the instance on the hypervisor.
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.494 232432 DEBUG nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.590 232432 INFO nova.compute.manager [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Took 11.39 seconds to build instance.
Nov 29 08:08:13 compute-2 nova_compute[232428]: 2025-11-29 08:08:13.804 232432 DEBUG oslo_concurrency.lockutils [None req-96796976-ca5d-4e9e-a549-d42ae24cfb14 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:13 compute-2 ceph-mon[77138]: pgmap v2024: 305 pgs: 305 active+clean; 278 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 08:08:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4130764057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:14 compute-2 nova_compute[232428]: 2025-11-29 08:08:14.402 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:14 compute-2 ceph-mon[77138]: pgmap v2025: 305 pgs: 305 active+clean; 247 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 29 08:08:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.250 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.251 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.252 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.253 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.254 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.257 232432 INFO nova.compute.manager [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Terminating instance
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.259 232432 DEBUG nova.compute.manager [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:15 compute-2 kernel: tap97402309-c4 (unregistering): left promiscuous mode
Nov 29 08:08:15 compute-2 NetworkManager[48993]: <info>  [1764403695.3185] device (tap97402309-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.385 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 ovn_controller[134375]: 2025-11-29T08:08:15Z|00400|binding|INFO|Releasing lport 97402309-c430-4daf-89ba-2236d0f5e144 from this chassis (sb_readonly=0)
Nov 29 08:08:15 compute-2 ovn_controller[134375]: 2025-11-29T08:08:15Z|00401|binding|INFO|Setting lport 97402309-c430-4daf-89ba-2236d0f5e144 down in Southbound
Nov 29 08:08:15 compute-2 ovn_controller[134375]: 2025-11-29T08:08:15Z|00402|binding|INFO|Removing iface tap97402309-c4 ovn-installed in OVS
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.389 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.401 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 29 08:08:15 compute-2 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000057.scope: Consumed 2.902s CPU time.
Nov 29 08:08:15 compute-2 systemd-machined[194747]: Machine qemu-37-instance-00000057 terminated.
Nov 29 08:08:15 compute-2 podman[270198]: 2025-11-29 08:08:15.46859113 +0000 UTC m=+0.066235524 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.499 232432 INFO nova.virt.libvirt.driver [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Instance destroyed successfully.
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.499 232432 DEBUG nova.objects.instance [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lazy-loading 'resources' on Instance uuid 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.524 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:39:26 10.100.0.10'], port_security=['fa:16:3e:25:39:26 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5d5aff6c-84c4-4140-9d05-42bfc4dfab9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd1d90fea8c547aea033c26b5dd1ccc2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8aa1d51c-f34f-4e6a-beb9-efc146bbe5e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fde98610-53b6-4d73-aa65-51956d2be981, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=97402309-c430-4daf-89ba-2236d0f5e144) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.526 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 97402309-c430-4daf-89ba-2236d0f5e144 in datapath a76e0d7d-b563-4701-a0eb-f1be04e4d936 unbound from our chassis
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.529 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a76e0d7d-b563-4701-a0eb-f1be04e4d936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.531 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2664a898-64a9-4465-933d-256c803daa76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.531 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936 namespace which is not needed anymore
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.542 232432 DEBUG nova.virt.libvirt.vif [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-532174622',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-532174622',id=87,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:13Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd1d90fea8c547aea033c26b5dd1ccc2',ramdisk_id='',reservation_id='r-n79z4ec8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-806176304',owner_user_name='tempest-InstanceActionsV221TestJSON-806176304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:13Z,user_data=None,user_id='29543b27de044f598a4f01771690222b',uuid=5d5aff6c-84c4-4140-9d05-42bfc4dfab9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.543 232432 DEBUG nova.network.os_vif_util [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converting VIF {"id": "97402309-c430-4daf-89ba-2236d0f5e144", "address": "fa:16:3e:25:39:26", "network": {"id": "a76e0d7d-b563-4701-a0eb-f1be04e4d936", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1630635051-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd1d90fea8c547aea033c26b5dd1ccc2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97402309-c4", "ovs_interfaceid": "97402309-c430-4daf-89ba-2236d0f5e144", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.543 232432 DEBUG nova.network.os_vif_util [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.544 232432 DEBUG os_vif [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.545 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.546 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97402309-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.547 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.549 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.551 232432 INFO os_vif [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:39:26,bridge_name='br-int',has_traffic_filtering=True,id=97402309-c430-4daf-89ba-2236d0f5e144,network=Network(a76e0d7d-b563-4701-a0eb-f1be04e4d936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97402309-c4')
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.627 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [NOTICE]   (270186) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:15 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [NOTICE]   (270186) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:15 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [WARNING]  (270186) : Exiting Master process...
Nov 29 08:08:15 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [ALERT]    (270186) : Current worker (270188) exited with code 143 (Terminated)
Nov 29 08:08:15 compute-2 neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936[270182]: [WARNING]  (270186) : All workers exited. Exiting... (0)
Nov 29 08:08:15 compute-2 systemd[1]: libpod-370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4.scope: Deactivated successfully.
Nov 29 08:08:15 compute-2 podman[270272]: 2025-11-29 08:08:15.712494743 +0000 UTC m=+0.084374272 container died 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:08:15 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-128003bbfc3caea0d31013d8b3fe12d26060aef0e309a334b04b19b12b70c777-merged.mount: Deactivated successfully.
Nov 29 08:08:15 compute-2 podman[270272]: 2025-11-29 08:08:15.755409449 +0000 UTC m=+0.127288948 container cleanup 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:08:15 compute-2 systemd[1]: libpod-conmon-370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4.scope: Deactivated successfully.
Nov 29 08:08:15 compute-2 podman[270302]: 2025-11-29 08:08:15.884807103 +0000 UTC m=+0.097876132 container remove 370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.890 232432 DEBUG nova.compute.manager [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-unplugged-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.891 232432 DEBUG oslo_concurrency.lockutils [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.892 232432 DEBUG oslo_concurrency.lockutils [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.892 232432 DEBUG oslo_concurrency.lockutils [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.893 232432 DEBUG nova.compute.manager [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] No waiting events found dispatching network-vif-unplugged-97402309-c430-4daf-89ba-2236d0f5e144 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.893 232432 DEBUG nova.compute.manager [req-4ac6ea52-7488-4efc-bd63-b8a0c8cd911a req-23b19e4f-0fa5-4658-9b0a-90fe50781de6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-unplugged-97402309-c430-4daf-89ba-2236d0f5e144 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.893 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf8efbb-bd6c-45aa-942a-5e66b4ea1d26]: (4, ('Sat Nov 29 08:08:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936 (370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4)\n370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4\nSat Nov 29 08:08:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936 (370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4)\n370dca9c76c2741f18e695ca0e0b3ff1f860c1a2180fa109edc10dc0cc429dd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.896 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[043269ce-c14c-4fda-84fb-d0b2564855fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.897 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa76e0d7d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.899 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 kernel: tapa76e0d7d-b0: left promiscuous mode
Nov 29 08:08:15 compute-2 nova_compute[232428]: 2025-11-29 08:08:15.920 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.925 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0159d3f1-b404-4c5f-a8e7-643a0c0c967e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.945 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cb41af9f-2d9c-44d3-a87d-fe611a816f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.947 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50b0dffc-4be3-488c-9cbe-33708740a462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf556f1-9904-48b9-a8b7-bed286002306]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665760, 'reachable_time': 18051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270318, 'error': None, 'target': 'ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.969 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a76e0d7d-b563-4701-a0eb-f1be04e4d936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:15.969 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[9822b447-74c4-4aba-b708-4d05d5bbd126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:15 compute-2 systemd[1]: run-netns-ovnmeta\x2da76e0d7d\x2db563\x2d4701\x2da0eb\x2df1be04e4d936.mount: Deactivated successfully.
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.363 232432 INFO nova.virt.libvirt.driver [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Deleting instance files /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_del
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.364 232432 INFO nova.virt.libvirt.driver [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Deletion of /var/lib/nova/instances/5d5aff6c-84c4-4140-9d05-42bfc4dfab9b_del complete
Nov 29 08:08:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.817 232432 INFO nova.compute.manager [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Took 1.56 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.818 232432 DEBUG oslo.service.loopingcall [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.818 232432 DEBUG nova.compute.manager [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:16 compute-2 nova_compute[232428]: 2025-11-29 08:08:16.818 232432 DEBUG nova.network.neutron [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:17 compute-2 ceph-mon[77138]: pgmap v2026: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.572 232432 DEBUG nova.network.neutron [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.665 232432 DEBUG nova.compute.manager [req-d69cb8be-0da5-48a8-9924-02bc181bb541 req-091d9088-0829-4123-baf6-56eed6050c8c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-deleted-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.666 232432 INFO nova.compute.manager [req-d69cb8be-0da5-48a8-9924-02bc181bb541 req-091d9088-0829-4123-baf6-56eed6050c8c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Neutron deleted interface 97402309-c430-4daf-89ba-2236d0f5e144; detaching it from the instance and deleting it from the info cache
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.666 232432 DEBUG nova.network.neutron [req-d69cb8be-0da5-48a8-9924-02bc181bb541 req-091d9088-0829-4123-baf6-56eed6050c8c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.670 232432 INFO nova.compute.manager [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Took 0.85 seconds to deallocate network for instance.
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.698 232432 DEBUG nova.compute.manager [req-d69cb8be-0da5-48a8-9924-02bc181bb541 req-091d9088-0829-4123-baf6-56eed6050c8c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Detach interface failed, port_id=97402309-c430-4daf-89ba-2236d0f5e144, reason: Instance 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.828 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.829 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:17 compute-2 nova_compute[232428]: 2025-11-29 08:08:17.953 232432 DEBUG oslo_concurrency.processutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:18.003 143912 DEBUG eventlet.wsgi.server [-] (143912) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:18.004 143912 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: Accept: */*
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: Connection: close
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: Content-Type: text/plain
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: Host: 169.254.169.254
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: User-Agent: curl/7.84.0
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: X-Forwarded-For: 10.100.0.5
Nov 29 08:08:18 compute-2 ovn_metadata_agent[143796]: X-Ovn-Network-Id: e4f17807-9d16-4b74-9bf3-d79f60746fbd __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.022 232432 DEBUG nova.compute.manager [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.023 232432 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.024 232432 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.024 232432 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.025 232432 DEBUG nova.compute.manager [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] No waiting events found dispatching network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.025 232432 WARNING nova.compute.manager [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Received unexpected event network-vif-plugged-97402309-c430-4daf-89ba-2236d0f5e144 for instance with vm_state deleted and task_state None.
Nov 29 08:08:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1936587442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3645831843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.493 232432 DEBUG oslo_concurrency.processutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.500 232432 DEBUG nova.compute.provider_tree [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.616 232432 DEBUG nova.scheduler.client.report [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:18.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.804 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:18 compute-2 nova_compute[232428]: 2025-11-29 08:08:18.830 232432 INFO nova.scheduler.client.report [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Deleted allocations for instance 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b
Nov 29 08:08:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:19 compute-2 nova_compute[232428]: 2025-11-29 08:08:19.014 232432 DEBUG oslo_concurrency.lockutils [None req-e80730f0-b1eb-40e1-bc7c-fbc3828a92fb 29543b27de044f598a4f01771690222b bd1d90fea8c547aea033c26b5dd1ccc2 - - default default] Lock "5d5aff6c-84c4-4140-9d05-42bfc4dfab9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:19 compute-2 ceph-mon[77138]: pgmap v2027: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 163 op/s
Nov 29 08:08:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3645831843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:19.442 143912 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 29 08:08:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:19.443 143912 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 1.4387417
Nov 29 08:08:19 compute-2 haproxy-metadata-proxy-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269115]: 10.100.0.5:34330 [29/Nov/2025:08:08:18.001] listener listener/metadata 0/0/0/1441/1441 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 08:08:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/499904687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1066852043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.520 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.521 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.522 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.522 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.523 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.524 232432 INFO nova.compute.manager [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Terminating instance
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.526 232432 DEBUG nova.compute.manager [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 kernel: tap7a9fe153-f7 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.6926] device (tap7a9fe153-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00403|binding|INFO|Releasing lport 7a9fe153-f72b-4621-aee3-66b486bacae5 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00404|binding|INFO|Setting lport 7a9fe153-f72b-4621-aee3-66b486bacae5 down in Southbound
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.713 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00405|binding|INFO|Removing iface tap7a9fe153-f7 ovn-installed in OVS
Nov 29 08:08:20 compute-2 kernel: tapb15948c7-35 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.719 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.725 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:13:82 10.100.0.5'], port_security=['fa:16:3e:2b:13:82 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69307759-c430-40fb-9425-950cfd24a6e4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7a9fe153-f72b-4621-aee3-66b486bacae5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.7261] device (tapb15948c7-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.729 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7a9fe153-f72b-4621-aee3-66b486bacae5 in datapath e4f17807-9d16-4b74-9bf3-d79f60746fbd unbound from our chassis
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.733 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4f17807-9d16-4b74-9bf3-d79f60746fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.736 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[16774f7a-95d0-4926-9154-b7dc04d2ccbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.738 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd namespace which is not needed anymore
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00406|binding|INFO|Releasing lport b15948c7-35a3-4201-bceb-593c2b1c8704 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00407|binding|INFO|Setting lport b15948c7-35a3-4201-bceb-593c2b1c8704 down in Southbound
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00408|binding|INFO|Removing iface tapb15948c7-35 ovn-installed in OVS
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.755 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:95:2d 10.1.1.112'], port_security=['fa:16:3e:55:95:2d 10.1.1.112'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-2054854358', 'neutron:cidrs': '10.1.1.112/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-2054854358', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd283247-d149-4bbd-bedb-6a3a3aca31ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=b15948c7-35a3-4201-bceb-593c2b1c8704) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 kernel: tapd0984314-85 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.7762] device (tapd0984314-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.781 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 kernel: tap0ff1cfac-12 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00409|binding|INFO|Releasing lport d0984314-851d-451b-9277-1f0fc38d3c41 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00410|binding|INFO|Setting lport d0984314-851d-451b-9277-1f0fc38d3c41 down in Southbound
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00411|binding|INFO|Removing iface tapd0984314-85 ovn-installed in OVS
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.802 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.806 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:62:71 10.1.1.68'], port_security=['fa:16:3e:09:62:71 10.1.1.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1258835387', 'neutron:cidrs': '10.1.1.68/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1258835387', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd283247-d149-4bbd-bedb-6a3a3aca31ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d0984314-851d-451b-9277-1f0fc38d3c41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.8100] device (tap0ff1cfac-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 kernel: tapd82f4054-c2 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.8357] device (tapd82f4054-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 kernel: tap9bb68ec9-77 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00412|binding|INFO|Releasing lport 0ff1cfac-1292-47db-befc-e4a968bd8d13 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00413|binding|INFO|Setting lport 0ff1cfac-1292-47db-befc-e4a968bd8d13 down in Southbound
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00414|binding|INFO|Removing iface tap0ff1cfac-12 ovn-installed in OVS
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.8556] device (tap9bb68ec9-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.856 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.861 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:e8:12 10.1.1.54'], port_security=['fa:16:3e:cb:e8:12 10.1.1.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.54/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0ff1cfac-1292-47db-befc-e4a968bd8d13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 kernel: taped4379ad-b1 (unregistering): left promiscuous mode
Nov 29 08:08:20 compute-2 NetworkManager[48993]: <info>  [1764403700.8897] device (taped4379ad-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00415|binding|INFO|Releasing lport 9bb68ec9-77ee-431c-b89c-3384da9fa365 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00416|binding|INFO|Setting lport 9bb68ec9-77ee-431c-b89c-3384da9fa365 down in Southbound
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00417|binding|INFO|Releasing lport d82f4054-c2c2-4966-9ef6-c7ac320cd065 from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00418|binding|INFO|Setting lport d82f4054-c2c2-4966-9ef6-c7ac320cd065 down in Southbound
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00419|binding|INFO|Removing iface tapd82f4054-c2 ovn-installed in OVS
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00420|binding|INFO|Removing iface tap9bb68ec9-77 ovn-installed in OVS
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.914 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.919 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:58:a8 10.2.2.100'], port_security=['fa:16:3e:69:58:a8 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-375dc49d-ec99-4657-9ba6-74087890a298', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfd2343a-4417-4955-ae4f-9d95292d94dd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9bb68ec9-77ee-431c-b89c-3384da9fa365) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.921 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:6a:16 10.1.1.85'], port_security=['fa:16:3e:45:6a:16 10.1.1.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.85/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67053cf6-fb35-4a63-a633-2c66b5a66cc9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d82f4054-c2c2-4966-9ef6-c7ac320cd065) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:20 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [NOTICE]   (269113) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:20 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [NOTICE]   (269113) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:20 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [WARNING]  (269113) : Exiting Master process...
Nov 29 08:08:20 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [ALERT]    (269113) : Current worker (269115) exited with code 143 (Terminated)
Nov 29 08:08:20 compute-2 neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd[269109]: [WARNING]  (269113) : All workers exited. Exiting... (0)
Nov 29 08:08:20 compute-2 systemd[1]: libpod-63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a.scope: Deactivated successfully.
Nov 29 08:08:20 compute-2 podman[270383]: 2025-11-29 08:08:20.960307112 +0000 UTC m=+0.070294322 container died 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:08:20 compute-2 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 29 08:08:20 compute-2 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000053.scope: Consumed 18.537s CPU time.
Nov 29 08:08:20 compute-2 systemd-machined[194747]: Machine qemu-36-instance-00000053 terminated.
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.969 232432 DEBUG nova.compute.manager [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.970 232432 DEBUG oslo_concurrency.lockutils [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.971 232432 DEBUG oslo_concurrency.lockutils [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.971 232432 DEBUG oslo_concurrency.lockutils [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.971 232432 DEBUG nova.compute.manager [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-7a9fe153-f72b-4621-aee3-66b486bacae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.972 232432 DEBUG nova.compute.manager [req-1b5f72e4-ae5f-45d9-a730-8b33cb64bd0d req-2cf678df-4e54-4b89-85b6-5367182beb1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-7a9fe153-f72b-4621-aee3-66b486bacae5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.977 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00421|binding|INFO|Releasing lport ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb from this chassis (sb_readonly=0)
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00422|binding|INFO|Setting lport ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb down in Southbound
Nov 29 08:08:20 compute-2 ovn_controller[134375]: 2025-11-29T08:08:20Z|00423|binding|INFO|Removing iface taped4379ad-b1 ovn-installed in OVS
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 nova_compute[232428]: 2025-11-29 08:08:20.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:20.989 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:9b:c3 10.2.2.200'], port_security=['fa:16:3e:1c:9b:c3 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '9f70b4d6-e1a7-4709-8816-a19fb6569d7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-375dc49d-ec99-4657-9ba6-74087890a298', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e61a0774e90545289bd82e4a71650bde', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fb855c-4da6-41ee-be02-d5aa13343c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfd2343a-4417-4955-ae4f-9d95292d94dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-54fc6e6623b629fb022973799fe5bde61df969782d1ea27131c13047f276f34d-merged.mount: Deactivated successfully.
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.013 232432 DEBUG nova.compute.manager [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-b15948c7-35a3-4201-bceb-593c2b1c8704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.014 232432 DEBUG oslo_concurrency.lockutils [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.014 232432 DEBUG oslo_concurrency.lockutils [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.015 232432 DEBUG oslo_concurrency.lockutils [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.015 232432 DEBUG nova.compute.manager [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-b15948c7-35a3-4201-bceb-593c2b1c8704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.016 232432 DEBUG nova.compute.manager [req-1ac3a604-e988-494a-8113-851879628bba req-9b0ee927-b458-4fe5-9c51-b0bab93a2311 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-b15948c7-35a3-4201-bceb-593c2b1c8704 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 podman[270383]: 2025-11-29 08:08:21.017617088 +0000 UTC m=+0.127604298 container cleanup 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:08:21 compute-2 systemd[1]: libpod-conmon-63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a.scope: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270427]: 2025-11-29 08:08:21.106337483 +0000 UTC m=+0.052457565 container remove 63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.116 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[466315fe-c030-4836-87c2-6ba1fe65417b]: (4, ('Sat Nov 29 08:08:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd (63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a)\n63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a\nSat Nov 29 08:08:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd (63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a)\n63859c8bcabd401cc605f861a6d2ac096152d35c50bc62be37a925a72b0faf5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.119 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6e86d413-917a-4355-9823-e4b19ccbc91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.120 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4f17807-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 kernel: tape4f17807-90: left promiscuous mode
Nov 29 08:08:21 compute-2 NetworkManager[48993]: <info>  [1764403701.1556] manager: (tap7a9fe153-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.177 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 NetworkManager[48993]: <info>  [1764403701.1784] manager: (tapb15948c7-35): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.182 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12e6c4cc-57e0-4e67-9b23-50d40b992b5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.197 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[16e250b1-47b7-48ae-a703-8f4264f9737d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.199 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[51610b7f-4409-4f00-a99b-46eaf6990773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 NetworkManager[48993]: <info>  [1764403701.2118] manager: (tap0ff1cfac-12): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 08:08:21 compute-2 NetworkManager[48993]: <info>  [1764403701.2242] manager: (tapd82f4054-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.239 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1b4d95-8f17-4bd9-895c-02d1db2ef423]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662352, 'reachable_time': 16387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270472, 'error': None, 'target': 'ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.243 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4f17807-9d16-4b74-9bf3-d79f60746fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:21 compute-2 systemd[1]: run-netns-ovnmeta\x2de4f17807\x2d9d16\x2d4b74\x2d9bf3\x2dd79f60746fbd.mount: Deactivated successfully.
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.243 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[759cd9b4-1aa4-4d6a-b594-94c491717795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.245 143801 INFO neutron.agent.ovn.metadata.agent [-] Port b15948c7-35a3-4201-bceb-593c2b1c8704 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.246 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74ca2dd9-be23-4b0d-bbc1-976490587a78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.248 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[442e9812-1051-4eb0-8dc7-fa33946377e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.249 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78 namespace which is not needed anymore
Nov 29 08:08:21 compute-2 NetworkManager[48993]: <info>  [1764403701.2536] manager: (taped4379ad-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.275 232432 INFO nova.virt.libvirt.driver [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Instance destroyed successfully.
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.276 232432 DEBUG nova.objects.instance [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lazy-loading 'resources' on Instance uuid 9f70b4d6-e1a7-4709-8816-a19fb6569d7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.291 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bu
s='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.291 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "7a9fe153-f72b-4621-aee3-66b486bacae5", "address": "fa:16:3e:2b:13:82", "network": {"id": "e4f17807-9d16-4b74-9bf3-d79f60746fbd", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1769170953-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a9fe153-f7", "ovs_interfaceid": "7a9fe153-f72b-4621-aee3-66b486bacae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.292 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.293 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.297 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a9fe153-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.299 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 ceph-mon[77138]: pgmap v2028: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.324 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.329 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:13:82,bridge_name='br-int',has_traffic_filtering=True,id=7a9fe153-f72b-4621-aee3-66b486bacae5,network=Network(e4f17807-9d16-4b74-9bf3-d79f60746fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a9fe153-f7')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.330 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bu
s='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.331 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.331 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.332 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.333 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb15948c7-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.357 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:95:2d,bridge_name='br-int',has_traffic_filtering=True,id=b15948c7-35a3-4201-bceb-593c2b1c8704,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15948c7-35')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.358 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bu
s='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.358 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.359 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.359 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.361 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0984314-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.375 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.378 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:62:71,bridge_name='br-int',has_traffic_filtering=True,id=d0984314-851d-451b-9277-1f0fc38d3c41,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0984314-85')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.379 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.380 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.380 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.381 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.384 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.384 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff1cfac-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.386 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.388 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.398 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.402 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:e8:12,bridge_name='br-int',has_traffic_filtering=True,id=0ff1cfac-1292-47db-befc-e4a968bd8d13,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ff1cfac-12')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.403 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.403 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [NOTICE]   (269196) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [NOTICE]   (269196) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [WARNING]  (269196) : Exiting Master process...
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [WARNING]  (269196) : Exiting Master process...
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.404 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.405 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [ALERT]    (269196) : Current worker (269206) exited with code 143 (Terminated)
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78[269184]: [WARNING]  (269196) : All workers exited. Exiting... (0)
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.407 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.407 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd82f4054-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.409 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 systemd[1]: libpod-613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2.scope: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270551]: 2025-11-29 08:08:21.415638493 +0000 UTC m=+0.045207979 container died 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.420 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.424 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:6a:16,bridge_name='br-int',has_traffic_filtering=True,id=d82f4054-c2c2-4966-9ef6-c7ac320cd065,network=Network(74ca2dd9-be23-4b0d-bbc1-976490587a78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd82f4054-c2')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.426 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.426 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.427 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.427 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.429 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.429 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bb68ec9-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.433 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.439 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.440 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:58:a8,bridge_name='br-int',has_traffic_filtering=True,id=9bb68ec9-77ee-431c-b89c-3384da9fa365,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bb68ec9-77')
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.441 232432 DEBUG nova.virt.libvirt.vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1544099134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1544099134',id=83,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7piip/bHfelSt60kbFjXyu6AffytQ8S4F/UHaFfBpkzmeU9MqpspTfTFY4Apcs8zhVzIUb95RGWcFTIPh3y2m3/tcri4HordSAfR5LzcQJe4UJLNq5ZrsLFzIGYHIsfw==',key_name='tempest-keypair-129955961',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e61a0774e90545289bd82e4a71650bde',ramdisk_id='',reservation_id='r-t5hcqchc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-307299721',owner_user_name='tempest-TaggedBootDevicesTest_v242-307299721-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5c9a929d4b248288b84a67f96ca500d',uuid=9f70b4d6-e1a7-4709-8816-a19fb6569d7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.441 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converting VIF {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.442 232432 DEBUG nova.network.os_vif_util [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.442 232432 DEBUG os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.444 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped4379ad-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.446 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.447 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.448 232432 INFO os_vif [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:9b:c3,bridge_name='br-int',has_traffic_filtering=True,id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb,network=Network(375dc49d-ec99-4657-9ba6-74087890a298),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped4379ad-b1')
Nov 29 08:08:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.456 232432 DEBUG nova.compute.manager [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.456 232432 DEBUG oslo_concurrency.lockutils [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.457 232432 DEBUG oslo_concurrency.lockutils [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.457 232432 DEBUG oslo_concurrency.lockutils [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.457 232432 DEBUG nova.compute.manager [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.457 232432 DEBUG nova.compute.manager [req-9ed2d4d3-7673-4bf6-82af-47238d43b4ef req-353e7cbe-ec26-4796-b39b-4c9f15404ec7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-c4cdc3afba38f552c060190d7c9e6dd32dcb5df0aa9fc14eed95235ae2f6a21a-merged.mount: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270551]: 2025-11-29 08:08:21.46716762 +0000 UTC m=+0.096737086 container cleanup 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:08:21 compute-2 systemd[1]: libpod-conmon-613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2.scope: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270599]: 2025-11-29 08:08:21.542222319 +0000 UTC m=+0.048750260 container remove 613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.549 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0a143879-c7aa-4c79-81af-a55cbeee4adc]: (4, ('Sat Nov 29 08:08:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78 (613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2)\n613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2\nSat Nov 29 08:08:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78 (613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2)\n613c77c4989e0d69be867c8088ae397b8ddf243b7214c17278ae8d45df91c3b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.552 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ccc573-986c-43de-810e-246f8cb0a8f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.553 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74ca2dd9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 kernel: tap74ca2dd9-b0: left promiscuous mode
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.573 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d129104d-06b0-42e5-9fe9-0ba585790674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.590 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc1c5c1-e7e9-4943-8996-3bc617ffd345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.592 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[30dfb595-f399-4fa8-a9f2-d2fc9d7a2306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.614 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50ee1c42-05c8-4852-8d60-a759b8496297]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662450, 'reachable_time': 43619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270625, 'error': None, 'target': 'ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.618 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74ca2dd9-be23-4b0d-bbc1-976490587a78 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.618 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a95ba412-0a50-43e1-a702-0ba157a578d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.620 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d0984314-851d-451b-9277-1f0fc38d3c41 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.623 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74ca2dd9-be23-4b0d-bbc1-976490587a78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.624 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[665226ed-dcf6-40f5-b8bc-903c9dd16ab1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.625 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0ff1cfac-1292-47db-befc-e4a968bd8d13 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.627 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74ca2dd9-be23-4b0d-bbc1-976490587a78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.629 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cef23b3c-945b-401b-aa95-aa22667162ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.630 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9bb68ec9-77ee-431c-b89c-3384da9fa365 in datapath 375dc49d-ec99-4657-9ba6-74087890a298 unbound from our chassis
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.632 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 375dc49d-ec99-4657-9ba6-74087890a298, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.633 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[52964f09-4816-4e19-b63b-ba46bb75fd1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.634 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298 namespace which is not needed anymore
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.725 232432 INFO nova.virt.libvirt.driver [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Deleting instance files /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c_del
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.726 232432 INFO nova.virt.libvirt.driver [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Deletion of /var/lib/nova/instances/9f70b4d6-e1a7-4709-8816-a19fb6569d7c_del complete
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [NOTICE]   (269290) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [NOTICE]   (269290) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [WARNING]  (269290) : Exiting Master process...
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [ALERT]    (269290) : Current worker (269292) exited with code 143 (Terminated)
Nov 29 08:08:21 compute-2 neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298[269286]: [WARNING]  (269290) : All workers exited. Exiting... (0)
Nov 29 08:08:21 compute-2 systemd[1]: libpod-0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b.scope: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270644]: 2025-11-29 08:08:21.788152444 +0000 UTC m=+0.052140106 container died 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.804 232432 INFO nova.compute.manager [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Took 1.28 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.805 232432 DEBUG oslo.service.loopingcall [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.805 232432 DEBUG nova.compute.manager [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.806 232432 DEBUG nova.network.neutron [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:21 compute-2 podman[270644]: 2025-11-29 08:08:21.83035861 +0000 UTC m=+0.094346272 container cleanup 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:08:21 compute-2 systemd[1]: libpod-conmon-0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b.scope: Deactivated successfully.
Nov 29 08:08:21 compute-2 podman[270675]: 2025-11-29 08:08:21.919049255 +0000 UTC m=+0.058268258 container remove 0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.930 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4aeec887-2d67-4310-b4bd-45ba42af7abb]: (4, ('Sat Nov 29 08:08:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298 (0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b)\n0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b\nSat Nov 29 08:08:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298 (0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b)\n0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.933 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[69a92be8-d274-44fa-a752-32cbbe6df892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.934 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap375dc49d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.940 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 kernel: tap375dc49d-e0: left promiscuous mode
Nov 29 08:08:21 compute-2 nova_compute[232428]: 2025-11-29 08:08:21.959 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7326e0f4-f818-45dc-9118-6572ffbac6be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.989 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[667f8dfa-cc8f-4093-b101-f3e8338c7dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:21.990 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7acd0c98-940d-484f-9533-14041e148203]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-60a8c9db6d1c4a81ee7bf3f8ae7119ef75a19cda28d84ad38bcda58fde953aee-merged.mount: Deactivated successfully.
Nov 29 08:08:22 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0524df9241288f437311117bbb3d8186b33ba172ef7d9acf2080730f0806115b-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:22 compute-2 systemd[1]: run-netns-ovnmeta\x2d74ca2dd9\x2dbe23\x2d4b0d\x2dbbc1\x2d976490587a78.mount: Deactivated successfully.
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.016 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef40595-2f88-443f-8899-2daf4f26bc44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662548, 'reachable_time': 34031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270690, 'error': None, 'target': 'ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.019 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-375dc49d-ec99-4657-9ba6-74087890a298 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.020 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1ff109-28f5-4e9f-9f90-d8eb05868005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.021 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d82f4054-c2c2-4966-9ef6-c7ac320cd065 in datapath 74ca2dd9-be23-4b0d-bbc1-976490587a78 unbound from our chassis
Nov 29 08:08:22 compute-2 systemd[1]: run-netns-ovnmeta\x2d375dc49d\x2dec99\x2d4657\x2d9ba6\x2d74087890a298.mount: Deactivated successfully.
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.024 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74ca2dd9-be23-4b0d-bbc1-976490587a78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.025 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7da1b67b-89a8-4006-9251-37507033aab6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.026 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb in datapath 375dc49d-ec99-4657-9ba6-74087890a298 unbound from our chassis
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.029 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 375dc49d-ec99-4657-9ba6-74087890a298, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:22.030 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df2f3381-03a8-4743-a85a-d2bb8374e96b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:22.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.120 232432 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.120 232432 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.120 232432 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.120 232432 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.120 232432 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.121 232432 WARNING nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-7a9fe153-f72b-4621-aee3-66b486bacae5 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.123 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.123 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.123 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.123 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.123 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 WARNING nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-b15948c7-35a3-4201-bceb-593c2b1c8704 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-d0984314-851d-451b-9277-1f0fc38d3c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-d0984314-851d-451b-9277-1f0fc38d3c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.124 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-d0984314-851d-451b-9277-1f0fc38d3c41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 WARNING nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-d0984314-851d-451b-9277-1f0fc38d3c41 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.125 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.126 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.127 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.127 232432 DEBUG oslo_concurrency.lockutils [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.127 232432 DEBUG nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.127 232432 WARNING nova.compute.manager [req-c455f40e-0983-425e-a431-58ccc56f310f req-99b0ee25-89d7-4fed-8f55-c912e858f429 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-d82f4054-c2c2-4966-9ef6-c7ac320cd065 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 ceph-mon[77138]: pgmap v2029: 305 pgs: 305 active+clean; 247 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.590 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.590 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.591 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.591 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.592 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.592 232432 WARNING nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-0ff1cfac-1292-47db-befc-e4a968bd8d13 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.592 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.592 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.593 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.593 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.593 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.594 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.594 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.594 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.594 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.595 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.595 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.595 232432 WARNING nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-9bb68ec9-77ee-431c-b89c-3384da9fa365 for instance with vm_state active and task_state deleting.
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.595 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.596 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.596 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.596 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.596 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-unplugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.596 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-unplugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.597 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.597 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.597 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.597 232432 DEBUG oslo_concurrency.lockutils [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.598 232432 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] No waiting events found dispatching network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:23 compute-2 nova_compute[232428]: 2025-11-29 08:08:23.598 232432 WARNING nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received unexpected event network-vif-plugged-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb for instance with vm_state active and task_state deleting.
Nov 29 08:08:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:24 compute-2 nova_compute[232428]: 2025-11-29 08:08:24.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:24.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:25 compute-2 nova_compute[232428]: 2025-11-29 08:08:25.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:25 compute-2 ceph-mon[77138]: pgmap v2030: 305 pgs: 305 active+clean; 247 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 08:08:25 compute-2 nova_compute[232428]: 2025-11-29 08:08:25.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:25 compute-2 podman[270694]: 2025-11-29 08:08:25.766191185 +0000 UTC m=+0.148483718 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:08:26 compute-2 nova_compute[232428]: 2025-11-29 08:08:26.447 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:26.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:27 compute-2 ceph-mon[77138]: pgmap v2031: 305 pgs: 305 active+clean; 247 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 205 op/s
Nov 29 08:08:27 compute-2 nova_compute[232428]: 2025-11-29 08:08:27.421 232432 DEBUG nova.compute.manager [req-be9422a2-a91c-4f5f-99f8-1da18cb70af7 req-6325b0e0-06a3-4453-9f83-2c1bdbfdf897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-deleted-7a9fe153-f72b-4621-aee3-66b486bacae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:27 compute-2 nova_compute[232428]: 2025-11-29 08:08:27.422 232432 INFO nova.compute.manager [req-be9422a2-a91c-4f5f-99f8-1da18cb70af7 req-6325b0e0-06a3-4453-9f83-2c1bdbfdf897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Neutron deleted interface 7a9fe153-f72b-4621-aee3-66b486bacae5; detaching it from the instance and deleting it from the info cache
Nov 29 08:08:27 compute-2 nova_compute[232428]: 2025-11-29 08:08:27.422 232432 DEBUG nova.network.neutron [req-be9422a2-a91c-4f5f-99f8-1da18cb70af7 req-6325b0e0-06a3-4453-9f83-2c1bdbfdf897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "address": "fa:16:3e:1c:9b:c3", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped4379ad-b1", "ovs_interfaceid": "ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:27 compute-2 nova_compute[232428]: 2025-11-29 08:08:27.450 232432 DEBUG nova.compute.manager [req-be9422a2-a91c-4f5f-99f8-1da18cb70af7 req-6325b0e0-06a3-4453-9f83-2c1bdbfdf897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Detach interface failed, port_id=7a9fe153-f72b-4621-aee3-66b486bacae5, reason: Instance 9f70b4d6-e1a7-4709-8816-a19fb6569d7c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:08:27 compute-2 sudo[270721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:27 compute-2 sudo[270721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:27 compute-2 sudo[270721]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:27 compute-2 sudo[270746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:27 compute-2 sudo[270746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:27 compute-2 sudo[270746]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2609822175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2609822175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:28.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:28.837 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:28 compute-2 nova_compute[232428]: 2025-11-29 08:08:28.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:28.841 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:08:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.307 232432 DEBUG nova.compute.manager [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 08:08:29 compute-2 ceph-mon[77138]: pgmap v2032: 305 pgs: 305 active+clean; 247 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 439 KiB/s wr, 121 op/s
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.465 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.466 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.501 232432 DEBUG nova.objects.instance [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.527 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.528 232432 INFO nova.compute.claims [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.528 232432 DEBUG nova.objects.instance [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.567 232432 DEBUG nova.objects.instance [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.638 232432 DEBUG nova.compute.manager [req-a695e2a5-185e-427e-80d6-eca36a1c0a2f req-22fcdd60-658d-41d7-b677-ac930915ddb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-deleted-ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.639 232432 INFO nova.compute.manager [req-a695e2a5-185e-427e-80d6-eca36a1c0a2f req-22fcdd60-658d-41d7-b677-ac930915ddb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Neutron deleted interface ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb; detaching it from the instance and deleting it from the info cache
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.640 232432 DEBUG nova.network.neutron [req-a695e2a5-185e-427e-80d6-eca36a1c0a2f req-22fcdd60-658d-41d7-b677-ac930915ddb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [{"id": "b15948c7-35a3-4201-bceb-593c2b1c8704", "address": "fa:16:3e:55:95:2d", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.112", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15948c7-35", "ovs_interfaceid": "b15948c7-35a3-4201-bceb-593c2b1c8704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d0984314-851d-451b-9277-1f0fc38d3c41", "address": "fa:16:3e:09:62:71", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0984314-85", "ovs_interfaceid": "d0984314-851d-451b-9277-1f0fc38d3c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "address": "fa:16:3e:cb:e8:12", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ff1cfac-12", "ovs_interfaceid": "0ff1cfac-1292-47db-befc-e4a968bd8d13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "address": "fa:16:3e:45:6a:16", "network": {"id": "74ca2dd9-be23-4b0d-bbc1-976490587a78", "bridge": "br-int", "label": "tempest-device-tagging-net1-1293951608", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd82f4054-c2", "ovs_interfaceid": "d82f4054-c2c2-4966-9ef6-c7ac320cd065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "address": "fa:16:3e:69:58:a8", "network": {"id": "375dc49d-ec99-4657-9ba6-74087890a298", "bridge": "br-int", "label": "tempest-device-tagging-net2-803831650", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e61a0774e90545289bd82e4a71650bde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bb68ec9-77", "ovs_interfaceid": "9bb68ec9-77ee-431c-b89c-3384da9fa365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.770 232432 INFO nova.compute.resource_tracker [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating resource usage from migration 58f403b9-08e9-4f12-8ffa-9a463890ee42
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.771 232432 DEBUG nova.compute.resource_tracker [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Starting to track incoming migration 58f403b9-08e9-4f12-8ffa-9a463890ee42 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.775 232432 DEBUG nova.compute.manager [req-a695e2a5-185e-427e-80d6-eca36a1c0a2f req-22fcdd60-658d-41d7-b677-ac930915ddb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Detach interface failed, port_id=ed4379ad-b1b5-45f9-b1a8-8531d61a3cbb, reason: Instance 9f70b4d6-e1a7-4709-8816-a19fb6569d7c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:08:29 compute-2 nova_compute[232428]: 2025-11-29 08:08:29.954 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3379344817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/612719578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.468 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.479 232432 DEBUG nova.compute.provider_tree [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.497 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403695.497046, 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.498 232432 INFO nova.compute.manager [-] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] VM Stopped (Lifecycle Event)
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.511 232432 DEBUG nova.scheduler.client.report [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.517 232432 DEBUG nova.compute.manager [None req-e4d07076-bf8c-4050-aeb7-a6beefd84161 - - - - - -] [instance: 5d5aff6c-84c4-4140-9d05-42bfc4dfab9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.532 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.533 232432 INFO nova.compute.manager [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Migrating
Nov 29 08:08:30 compute-2 nova_compute[232428]: 2025-11-29 08:08:30.635 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:30.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.242 232432 DEBUG nova.network.neutron [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.268 232432 INFO nova.compute.manager [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Took 9.46 seconds to deallocate network for instance.
Nov 29 08:08:31 compute-2 ceph-mon[77138]: pgmap v2033: 305 pgs: 305 active+clean; 230 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 439 KiB/s wr, 144 op/s
Nov 29 08:08:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/612719578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.450 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.810 232432 DEBUG nova.compute.manager [req-490e9c93-19a8-4dda-92cf-36471f192f19 req-4ce86fff-8cc7-4c17-993e-175b6471ab32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-deleted-d82f4054-c2c2-4966-9ef6-c7ac320cd065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.811 232432 DEBUG nova.compute.manager [req-490e9c93-19a8-4dda-92cf-36471f192f19 req-4ce86fff-8cc7-4c17-993e-175b6471ab32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-deleted-9bb68ec9-77ee-431c-b89c-3384da9fa365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:31 compute-2 nova_compute[232428]: 2025-11-29 08:08:31.812 232432 DEBUG nova.compute.manager [req-490e9c93-19a8-4dda-92cf-36471f192f19 req-4ce86fff-8cc7-4c17-993e-175b6471ab32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Received event network-vif-deleted-0ff1cfac-1292-47db-befc-e4a968bd8d13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:31.844 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.070 232432 INFO nova.compute.manager [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Took 0.80 seconds to detach 3 volumes for instance.
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.122 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.123 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.230 232432 DEBUG oslo_concurrency.processutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:32 compute-2 sshd-session[270797]: Accepted publickey for nova from 192.168.122.101 port 55670 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 08:08:32 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 08:08:32 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 08:08:32 compute-2 systemd-logind[787]: New session 57 of user nova.
Nov 29 08:08:32 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 08:08:32 compute-2 systemd[1]: Starting User Manager for UID 42436...
Nov 29 08:08:32 compute-2 systemd[270820]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:08:32 compute-2 sshd-session[270795]: Connection closed by 220.250.59.155 port 52140
Nov 29 08:08:32 compute-2 systemd[270820]: Queued start job for default target Main User Target.
Nov 29 08:08:32 compute-2 systemd[270820]: Created slice User Application Slice.
Nov 29 08:08:32 compute-2 systemd[270820]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 08:08:32 compute-2 systemd[270820]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 08:08:32 compute-2 systemd[270820]: Reached target Paths.
Nov 29 08:08:32 compute-2 systemd[270820]: Reached target Timers.
Nov 29 08:08:32 compute-2 systemd[270820]: Starting D-Bus User Message Bus Socket...
Nov 29 08:08:32 compute-2 systemd[270820]: Starting Create User's Volatile Files and Directories...
Nov 29 08:08:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:32.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:32 compute-2 systemd[270820]: Listening on D-Bus User Message Bus Socket.
Nov 29 08:08:32 compute-2 systemd[270820]: Reached target Sockets.
Nov 29 08:08:32 compute-2 systemd[270820]: Finished Create User's Volatile Files and Directories.
Nov 29 08:08:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/39211276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:32 compute-2 systemd[270820]: Reached target Basic System.
Nov 29 08:08:32 compute-2 systemd[270820]: Reached target Main User Target.
Nov 29 08:08:32 compute-2 systemd[270820]: Startup finished in 206ms.
Nov 29 08:08:32 compute-2 systemd[1]: Started User Manager for UID 42436.
Nov 29 08:08:32 compute-2 systemd[1]: Started Session 57 of User nova.
Nov 29 08:08:32 compute-2 sshd-session[270797]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.786 232432 DEBUG oslo_concurrency.processutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.801 232432 DEBUG nova.compute.provider_tree [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.822 232432 DEBUG nova.scheduler.client.report [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:32 compute-2 sshd-session[270837]: Received disconnect from 192.168.122.101 port 55670:11: disconnected by user
Nov 29 08:08:32 compute-2 sshd-session[270837]: Disconnected from user nova 192.168.122.101 port 55670
Nov 29 08:08:32 compute-2 sshd-session[270797]: pam_unix(sshd:session): session closed for user nova
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.858 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:32 compute-2 systemd[1]: session-57.scope: Deactivated successfully.
Nov 29 08:08:32 compute-2 systemd-logind[787]: Session 57 logged out. Waiting for processes to exit.
Nov 29 08:08:32 compute-2 systemd-logind[787]: Removed session 57.
Nov 29 08:08:32 compute-2 nova_compute[232428]: 2025-11-29 08:08:32.929 232432 INFO nova.scheduler.client.report [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Deleted allocations for instance 9f70b4d6-e1a7-4709-8816-a19fb6569d7c
Nov 29 08:08:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:33 compute-2 nova_compute[232428]: 2025-11-29 08:08:33.021 232432 DEBUG oslo_concurrency.lockutils [None req-cdd93ece-b33f-43be-ab3c-f1477b556cee f5c9a929d4b248288b84a67f96ca500d e61a0774e90545289bd82e4a71650bde - - default default] Lock "9f70b4d6-e1a7-4709-8816-a19fb6569d7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:33 compute-2 sshd-session[270839]: Accepted publickey for nova from 192.168.122.101 port 55684 ssh2: ECDSA SHA256:RWhQOD4fQeK3z0Y87ncOBQfqA+HTfmlAKq/ERvgvDy8
Nov 29 08:08:33 compute-2 systemd-logind[787]: New session 59 of user nova.
Nov 29 08:08:33 compute-2 systemd[1]: Started Session 59 of User nova.
Nov 29 08:08:33 compute-2 sshd-session[270839]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Nov 29 08:08:33 compute-2 sshd-session[270842]: Received disconnect from 192.168.122.101 port 55684:11: disconnected by user
Nov 29 08:08:33 compute-2 sshd-session[270842]: Disconnected from user nova 192.168.122.101 port 55684
Nov 29 08:08:33 compute-2 sshd-session[270839]: pam_unix(sshd:session): session closed for user nova
Nov 29 08:08:33 compute-2 systemd[1]: session-59.scope: Deactivated successfully.
Nov 29 08:08:33 compute-2 systemd-logind[787]: Session 59 logged out. Waiting for processes to exit.
Nov 29 08:08:33 compute-2 systemd-logind[787]: Removed session 59.
Nov 29 08:08:33 compute-2 ceph-mon[77138]: pgmap v2034: 305 pgs: 305 active+clean; 167 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 139 op/s
Nov 29 08:08:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/39211276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:34.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:35.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:35 compute-2 ceph-mon[77138]: pgmap v2035: 305 pgs: 305 active+clean; 167 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 114 op/s
Nov 29 08:08:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1093215457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1093215457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:35 compute-2 nova_compute[232428]: 2025-11-29 08:08:35.640 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:36 compute-2 nova_compute[232428]: 2025-11-29 08:08:36.272 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403701.2708204, 9f70b4d6-e1a7-4709-8816-a19fb6569d7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:36 compute-2 nova_compute[232428]: 2025-11-29 08:08:36.273 232432 INFO nova.compute.manager [-] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] VM Stopped (Lifecycle Event)
Nov 29 08:08:36 compute-2 nova_compute[232428]: 2025-11-29 08:08:36.299 232432 DEBUG nova.compute.manager [None req-fface384-6cb3-40aa-90e6-a7dc4ee9b8fb - - - - - -] [instance: 9f70b4d6-e1a7-4709-8816-a19fb6569d7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:36 compute-2 nova_compute[232428]: 2025-11-29 08:08:36.452 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:36.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:37 compute-2 ceph-mon[77138]: pgmap v2036: 305 pgs: 305 active+clean; 187 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Nov 29 08:08:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:38.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:39.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:39 compute-2 ceph-mon[77138]: pgmap v2037: 305 pgs: 305 active+clean; 187 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 08:08:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:40 compute-2 nova_compute[232428]: 2025-11-29 08:08:40.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:40 compute-2 podman[270848]: 2025-11-29 08:08:40.685716363 +0000 UTC m=+0.081410519 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:08:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39327313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39327313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:40.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:41.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:41 compute-2 nova_compute[232428]: 2025-11-29 08:08:41.470 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:41 compute-2 ceph-mon[77138]: pgmap v2038: 305 pgs: 305 active+clean; 195 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 2.0 MiB/s wr, 105 op/s
Nov 29 08:08:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/39327313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/39327313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:41 compute-2 nova_compute[232428]: 2025-11-29 08:08:41.952 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:41 compute-2 nova_compute[232428]: 2025-11-29 08:08:41.953 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:41 compute-2 nova_compute[232428]: 2025-11-29 08:08:41.973 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:41 compute-2 nova_compute[232428]: 2025-11-29 08:08:41.974 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2489168676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2489168676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.307 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.307 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.330 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.463 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.464 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.476 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.477 232432 INFO nova.compute.claims [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:08:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2489168676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2489168676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:42 compute-2 nova_compute[232428]: 2025-11-29 08:08:42.704 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:42.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:43.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/828844777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.254 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.264 232432 DEBUG nova.compute.provider_tree [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.290 232432 DEBUG nova.scheduler.client.report [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:43 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 08:08:43 compute-2 systemd[270820]: Activating special unit Exit the Session...
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped target Main User Target.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped target Basic System.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped target Paths.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped target Sockets.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped target Timers.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 08:08:43 compute-2 systemd[270820]: Closed D-Bus User Message Bus Socket.
Nov 29 08:08:43 compute-2 systemd[270820]: Stopped Create User's Volatile Files and Directories.
Nov 29 08:08:43 compute-2 systemd[270820]: Removed slice User Application Slice.
Nov 29 08:08:43 compute-2 systemd[270820]: Reached target Shutdown.
Nov 29 08:08:43 compute-2 systemd[270820]: Finished Exit the Session.
Nov 29 08:08:43 compute-2 systemd[270820]: Reached target Exit the Session.
Nov 29 08:08:43 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 08:08:43 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.349 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.350 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:08:43 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 08:08:43 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 08:08:43 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 08:08:43 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 08:08:43 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.430 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.431 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.470 232432 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.492 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:08:43 compute-2 ceph-mon[77138]: pgmap v2039: 305 pgs: 305 active+clean; 200 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 08:08:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3296093370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/828844777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3823946930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3823946930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.612 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.614 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.615 232432 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Creating image(s)
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.663 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.714 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.769 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.776 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.826 232432 DEBUG nova.policy [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8306d30b5b844909866bec7b9c8242d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e860226190f4eb8971376b16032da1b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.884 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.885 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.886 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.887 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.944 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-2 nova_compute[232428]: 2025-11-29 08:08:43.953 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.533 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.664 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] resizing rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:08:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:44.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.839 232432 DEBUG nova.objects.instance [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'migration_context' on Instance uuid 7c7c7782-a5d2-4ccb-8023-a36759a6da8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.859 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.860 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Ensure instance console log exists: /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.861 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.861 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:44 compute-2 nova_compute[232428]: 2025-11-29 08:08:44.862 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:08:45 compute-2 ceph-mon[77138]: pgmap v2040: 305 pgs: 305 active+clean; 200 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 29 08:08:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3314600850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.646 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:45 compute-2 podman[271061]: 2025-11-29 08:08:45.735264862 +0000 UTC m=+0.128056983 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.813 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Successfully created port: 22393dc1-f71d-4c22-9057-03fb9e6e359e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.972 232432 DEBUG nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.973 232432 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.973 232432 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.974 232432 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.974 232432 DEBUG nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:45 compute-2 nova_compute[232428]: 2025-11-29 08:08:45.974 232432 WARNING nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_migrating.
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.516 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/609851083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.736 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Successfully updated port: 22393dc1-f71d-4c22-9057-03fb9e6e359e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:08:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:46.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.762 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.762 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquired lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.762 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:46 compute-2 nova_compute[232428]: 2025-11-29 08:08:46.987 232432 INFO nova.network.neutron [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating port ccf625f8-471d-4406-9844-a3872b34137c with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 08:08:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:47.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:47 compute-2 nova_compute[232428]: 2025-11-29 08:08:47.048 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:08:47 compute-2 ceph-mon[77138]: pgmap v2041: 305 pgs: 305 active+clean; 217 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 4.9 MiB/s wr, 173 op/s
Nov 29 08:08:47 compute-2 sudo[271080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:47 compute-2 sudo[271080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:47 compute-2 sudo[271080]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:47 compute-2 sudo[271105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:47 compute-2 sudo[271105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:47 compute-2 sudo[271105]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.062 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.063 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.063 232432 DEBUG nova.network.neutron [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.128 232432 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.129 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.129 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.130 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.130 232432 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.131 232432 WARNING nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_migrated.
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.131 232432 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-changed-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.132 232432 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Refreshing instance network info cache due to event network-changed-22393dc1-f71d-4c22-9057-03fb9e6e359e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.132 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.365 232432 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Updating instance_info_cache with network_info: [{"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.391 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Releasing lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.391 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Instance network_info: |[{"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.392 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.393 232432 DEBUG nova.network.neutron [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Refreshing network info cache for port 22393dc1-f71d-4c22-9057-03fb9e6e359e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.398 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Start _get_guest_xml network_info=[{"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.407 232432 WARNING nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.417 232432 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.419 232432 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.432 232432 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.433 232432 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.436 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.436 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.437 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.438 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.438 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.439 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.439 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.440 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.440 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.441 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.441 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.442 232432 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.447 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2257482976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2185130790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2658750865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.951 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.982 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:48 compute-2 nova_compute[232428]: 2025-11-29 08:08:48.986 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3064714241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.467 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.469 232432 DEBUG nova.virt.libvirt.vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-1',id=89,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCr
eateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:43Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=7c7c7782-a5d2-4ccb-8023-a36759a6da8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.469 232432 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.470 232432 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.472 232432 DEBUG nova.objects.instance [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c7c7782-a5d2-4ccb-8023-a36759a6da8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.528 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <uuid>7c7c7782-a5d2-4ccb-8023-a36759a6da8e</uuid>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <name>instance-00000059</name>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:name>tempest-tempest.common.compute-instance-1330031541-1</nova:name>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:08:48</nova:creationTime>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:user uuid="f8306d30b5b844909866bec7b9c8242d">tempest-MultipleCreateTestJSON-36900569-project-member</nova:user>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:project uuid="8e860226190f4eb8971376b16032da1b">tempest-MultipleCreateTestJSON-36900569</nova:project>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <nova:port uuid="22393dc1-f71d-4c22-9057-03fb9e6e359e">
Nov 29 08:08:49 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <system>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="serial">7c7c7782-a5d2-4ccb-8023-a36759a6da8e</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="uuid">7c7c7782-a5d2-4ccb-8023-a36759a6da8e</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </system>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <os>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </os>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <features>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </features>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk">
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config">
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:49 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:df:5e:3d"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <target dev="tap22393dc1-f7"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/console.log" append="off"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <video>
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </video>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:08:49 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:08:49 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:08:49 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:08:49 compute-2 nova_compute[232428]: </domain>
Nov 29 08:08:49 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.530 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Preparing to wait for external event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.531 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.531 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.532 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.533 232432 DEBUG nova.virt.libvirt.vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-1',id=89,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:43Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=7c7c7782-a5d2-4ccb-8023-a36759a6da8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.534 232432 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.535 232432 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.536 232432 DEBUG os_vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.538 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.539 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.544 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22393dc1-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.546 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22393dc1-f7, col_values=(('external_ids', {'iface-id': '22393dc1-f71d-4c22-9057-03fb9e6e359e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:5e:3d', 'vm-uuid': '7c7c7782-a5d2-4ccb-8023-a36759a6da8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:49 compute-2 NetworkManager[48993]: <info>  [1764403729.5957] manager: (tap22393dc1-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.608 232432 INFO os_vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7')
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.673 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.674 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.674 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No VIF found with MAC fa:16:3e:df:5e:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.674 232432 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Using config drive
Nov 29 08:08:49 compute-2 nova_compute[232428]: 2025-11-29 08:08:49.876 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:50 compute-2 ceph-mon[77138]: pgmap v2042: 305 pgs: 305 active+clean; 217 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Nov 29 08:08:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2658750865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3064714241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3934048991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.151 232432 DEBUG nova.network.neutron [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.180 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.228 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.229 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.305 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.307 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.307 232432 INFO nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Creating image(s)
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.353 232432 DEBUG nova.storage.rbd_utils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] creating snapshot(nova-resize) on rbd image(9b9952a8-61d7-410f-9f29-081ff912c4cb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.420 232432 DEBUG nova.compute.manager [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-changed-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.420 232432 DEBUG nova.compute.manager [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing instance network info cache due to event network-changed-ccf625f8-471d-4406-9844-a3872b34137c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.421 232432 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.421 232432 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.421 232432 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing network info cache for port ccf625f8-471d-4406-9844-a3872b34137c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3126421904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.682 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.698 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.730 232432 DEBUG nova.network.neutron [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Updated VIF entry in instance network info cache for port 22393dc1-f71d-4c22-9057-03fb9e6e359e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.731 232432 DEBUG nova.network.neutron [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Updating instance_info_cache with network_info: [{"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.746 232432 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Creating config drive at /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config
Nov 29 08:08:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.754 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyq3zbn8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.803 232432 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7c7c7782-a5d2-4ccb-8023-a36759a6da8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.823 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.824 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.915 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyq3zbn8" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.956 232432 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:50 compute-2 nova_compute[232428]: 2025-11-29 08:08:50.961 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 08:08:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:51 compute-2 ceph-mon[77138]: pgmap v2043: 305 pgs: 305 active+clean; 213 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 219 KiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 29 08:08:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3126421904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3342535025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.106 232432 DEBUG nova.objects.instance [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.204 232432 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config 7c7c7782-a5d2-4ccb-8023-a36759a6da8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.205 232432 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Deleting local config drive /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e/disk.config because it was imported into RBD.
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.234 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.234 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Ensure instance console log exists: /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.234 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.235 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.235 232432 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.237 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start _get_guest_xml network_info=[{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.247 232432 WARNING nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.252 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.253 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4516MB free_disk=20.910842895507812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.254 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.254 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.255 232432 DEBUG nova.virt.libvirt.host [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.256 232432 DEBUG nova.virt.libvirt.host [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.260 232432 DEBUG nova.virt.libvirt.host [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.261 232432 DEBUG nova.virt.libvirt.host [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.262 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.262 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.263 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.263 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:51 compute-2 kernel: tap22393dc1-f7: entered promiscuous mode
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.263 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.263 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.263 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.264 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.264 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.264 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.264 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.264 232432 DEBUG nova.virt.hardware [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.265 232432 DEBUG nova.objects.instance [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.2659] manager: (tap22393dc1-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 08:08:51 compute-2 ovn_controller[134375]: 2025-11-29T08:08:51Z|00424|binding|INFO|Claiming lport 22393dc1-f71d-4c22-9057-03fb9e6e359e for this chassis.
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.266 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 ovn_controller[134375]: 2025-11-29T08:08:51Z|00425|binding|INFO|22393dc1-f71d-4c22-9057-03fb9e6e359e: Claiming fa:16:3e:df:5e:3d 10.100.0.3
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.270 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.273 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.281 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:5e:3d 10.100.0.3'], port_security=['fa:16:3e:df:5e:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c7c7782-a5d2-4ccb-8023-a36759a6da8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=22393dc1-f71d-4c22-9057-03fb9e6e359e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.283 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 22393dc1-f71d-4c22-9057-03fb9e6e359e in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 bound to our chassis
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.285 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 08:08:51 compute-2 systemd-machined[194747]: New machine qemu-38-instance-00000059.
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.298 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd64661-742f-4ac7-826b-7e216c781fed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.299 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a4a6f7c-91 in ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.301 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a4a6f7c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.301 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[048eb210-aa1c-45e4-b77e-cf44c314b872]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 systemd-udevd[271362]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.303 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9507deec-4f9b-44fc-bad0-5cacfa705c88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.309 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.314 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[af4d79aa-3d12-44bc-8c09-caf4ba0035b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.3171] device (tap22393dc1-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.3186] device (tap22393dc1-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:08:51 compute-2 systemd[1]: Started Virtual Machine qemu-38-instance-00000059.
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.341 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d447a9ea-3d1f-45ab-841c-97d1cd5d57aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_controller[134375]: 2025-11-29T08:08:51Z|00426|binding|INFO|Setting lport 22393dc1-f71d-4c22-9057-03fb9e6e359e ovn-installed in OVS
Nov 29 08:08:51 compute-2 ovn_controller[134375]: 2025-11-29T08:08:51Z|00427|binding|INFO|Setting lport 22393dc1-f71d-4c22-9057-03fb9e6e359e up in Southbound
Nov 29 08:08:51 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:08:51 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.366 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Applying migration context for instance 9b9952a8-61d7-410f-9f29-081ff912c4cb as it has an incoming, in-progress migration 58f403b9-08e9-4f12-8ffa-9a463890ee42. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.367 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating resource usage from migration 58f403b9-08e9-4f12-8ffa-9a463890ee42
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.375 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d165605d-28b9-45e6-b5cc-fc5f640f49a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 systemd-udevd[271365]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.3859] manager: (tap6a4a6f7c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[22330947-4cd5-4d77-a514-a578bca88d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.398 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9b9952a8-61d7-410f-9f29-081ff912c4cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.398 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 7c7c7782-a5d2-4ccb-8023-a36759a6da8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.398 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.399 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.425 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3efd4e0f-d6d3-410f-a50d-4d65a575a06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.428 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[da8cb3c8-b230-42e8-a095-aa27f3d61753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.4588] device (tap6a4a6f7c-90): carrier: link connected
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.475 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[09197840-b61b-4d13-99c5-41618157e2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.491 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.493 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d5a8fc-b3ec-4ab7-bbc5-3c5fd9af6ef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669772, 'reachable_time': 32762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271416, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.512 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b399a097-2aff-40b9-b163-778e01aeb1c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:ede0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669772, 'tstamp': 669772}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271417, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.536 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a81ac37-6a23-404b-a282-ee713372d351]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669772, 'reachable_time': 32762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271419, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.572 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eba102-5d8f-45bc-9c59-24736c8a6008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.634 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[11820f2e-3181-493a-9d38-5cd761d8d2a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.636 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.637 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.637 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a4a6f7c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:51 compute-2 NetworkManager[48993]: <info>  [1764403731.6401] manager: (tap6a4a6f7c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 08:08:51 compute-2 kernel: tap6a4a6f7c-90: entered promiscuous mode
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.642 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a4a6f7c-90, col_values=(('external_ids', {'iface-id': 'b10f5520-b53f-45d0-9de3-4af0dc481ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:51 compute-2 ovn_controller[134375]: 2025-11-29T08:08:51Z|00428|binding|INFO|Releasing lport b10f5520-b53f-45d0-9de3-4af0dc481ad3 from this chassis (sb_readonly=0)
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.643 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.661 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.662 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.662 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d9df83-4196-4711-97ec-ee907beaa512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.663 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:08:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:51.664 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'env', 'PROCESS_TAG=haproxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:08:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3453646451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.807 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.843 232432 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updated VIF entry in instance network info cache for port ccf625f8-471d-4406-9844-a3872b34137c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.843 232432 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.846 232432 DEBUG nova.compute.manager [req-2f16db96-51ce-4db4-8fb6-245f92af9b2f req-5e96635e-6c64-4864-bf82-f396e5d31d73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.846 232432 DEBUG oslo_concurrency.lockutils [req-2f16db96-51ce-4db4-8fb6-245f92af9b2f req-5e96635e-6c64-4864-bf82-f396e5d31d73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.846 232432 DEBUG oslo_concurrency.lockutils [req-2f16db96-51ce-4db4-8fb6-245f92af9b2f req-5e96635e-6c64-4864-bf82-f396e5d31d73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.847 232432 DEBUG oslo_concurrency.lockutils [req-2f16db96-51ce-4db4-8fb6-245f92af9b2f req-5e96635e-6c64-4864-bf82-f396e5d31d73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.847 232432 DEBUG nova.compute.manager [req-2f16db96-51ce-4db4-8fb6-245f92af9b2f req-5e96635e-6c64-4864-bf82-f396e5d31d73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Processing event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.860 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.903 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403731.8489668, 7c7c7782-a5d2-4ccb-8023-a36759a6da8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.903 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] VM Started (Lifecycle Event)
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.907 232432 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.909 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.913 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.919 232432 INFO nova.virt.libvirt.driver [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Instance spawned successfully.
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.920 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.956 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.961 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.965 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.965 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.966 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.966 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.966 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:51 compute-2 nova_compute[232428]: 2025-11-29 08:08:51.967 232432 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3752774581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.008 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.008 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403731.8490424, 7c7c7782-a5d2-4ccb-8023-a36759a6da8e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.008 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] VM Paused (Lifecycle Event)
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.040 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.043 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:52 compute-2 ceph-mon[77138]: osdmap e263: 3 total, 3 up, 3 in
Nov 29 08:08:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3453646451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3752774581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.049 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403731.9120908, 7c7c7782-a5d2-4ccb-8023-a36759a6da8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.050 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] VM Resumed (Lifecycle Event)
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.053 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.056 232432 INFO nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Took 8.44 seconds to spawn the instance on the hypervisor.
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.056 232432 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.069 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.072 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.082 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:52 compute-2 podman[271542]: 2025-11-29 08:08:52.097519318 +0000 UTC m=+0.053400466 container create b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.110 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.126 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.126 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.127 232432 INFO nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Took 9.71 seconds to build instance.
Nov 29 08:08:52 compute-2 systemd[1]: Started libpod-conmon-b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b.scope.
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.152 232432 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:52 compute-2 podman[271542]: 2025-11-29 08:08:52.072071144 +0000 UTC m=+0.027952312 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:08:52 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:08:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aec3c085861b63b1b3107b3b6588ffca873df679397edc02d2a4fb827aeb42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:52 compute-2 podman[271542]: 2025-11-29 08:08:52.197352299 +0000 UTC m=+0.153233467 container init b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:08:52 compute-2 podman[271542]: 2025-11-29 08:08:52.203754249 +0000 UTC m=+0.159635387 container start b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:08:52 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [NOTICE]   (271571) : New worker (271573) forked
Nov 29 08:08:52 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [NOTICE]   (271571) : Loading success.
Nov 29 08:08:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3892665650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.350 232432 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.352 232432 DEBUG nova.virt.libvirt.vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:46Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.352 232432 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.353 232432 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.356 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <uuid>9b9952a8-61d7-410f-9f29-081ff912c4cb</uuid>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <name>instance-00000058</name>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1617536379</nova:name>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:08:51</nova:creationTime>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <nova:port uuid="ccf625f8-471d-4406-9844-a3872b34137c">
Nov 29 08:08:52 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <system>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="serial">9b9952a8-61d7-410f-9f29-081ff912c4cb</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="uuid">9b9952a8-61d7-410f-9f29-081ff912c4cb</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </system>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <os>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </os>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <features>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </features>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9b9952a8-61d7-410f-9f29-081ff912c4cb_disk">
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config">
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </source>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:08:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:89:02:a4"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <target dev="tapccf625f8-47"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/console.log" append="off"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <video>
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </video>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:08:52 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:08:52 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:08:52 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:08:52 compute-2 nova_compute[232428]: </domain>
Nov 29 08:08:52 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.357 232432 DEBUG nova.virt.libvirt.vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:46Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.358 232432 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.358 232432 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.359 232432 DEBUG os_vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.360 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.361 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.364 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.365 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccf625f8-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.366 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccf625f8-47, col_values=(('external_ids', {'iface-id': 'ccf625f8-471d-4406-9844-a3872b34137c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:02:a4', 'vm-uuid': '9b9952a8-61d7-410f-9f29-081ff912c4cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.367 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.3684] manager: (tapccf625f8-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.370 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.380 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.381 232432 INFO os_vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47')
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.468 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.469 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.469 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:89:02:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.481 232432 INFO nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Using config drive
Nov 29 08:08:52 compute-2 kernel: tapccf625f8-47: entered promiscuous mode
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.6108] manager: (tapccf625f8-47): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 08:08:52 compute-2 systemd-udevd[271384]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:52 compute-2 ovn_controller[134375]: 2025-11-29T08:08:52Z|00429|binding|INFO|Claiming lport ccf625f8-471d-4406-9844-a3872b34137c for this chassis.
Nov 29 08:08:52 compute-2 ovn_controller[134375]: 2025-11-29T08:08:52Z|00430|binding|INFO|ccf625f8-471d-4406-9844-a3872b34137c: Claiming fa:16:3e:89:02:a4 10.100.0.9
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.625 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:02:a4 10.100.0.9'], port_security=['fa:16:3e:89:02:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b9952a8-61d7-410f-9f29-081ff912c4cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ccf625f8-471d-4406-9844-a3872b34137c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.628 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ccf625f8-471d-4406-9844-a3872b34137c in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.629 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.6301] device (tapccf625f8-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.6315] device (tapccf625f8-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.648 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[795b74ff-7869-465a-b6be-d6c2a69bcc8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.649 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.652 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.653 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[942ab95a-2b56-4159-a8f9-f09665a4c5f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.654 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6304f4-cb14-4a78-924a-76cea991db0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.669 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[923bbb19-167d-43d5-9f1a-8bda91662e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 systemd-machined[194747]: New machine qemu-39-instance-00000058.
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.689 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.696 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 ovn_controller[134375]: 2025-11-29T08:08:52Z|00431|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c ovn-installed in OVS
Nov 29 08:08:52 compute-2 ovn_controller[134375]: 2025-11-29T08:08:52Z|00432|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c up in Southbound
Nov 29 08:08:52 compute-2 nova_compute[232428]: 2025-11-29 08:08:52.699 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:52 compute-2 systemd[1]: Started Virtual Machine qemu-39-instance-00000058.
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.704 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cc649a71-c3b3-476d-a1bd-2a0790931f32]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.746 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f6fe92-2648-4325-8813-ff861bfe937d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:08:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.7590] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.757 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8156cb-adc0-4323-8a6f-0b09557fc429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.812 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3bed1c0e-4b0f-47f2-9f31-68b786b34e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.817 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dab6fd-41bb-4460-afd1-75c88a3aad0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 NetworkManager[48993]: <info>  [1764403732.8548] device (tap8665acc6-10): carrier: link connected
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.865 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0587e819-a96a-4118-bc6f-944424690453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.909 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[79cd3823-db7b-4746-a6c3-ea094b75c812]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669912, 'reachable_time': 20930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271633, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.942 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8490bbd9-dd5e-4fa0-a370-8c19955d3843]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669912, 'tstamp': 669912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271634, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:52.967 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[65422208-bb42-4ce1-98af-a3b2d79a580c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669912, 'reachable_time': 20930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271635, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.025 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ce992978-f94b-4f65-9659-f279157407a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.049 232432 DEBUG nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.050 232432 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.050 232432 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.051 232432 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.051 232432 DEBUG nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.052 232432 WARNING nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_finish.
Nov 29 08:08:53 compute-2 ceph-mon[77138]: pgmap v2045: 305 pgs: 305 active+clean; 213 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Nov 29 08:08:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3892665650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.152 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a417881b-85f7-43ce-9d0e-69fe1e22e093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.154 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.155 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.156 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:53 compute-2 NetworkManager[48993]: <info>  [1764403733.1588] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-2 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.165 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-2 ovn_controller[134375]: 2025-11-29T08:08:53Z|00433|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.197 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.200 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.203 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab2f258-02a2-448a-a532-7b94b3087fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.204 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:08:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:53.207 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.417 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403733.4166126, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.417 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Resumed (Lifecycle Event)
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.422 232432 DEBUG nova.compute.manager [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.428 232432 INFO nova.virt.libvirt.driver [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance running successfully.
Nov 29 08:08:53 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.432 232432 DEBUG nova.virt.libvirt.guest [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.432 232432 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.452 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.469 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.516 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.516 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403733.4202924, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.517 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Started (Lifecycle Event)
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.557 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.565 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:53 compute-2 podman[271710]: 2025-11-29 08:08:53.673860849 +0000 UTC m=+0.075618857 container create c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:08:53 compute-2 podman[271710]: 2025-11-29 08:08:53.631809149 +0000 UTC m=+0.033567247 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:08:53 compute-2 systemd[1]: Started libpod-conmon-c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f.scope.
Nov 29 08:08:53 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:08:53 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b07e05943bb7a34baf6e9841da21c702be1a29292f90154aede013e5f25f1dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:53 compute-2 podman[271710]: 2025-11-29 08:08:53.778588854 +0000 UTC m=+0.180346922 container init c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:08:53 compute-2 podman[271710]: 2025-11-29 08:08:53.789122512 +0000 UTC m=+0.190880520 container start c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:08:53 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [NOTICE]   (271729) : New worker (271731) forked
Nov 29 08:08:53 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [NOTICE]   (271729) : Loading success.
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.974 232432 DEBUG nova.compute.manager [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.975 232432 DEBUG oslo_concurrency.lockutils [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.975 232432 DEBUG oslo_concurrency.lockutils [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.976 232432 DEBUG oslo_concurrency.lockutils [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.976 232432 DEBUG nova.compute.manager [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] No waiting events found dispatching network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:53 compute-2 nova_compute[232428]: 2025-11-29 08:08:53.977 232432 WARNING nova.compute.manager [req-841acbed-0b00-46ec-8c0e-01a05556f4e6 req-88580563-0ff8-4684-ac11-3e2f9dbc6aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received unexpected event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e for instance with vm_state active and task_state None.
Nov 29 08:08:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:55.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:55 compute-2 ceph-mon[77138]: pgmap v2046: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.129 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.287 232432 DEBUG nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.288 232432 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.288 232432 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.288 232432 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.289 232432 DEBUG nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.289 232432 WARNING nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state resized and task_state None.
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.364 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.365 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.366 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.366 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.367 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.369 232432 INFO nova.compute.manager [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Terminating instance
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.371 232432 DEBUG nova.compute.manager [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:55 compute-2 kernel: tap22393dc1-f7 (unregistering): left promiscuous mode
Nov 29 08:08:55 compute-2 NetworkManager[48993]: <info>  [1764403735.4250] device (tap22393dc1-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.440 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 ovn_controller[134375]: 2025-11-29T08:08:55Z|00434|binding|INFO|Releasing lport 22393dc1-f71d-4c22-9057-03fb9e6e359e from this chassis (sb_readonly=0)
Nov 29 08:08:55 compute-2 ovn_controller[134375]: 2025-11-29T08:08:55Z|00435|binding|INFO|Setting lport 22393dc1-f71d-4c22-9057-03fb9e6e359e down in Southbound
Nov 29 08:08:55 compute-2 ovn_controller[134375]: 2025-11-29T08:08:55Z|00436|binding|INFO|Removing iface tap22393dc1-f7 ovn-installed in OVS
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.453 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:5e:3d 10.100.0.3'], port_security=['fa:16:3e:df:5e:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c7c7782-a5d2-4ccb-8023-a36759a6da8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=22393dc1-f71d-4c22-9057-03fb9e6e359e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.458 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 22393dc1-f71d-4c22-9057-03fb9e6e359e in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 unbound from our chassis
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.463 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.465 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96bdf514-b981-4bd5-bedb-e5629e70e8bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.466 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace which is not needed anymore
Nov 29 08:08:55 compute-2 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 29 08:08:55 compute-2 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000059.scope: Consumed 4.134s CPU time.
Nov 29 08:08:55 compute-2 systemd-machined[194747]: Machine qemu-38-instance-00000059 terminated.
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.612 232432 INFO nova.virt.libvirt.driver [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Instance destroyed successfully.
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.613 232432 DEBUG nova.objects.instance [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'resources' on Instance uuid 7c7c7782-a5d2-4ccb-8023-a36759a6da8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.627 232432 DEBUG nova.virt.libvirt.vif [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-1',id=89,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:52Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=7c7c7782-a5d2-4ccb-8023-a36759a6da8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.628 232432 DEBUG nova.network.os_vif_util [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "address": "fa:16:3e:df:5e:3d", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22393dc1-f7", "ovs_interfaceid": "22393dc1-f71d-4c22-9057-03fb9e6e359e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.628 232432 DEBUG nova.network.os_vif_util [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.629 232432 DEBUG os_vif [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.631 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22393dc1-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.640 232432 INFO os_vif [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:5e:3d,bridge_name='br-int',has_traffic_filtering=True,id=22393dc1-f71d-4c22-9057-03fb9e6e359e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22393dc1-f7')
Nov 29 08:08:55 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [NOTICE]   (271571) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:55 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [NOTICE]   (271571) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:55 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [WARNING]  (271571) : Exiting Master process...
Nov 29 08:08:55 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [ALERT]    (271571) : Current worker (271573) exited with code 143 (Terminated)
Nov 29 08:08:55 compute-2 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[271567]: [WARNING]  (271571) : All workers exited. Exiting... (0)
Nov 29 08:08:55 compute-2 systemd[1]: libpod-b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b.scope: Deactivated successfully.
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.685 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 podman[271765]: 2025-11-29 08:08:55.685613533 +0000 UTC m=+0.087383054 container died b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:08:55 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-59aec3c085861b63b1b3107b3b6588ffca873df679397edc02d2a4fb827aeb42-merged.mount: Deactivated successfully.
Nov 29 08:08:55 compute-2 podman[271765]: 2025-11-29 08:08:55.742882909 +0000 UTC m=+0.144652470 container cleanup b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:08:55 compute-2 systemd[1]: libpod-conmon-b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b.scope: Deactivated successfully.
Nov 29 08:08:55 compute-2 podman[271818]: 2025-11-29 08:08:55.81768805 +0000 UTC m=+0.048843583 container remove b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.828 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3519e789-ba38-41e7-8bf3-16695cdd1318]: (4, ('Sat Nov 29 08:08:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b)\nb04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b\nSat Nov 29 08:08:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (b04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b)\nb04fe5ab3db5b4fa10056ce1d1f2dcfeffd58956a7dd5eafb5050a608d53cf9b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.831 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a973e9a5-a39b-466c-803f-8b3301a1f72b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.832 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:55 compute-2 kernel: tap6a4a6f7c-90: left promiscuous mode
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.835 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 nova_compute[232428]: 2025-11-29 08:08:55.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.854 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bd58f10a-9b9d-4fe0-b2b8-062f64246087]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.866 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7b32bf5d-ea34-43f2-b0d9-12d3ebe771b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.867 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[35cb670e-8dba-4f2c-990b-8be06986babb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.885 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e197cadd-0127-4a8b-bb7a-e64bd3a6f654]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669763, 'reachable_time': 44266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271833, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-2 systemd[1]: run-netns-ovnmeta\x2d6a4a6f7c\x2d9da4\x2d4d0a\x2db32b\x2d578ab4776e05.mount: Deactivated successfully.
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.889 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:08:55.890 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4dce4b7c-2f44-4b4c-b212-3a0e5a5af7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:56 compute-2 podman[271831]: 2025-11-29 08:08:56.01529593 +0000 UTC m=+0.129857709 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.213 232432 INFO nova.virt.libvirt.driver [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Deleting instance files /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e_del
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.215 232432 INFO nova.virt.libvirt.driver [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Deletion of /var/lib/nova/instances/7c7c7782-a5d2-4ccb-8023-a36759a6da8e_del complete
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.322 232432 DEBUG nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-unplugged-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.323 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.323 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.324 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.324 232432 DEBUG nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] No waiting events found dispatching network-vif-unplugged-22393dc1-f71d-4c22-9057-03fb9e6e359e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.325 232432 DEBUG nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-unplugged-22393dc1-f71d-4c22-9057-03fb9e6e359e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.325 232432 DEBUG nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.326 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.326 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.326 232432 DEBUG oslo_concurrency.lockutils [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.327 232432 DEBUG nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] No waiting events found dispatching network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.327 232432 WARNING nova.compute.manager [req-03ab5d4b-3488-40d6-bfa3-792d60167c52 req-91c9346b-8b21-420d-9d08-bc6d4a40f6f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received unexpected event network-vif-plugged-22393dc1-f71d-4c22-9057-03fb9e6e359e for instance with vm_state active and task_state deleting.
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.372 232432 INFO nova.compute.manager [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.373 232432 DEBUG oslo.service.loopingcall [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.373 232432 DEBUG nova.compute.manager [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:56 compute-2 nova_compute[232428]: 2025-11-29 08:08:56.374 232432 DEBUG nova.network.neutron [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:57.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:57 compute-2 ceph-mon[77138]: pgmap v2047: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.0 MiB/s wr, 256 op/s
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.499 232432 DEBUG nova.network.neutron [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.521 232432 INFO nova.compute.manager [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Took 1.15 seconds to deallocate network for instance.
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.578 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.578 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.658 232432 DEBUG oslo_concurrency.processutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:57 compute-2 nova_compute[232428]: 2025-11-29 08:08:57.707 232432 DEBUG nova.compute.manager [req-13a05cc7-de5e-42ea-8988-f286b104f770 req-61781832-2304-41e5-9a54-e0c529cb44f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Received event network-vif-deleted-22393dc1-f71d-4c22-9057-03fb9e6e359e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1249892310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.112 232432 DEBUG oslo_concurrency.processutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.118 232432 DEBUG nova.compute.provider_tree [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.157 232432 DEBUG nova.scheduler.client.report [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1249892310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.197 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.246 232432 INFO nova.scheduler.client.report [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Deleted allocations for instance 7c7c7782-a5d2-4ccb-8023-a36759a6da8e
Nov 29 08:08:58 compute-2 nova_compute[232428]: 2025-11-29 08:08:58.310 232432 DEBUG oslo_concurrency.lockutils [None req-d538c4b7-4b85-4b56-921e-6b97ebca3738 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "7c7c7782-a5d2-4ccb-8023-a36759a6da8e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:08:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:58.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:08:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:08:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:08:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:59.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:08:59 compute-2 ceph-mon[77138]: pgmap v2048: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.0 MiB/s wr, 256 op/s
Nov 29 08:08:59 compute-2 ceph-mon[77138]: osdmap e264: 3 total, 3 up, 3 in
Nov 29 08:08:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3954315581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2440657159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:59 compute-2 sudo[271883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:59 compute-2 sudo[271883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-2 sudo[271883]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-2 sudo[271908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:59 compute-2 sudo[271908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-2 sudo[271908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-2 sudo[271933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:59 compute-2 sudo[271933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-2 sudo[271933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-2 sudo[271959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:08:59 compute-2 sudo[271959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:00 compute-2 sudo[271959]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:00 compute-2 nova_compute[232428]: 2025-11-29 08:09:00.673 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-2 nova_compute[232428]: 2025-11-29 08:09:00.686 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:01 compute-2 ceph-mon[77138]: pgmap v2050: 305 pgs: 305 active+clean; 191 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 18 KiB/s wr, 309 op/s
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:09:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.002 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.005 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.007 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.007 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.008 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.010 232432 INFO nova.compute.manager [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Terminating instance
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.012 232432 DEBUG nova.compute.manager [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:02 compute-2 kernel: tapccf625f8-47 (unregistering): left promiscuous mode
Nov 29 08:09:02 compute-2 NetworkManager[48993]: <info>  [1764403742.0772] device (tapccf625f8-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.100 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 ovn_controller[134375]: 2025-11-29T08:09:02Z|00437|binding|INFO|Releasing lport ccf625f8-471d-4406-9844-a3872b34137c from this chassis (sb_readonly=0)
Nov 29 08:09:02 compute-2 ovn_controller[134375]: 2025-11-29T08:09:02Z|00438|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c down in Southbound
Nov 29 08:09:02 compute-2 ovn_controller[134375]: 2025-11-29T08:09:02Z|00439|binding|INFO|Removing iface tapccf625f8-47 ovn-installed in OVS
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.107 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.111 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:02:a4 10.100.0.9'], port_security=['fa:16:3e:89:02:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b9952a8-61d7-410f-9f29-081ff912c4cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ccf625f8-471d-4406-9844-a3872b34137c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.113 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ccf625f8-471d-4406-9844-a3872b34137c in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.116 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.118 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b5900844-7b1b-41aa-bbee-fc88fedcbb5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.119 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.139 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 29 08:09:02 compute-2 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Consumed 9.568s CPU time.
Nov 29 08:09:02 compute-2 systemd-machined[194747]: Machine qemu-39-instance-00000058 terminated.
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.264 232432 INFO nova.virt.libvirt.driver [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance destroyed successfully.
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.265 232432 DEBUG nova.objects.instance [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.283 232432 DEBUG nova.virt.libvirt.vif [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:59Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.284 232432 DEBUG nova.network.os_vif_util [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.285 232432 DEBUG nova.network.os_vif_util [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.286 232432 DEBUG os_vif [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.289 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.289 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf625f8-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.302 232432 INFO os_vif [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47')
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [NOTICE]   (271729) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [NOTICE]   (271729) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [WARNING]  (271729) : Exiting Master process...
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [WARNING]  (271729) : Exiting Master process...
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [ALERT]    (271729) : Current worker (271731) exited with code 143 (Terminated)
Nov 29 08:09:02 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[271725]: [WARNING]  (271729) : All workers exited. Exiting... (0)
Nov 29 08:09:02 compute-2 systemd[1]: libpod-c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f.scope: Deactivated successfully.
Nov 29 08:09:02 compute-2 podman[272052]: 2025-11-29 08:09:02.348381184 +0000 UTC m=+0.063090337 container died c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:09:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-1b07e05943bb7a34baf6e9841da21c702be1a29292f90154aede013e5f25f1dc-merged.mount: Deactivated successfully.
Nov 29 08:09:02 compute-2 podman[272052]: 2025-11-29 08:09:02.398678942 +0000 UTC m=+0.113388095 container cleanup c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.410 232432 DEBUG nova.compute.manager [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.411 232432 DEBUG oslo_concurrency.lockutils [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.412 232432 DEBUG oslo_concurrency.lockutils [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.413 232432 DEBUG oslo_concurrency.lockutils [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.413 232432 DEBUG nova.compute.manager [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.414 232432 DEBUG nova.compute.manager [req-34aee2fc-8fb2-4e0c-835d-f85d62d7d394 req-f6ca1018-e948-4fa3-9b6c-ab597f52e548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:02 compute-2 systemd[1]: libpod-conmon-c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f.scope: Deactivated successfully.
Nov 29 08:09:02 compute-2 podman[272101]: 2025-11-29 08:09:02.506861445 +0000 UTC m=+0.073201023 container remove c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.517 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[579fe36a-85ee-4842-a8df-d70639d03d3e]: (4, ('Sat Nov 29 08:09:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f)\nc31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f\nSat Nov 29 08:09:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (c31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f)\nc31c14308c20b1b50a469469dc35b5203ee4c6af5c26e041465c585fa55dde8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.519 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[202df6aa-6854-4649-833c-e92be9d47bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.520 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.540 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc97125b-c958-4026-8348-d9fa7108e3e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.555 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[64fbe59a-9978-450a-8bb2-e74773c6c8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.557 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1044a12b-bdce-46e4-b0b3-74a33bae056c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.578 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[95bbbe5d-5c3c-475d-95ff-911f4a5488ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669900, 'reachable_time': 23697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272116, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.582 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:02.583 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f89abda3-45c5-4d21-aec4-b9eda09f69d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:02.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.788 232432 INFO nova.virt.libvirt.driver [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Deleting instance files /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb_del
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.790 232432 INFO nova.virt.libvirt.driver [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Deletion of /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb_del complete
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.882 232432 INFO nova.compute.manager [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.883 232432 DEBUG oslo.service.loopingcall [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.884 232432 DEBUG nova.compute.manager [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:02 compute-2 nova_compute[232428]: 2025-11-29 08:09:02.884 232432 DEBUG nova.network.neutron [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:03 compute-2 ceph-mon[77138]: pgmap v2051: 305 pgs: 305 active+clean; 121 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 19 KiB/s wr, 348 op/s
Nov 29 08:09:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2878870364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2716884571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:03.314 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:03.315 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:03.315 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:03 compute-2 nova_compute[232428]: 2025-11-29 08:09:03.971 232432 DEBUG nova.network.neutron [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:03 compute-2 nova_compute[232428]: 2025-11-29 08:09:03.998 232432 INFO nova.compute.manager [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Took 1.11 seconds to deallocate network for instance.
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.016 232432 DEBUG nova.compute.manager [req-079c3dc3-082e-42c4-8566-a3767e895abc req-b67393cd-a2da-4abd-855f-de963d97a801 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-deleted-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.058 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.059 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.133 232432 DEBUG oslo_concurrency.processutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1084199128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/825734957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.586 232432 DEBUG nova.compute.manager [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.588 232432 DEBUG oslo_concurrency.lockutils [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.590 232432 DEBUG oslo_concurrency.lockutils [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.590 232432 DEBUG oslo_concurrency.lockutils [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.591 232432 DEBUG nova.compute.manager [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.592 232432 WARNING nova.compute.manager [req-3093d944-561b-4f67-9028-dc3ff1436874 req-131fdb7b-a87f-475d-abaf-f2de314cd165 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state deleted and task_state None.
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.607 232432 DEBUG oslo_concurrency.processutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.616 232432 DEBUG nova.compute.provider_tree [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.659 232432 DEBUG nova.scheduler.client.report [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.688 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.728 232432 INFO nova.scheduler.client.report [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocations for instance 9b9952a8-61d7-410f-9f29-081ff912c4cb
Nov 29 08:09:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:04 compute-2 nova_compute[232428]: 2025-11-29 08:09:04.826 232432 DEBUG oslo_concurrency.lockutils [None req-38c43221-d637-47f2-8bbc-969b431bf255 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:05.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:05 compute-2 ceph-mon[77138]: pgmap v2052: 305 pgs: 305 active+clean; 102 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.3 KiB/s wr, 325 op/s
Nov 29 08:09:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/825734957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.322 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.324 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.359 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.476 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.477 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.484 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.485 232432 INFO nova.compute.claims [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.579 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:05 compute-2 nova_compute[232428]: 2025-11-29 08:09:05.729 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1636289547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.080 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.088 232432 DEBUG nova.compute.provider_tree [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.112 232432 DEBUG nova.scheduler.client.report [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.140 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.141 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.184 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.184 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.208 232432 INFO nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.224 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1636289547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:09:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.337 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.339 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.340 232432 INFO nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Creating image(s)
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.379 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.420 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.462 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.468 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:06 compute-2 sudo[272201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:06 compute-2 sudo[272201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:06 compute-2 sudo[272201]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.569 232432 DEBUG nova.policy [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a7b61623f854cf59636f192ab8af005', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.575 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.576 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.577 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.578 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.626 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:06 compute-2 sudo[272244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:09:06 compute-2 sudo[272244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.634 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 10d72f98-ba8e-47e8-8d33-667ee034d650_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:06 compute-2 sudo[272244]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:06.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:06 compute-2 nova_compute[232428]: 2025-11-29 08:09:06.978 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 10d72f98-ba8e-47e8-8d33-667ee034d650_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.091 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.257 232432 DEBUG nova.objects.instance [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 10d72f98-ba8e-47e8-8d33-667ee034d650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.278 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.279 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Ensure instance console log exists: /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.280 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.281 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.282 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.293 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:07 compute-2 ceph-mon[77138]: pgmap v2053: 305 pgs: 305 active+clean; 148 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.6 MiB/s wr, 218 op/s
Nov 29 08:09:07 compute-2 ceph-mon[77138]: osdmap e265: 3 total, 3 up, 3 in
Nov 29 08:09:07 compute-2 nova_compute[232428]: 2025-11-29 08:09:07.822 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Successfully created port: e282d1f2-9089-40ae-8d7a-bb04a2218ae0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:07 compute-2 sudo[272381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:07 compute-2 sudo[272381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:07 compute-2 sudo[272381]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:08 compute-2 sudo[272406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:08 compute-2 sudo[272406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:08 compute-2 sudo[272406]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1804108564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4049388428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:08.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:09.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:09 compute-2 nova_compute[232428]: 2025-11-29 08:09:09.427 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Successfully updated port: e282d1f2-9089-40ae-8d7a-bb04a2218ae0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:09 compute-2 nova_compute[232428]: 2025-11-29 08:09:09.439 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:09 compute-2 nova_compute[232428]: 2025-11-29 08:09:09.439 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:09 compute-2 nova_compute[232428]: 2025-11-29 08:09:09.439 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:09 compute-2 ceph-mon[77138]: pgmap v2055: 305 pgs: 305 active+clean; 148 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 4.6 MiB/s wr, 186 op/s
Nov 29 08:09:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/450969542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:09 compute-2 nova_compute[232428]: 2025-11-29 08:09:09.782 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.610 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403735.608993, 7c7c7782-a5d2-4ccb-8023-a36759a6da8e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.611 232432 INFO nova.compute.manager [-] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] VM Stopped (Lifecycle Event)
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.635 232432 DEBUG nova.compute.manager [None req-40c934ab-b5d8-4750-8d16-2ea5b12118b9 - - - - - -] [instance: 7c7c7782-a5d2-4ccb-8023-a36759a6da8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/288648728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1963408332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:10.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.964 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.964 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.981 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:10 compute-2 nova_compute[232428]: 2025-11-29 08:09:10.997 232432 DEBUG nova.network.neutron [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Updating instance_info_cache with network_info: [{"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.032 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.032 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Instance network_info: |[{"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.037 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Start _get_guest_xml network_info=[{"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.061 232432 WARNING nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.069 232432 DEBUG nova.virt.libvirt.host [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.070 232432 DEBUG nova.virt.libvirt.host [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.074 232432 DEBUG nova.virt.libvirt.host [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.075 232432 DEBUG nova.virt.libvirt.host [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.077 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.078 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.079 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.079 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.080 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.080 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.081 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.081 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.082 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.082 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.083 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.084 232432 DEBUG nova.virt.hardware [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.090 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.138 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.139 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.150 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.151 232432 INFO nova.compute.claims [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.328 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2538271181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.576 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.629 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.643 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:11 compute-2 podman[272475]: 2025-11-29 08:09:11.70334894 +0000 UTC m=+0.101154134 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 08:09:11 compute-2 ceph-mon[77138]: pgmap v2056: 305 pgs: 305 active+clean; 186 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 6.9 MiB/s wr, 199 op/s
Nov 29 08:09:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/801509104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2538271181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.793 232432 DEBUG nova.compute.manager [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-changed-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.795 232432 DEBUG nova.compute.manager [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Refreshing instance network info cache due to event network-changed-e282d1f2-9089-40ae-8d7a-bb04a2218ae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.796 232432 DEBUG oslo_concurrency.lockutils [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.797 232432 DEBUG oslo_concurrency.lockutils [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.798 232432 DEBUG nova.network.neutron [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Refreshing network info cache for port e282d1f2-9089-40ae-8d7a-bb04a2218ae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4111641364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.831 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.845 232432 DEBUG nova.compute.provider_tree [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.860 232432 DEBUG nova.scheduler.client.report [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.900 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.901 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.963 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.964 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:11 compute-2 nova_compute[232428]: 2025-11-29 08:09:11.994 232432 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.015 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.243 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.246 232432 DEBUG nova.virt.libvirt.vif [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1598576635',display_name='tempest-ServerDiskConfigTestJSON-server-1598576635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1598576635',id=94,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-cyhnmrbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:06Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=10d72f98-ba8e-47e8-8d33-667ee034d650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.247 232432 DEBUG nova.network.os_vif_util [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.248 232432 DEBUG nova.network.os_vif_util [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.251 232432 DEBUG nova.objects.instance [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 10d72f98-ba8e-47e8-8d33-667ee034d650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.279 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <uuid>10d72f98-ba8e-47e8-8d33-667ee034d650</uuid>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <name>instance-0000005e</name>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1598576635</nova:name>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:09:11</nova:creationTime>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <nova:port uuid="e282d1f2-9089-40ae-8d7a-bb04a2218ae0">
Nov 29 08:09:12 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <system>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="serial">10d72f98-ba8e-47e8-8d33-667ee034d650</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="uuid">10d72f98-ba8e-47e8-8d33-667ee034d650</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </system>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <os>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </os>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <features>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </features>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/10d72f98-ba8e-47e8-8d33-667ee034d650_disk">
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config">
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:24:0a:6b"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <target dev="tape282d1f2-90"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/console.log" append="off"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <video>
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </video>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:09:12 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:09:12 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:09:12 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:09:12 compute-2 nova_compute[232428]: </domain>
Nov 29 08:09:12 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.282 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Preparing to wait for external event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.283 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.283 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.283 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.284 232432 DEBUG nova.virt.libvirt.vif [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1598576635',display_name='tempest-ServerDiskConfigTestJSON-server-1598576635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1598576635',id=94,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-cyhnmrbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:06Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=10d72f98-ba8e-47e8-8d33-667ee034d650,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.284 232432 DEBUG nova.network.os_vif_util [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.285 232432 DEBUG nova.network.os_vif_util [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.286 232432 DEBUG os_vif [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.287 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.288 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.292 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.293 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape282d1f2-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.294 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape282d1f2-90, col_values=(('external_ids', {'iface-id': 'e282d1f2-9089-40ae-8d7a-bb04a2218ae0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:0a:6b', 'vm-uuid': '10d72f98-ba8e-47e8-8d33-667ee034d650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.297 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-2 NetworkManager[48993]: <info>  [1764403752.2984] manager: (tape282d1f2-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.300 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.312 232432 INFO os_vif [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90')
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.316 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.318 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.319 232432 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Creating image(s)
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.362 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.406 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.450 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.457 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.553 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.555 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.555 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.556 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.585 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.590 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.656 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.658 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.659 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:24:0a:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.660 232432 INFO nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Using config drive
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.697 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3311797751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4111641364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/74092610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3692096887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:12.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:12 compute-2 nova_compute[232428]: 2025-11-29 08:09:12.880 232432 DEBUG nova.policy [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95361d3a276f4d7f81e9f9a4bcafd2ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:13.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.494 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.904s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.607 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] resizing rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:09:13 compute-2 sshd-session[272511]: Invalid user sol from 45.148.10.240 port 35034
Nov 29 08:09:13 compute-2 ceph-mon[77138]: pgmap v2057: 305 pgs: 305 active+clean; 227 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 8.5 MiB/s wr, 173 op/s
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.861 232432 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.887 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.888 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Ensure instance console log exists: /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.889 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.889 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:13 compute-2 nova_compute[232428]: 2025-11-29 08:09:13.889 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:14 compute-2 sshd-session[272511]: Connection closed by invalid user sol 45.148.10.240 port 35034 [preauth]
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.150 232432 INFO nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Creating config drive at /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.156 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps5ccgtft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.303 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps5ccgtft" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.343 232432 DEBUG nova.storage.rbd_utils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.349 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config 10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.398 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Successfully created port: a6969fcb-65bc-4364-af84-9e911a2205cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.647 232432 DEBUG oslo_concurrency.processutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config 10d72f98-ba8e-47e8-8d33-667ee034d650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.648 232432 INFO nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Deleting local config drive /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650/disk.config because it was imported into RBD.
Nov 29 08:09:14 compute-2 kernel: tape282d1f2-90: entered promiscuous mode
Nov 29 08:09:14 compute-2 NetworkManager[48993]: <info>  [1764403754.7425] manager: (tape282d1f2-90): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 08:09:14 compute-2 ovn_controller[134375]: 2025-11-29T08:09:14Z|00440|binding|INFO|Claiming lport e282d1f2-9089-40ae-8d7a-bb04a2218ae0 for this chassis.
Nov 29 08:09:14 compute-2 ovn_controller[134375]: 2025-11-29T08:09:14Z|00441|binding|INFO|e282d1f2-9089-40ae-8d7a-bb04a2218ae0: Claiming fa:16:3e:24:0a:6b 10.100.0.8
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.753 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:0a:6b 10.100.0.8'], port_security=['fa:16:3e:24:0a:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '10d72f98-ba8e-47e8-8d33-667ee034d650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e282d1f2-9089-40ae-8d7a-bb04a2218ae0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.755 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e282d1f2-9089-40ae-8d7a-bb04a2218ae0 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.758 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:09:14 compute-2 ovn_controller[134375]: 2025-11-29T08:09:14Z|00442|binding|INFO|Setting lport e282d1f2-9089-40ae-8d7a-bb04a2218ae0 ovn-installed in OVS
Nov 29 08:09:14 compute-2 ovn_controller[134375]: 2025-11-29T08:09:14Z|00443|binding|INFO|Setting lport e282d1f2-9089-40ae-8d7a-bb04a2218ae0 up in Southbound
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.778 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.781 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a665f49-dad5-4760-be2f-e9256f885aa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.784 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:14 compute-2 nova_compute[232428]: 2025-11-29 08:09:14.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.787 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.787 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa7c106-97a3-4ff7-80dc-d78a4a6cb61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.788 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[56570b90-3c50-4178-8b81-8c9609c8267b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:14 compute-2 systemd-machined[194747]: New machine qemu-40-instance-0000005e.
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.805 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[9a390e25-6c1d-4468-a41e-8642605e70d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 systemd-udevd[272779]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:14 compute-2 systemd[1]: Started Virtual Machine qemu-40-instance-0000005e.
Nov 29 08:09:14 compute-2 NetworkManager[48993]: <info>  [1764403754.8369] device (tape282d1f2-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:14 compute-2 NetworkManager[48993]: <info>  [1764403754.8384] device (tape282d1f2-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.843 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a17798c-582d-4f29-b8c8-68b49eb6f10b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.893 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e9591c11-ffc5-46a7-b146-f330a9ea63a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.903 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[416f972b-5226-4bf2-ba4f-fac7cbdf4506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 NetworkManager[48993]: <info>  [1764403754.9050] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.961 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4f6eb3-393f-465f-912f-925b081229c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:14.967 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8a56be51-c801-4efc-aaac-83535d1e249c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 NetworkManager[48993]: <info>  [1764403755.0026] device (tap8665acc6-10): carrier: link connected
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.014 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0152bc19-27c9-4616-9819-35cf35bc057a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.044 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d54122-3b79-45d4-9f3f-d857b55cc760]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672127, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272810, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.080 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de32affd-9ee2-4afc-aaba-a2315bc5008f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672127, 'tstamp': 672127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272811, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.112 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0de55cf3-3e72-4557-85fb-f0176d5e8802]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672127, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272812, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.174 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10c6d92a-5590-4557-91d7-f88a7c50bb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.287 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b031b70-e35b-4621-8991-ce0e97e882e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.292 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.292 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.293 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:15 compute-2 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 08:09:15 compute-2 NetworkManager[48993]: <info>  [1764403755.2981] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.303 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:15 compute-2 ovn_controller[134375]: 2025-11-29T08:09:15Z|00444|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.304 232432 DEBUG nova.network.neutron [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Updated VIF entry in instance network info cache for port e282d1f2-9089-40ae-8d7a-bb04a2218ae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.306 232432 DEBUG nova.network.neutron [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Updating instance_info_cache with network_info: [{"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.315 232432 DEBUG nova.compute.manager [req-43706cea-c947-4184-ac7b-c4443a47eaeb req-5a6e549b-8041-4feb-91a4-70b54aee7d35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.317 232432 DEBUG oslo_concurrency.lockutils [req-43706cea-c947-4184-ac7b-c4443a47eaeb req-5a6e549b-8041-4feb-91a4-70b54aee7d35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.317 232432 DEBUG oslo_concurrency.lockutils [req-43706cea-c947-4184-ac7b-c4443a47eaeb req-5a6e549b-8041-4feb-91a4-70b54aee7d35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.318 232432 DEBUG oslo_concurrency.lockutils [req-43706cea-c947-4184-ac7b-c4443a47eaeb req-5a6e549b-8041-4feb-91a4-70b54aee7d35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.318 232432 DEBUG nova.compute.manager [req-43706cea-c947-4184-ac7b-c4443a47eaeb req-5a6e549b-8041-4feb-91a4-70b54aee7d35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Processing event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.328 232432 DEBUG oslo_concurrency.lockutils [req-777ee79e-532d-460d-9961-0651b696f5ff req-989756e0-2824-450e-822b-95617671a4c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-10d72f98-ba8e-47e8-8d33-667ee034d650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.336 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc7f5c9-e5b4-417d-874e-7ce411f67820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.340 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:15.341 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.479 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Successfully updated port: a6969fcb-65bc-4364-af84-9e911a2205cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.499 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.501 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquired lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.501 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.735 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:15 compute-2 nova_compute[232428]: 2025-11-29 08:09:15.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:15 compute-2 ceph-mon[77138]: pgmap v2058: 305 pgs: 305 active+clean; 259 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 762 KiB/s rd, 9.5 MiB/s wr, 205 op/s
Nov 29 08:09:15 compute-2 podman[272845]: 2025-11-29 08:09:15.969279614 +0000 UTC m=+0.091233344 container create 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:09:16 compute-2 podman[272845]: 2025-11-29 08:09:15.928012118 +0000 UTC m=+0.049965898 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:16 compute-2 systemd[1]: Started libpod-conmon-3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41.scope.
Nov 29 08:09:16 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:09:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ddc3a85d85e9deb26b32fd29228dc420b0a731ca2b611e0093bd52aee7046b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:16 compute-2 podman[272845]: 2025-11-29 08:09:16.101220717 +0000 UTC m=+0.223174497 container init 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:16 compute-2 podman[272845]: 2025-11-29 08:09:16.120082424 +0000 UTC m=+0.242036164 container start 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:09:16 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [NOTICE]   (272923) : New worker (272928) forked
Nov 29 08:09:16 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [NOTICE]   (272923) : Loading success.
Nov 29 08:09:16 compute-2 podman[272882]: 2025-11-29 08:09:16.16836806 +0000 UTC m=+0.135359501 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.214 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.216 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403756.2130911, 10d72f98-ba8e-47e8-8d33-667ee034d650 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.216 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] VM Started (Lifecycle Event)
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.223 232432 DEBUG nova.compute.manager [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-changed-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.224 232432 DEBUG nova.compute.manager [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Refreshing instance network info cache due to event network-changed-a6969fcb-65bc-4364-af84-9e911a2205cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.225 232432 DEBUG oslo_concurrency.lockutils [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.231 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.240 232432 INFO nova.virt.libvirt.driver [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Instance spawned successfully.
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.241 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.251 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.264 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.273 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.275 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.276 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.276 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.277 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.277 232432 DEBUG nova.virt.libvirt.driver [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.291 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.292 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403756.2148905, 10d72f98-ba8e-47e8-8d33-667ee034d650 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.292 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] VM Paused (Lifecycle Event)
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.339 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.347 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403756.230279, 10d72f98-ba8e-47e8-8d33-667ee034d650 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.347 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] VM Resumed (Lifecycle Event)
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.413 232432 INFO nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Took 10.08 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.414 232432 DEBUG nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.416 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.427 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.460 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.521 232432 INFO nova.compute.manager [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Took 11.06 seconds to build instance.
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.544 232432 DEBUG oslo_concurrency.lockutils [None req-56ae5f22-2f66-4517-a25d-a3edbbed57f2 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.951 232432 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Updating instance_info_cache with network_info: [{"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.977 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Releasing lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.978 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Instance network_info: |[{"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.979 232432 DEBUG oslo_concurrency.lockutils [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.980 232432 DEBUG nova.network.neutron [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Refreshing network info cache for port a6969fcb-65bc-4364-af84-9e911a2205cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.986 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Start _get_guest_xml network_info=[{"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:16 compute-2 nova_compute[232428]: 2025-11-29 08:09:16.996 232432 WARNING nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.005 232432 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.006 232432 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.017 232432 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.019 232432 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.022 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.023 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.024 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.024 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.025 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.026 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.026 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.027 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.028 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.028 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.029 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.030 232432 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.036 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:17.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.261 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403742.2591717, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.261 232432 INFO nova.compute.manager [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Stopped (Lifecycle Event)
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.286 232432 DEBUG nova.compute.manager [None req-cdb94ed4-4804-47d4-8e5d-02c6fef54045 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.297 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4170009730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.543 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.590 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.599 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.748 232432 DEBUG nova.compute.manager [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.749 232432 DEBUG oslo_concurrency.lockutils [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.750 232432 DEBUG oslo_concurrency.lockutils [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.751 232432 DEBUG oslo_concurrency.lockutils [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.752 232432 DEBUG nova.compute.manager [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] No waiting events found dispatching network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:17 compute-2 nova_compute[232428]: 2025-11-29 08:09:17.753 232432 WARNING nova.compute.manager [req-a74b6d12-8676-4874-a756-e9def2ecdace req-4e0faa48-d041-438d-a63a-cad5ddbfbeda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received unexpected event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 for instance with vm_state active and task_state None.
Nov 29 08:09:17 compute-2 ceph-mon[77138]: pgmap v2059: 305 pgs: 305 active+clean; 354 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 10 MiB/s wr, 314 op/s
Nov 29 08:09:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4170009730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4249565174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.046 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.049 232432 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-1',id=95,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=9a2b65ee-1e3f-4cb5-a593-53aab714cca6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.050 232432 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.052 232432 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.055 232432 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.078 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <uuid>9a2b65ee-1e3f-4cb5-a593-53aab714cca6</uuid>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <name>instance-0000005f</name>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:name>tempest-ListServersNegativeTestJSON-server-313770805-1</nova:name>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:09:16</nova:creationTime>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:user uuid="95361d3a276f4d7f81e9f9a4bcafd2ea">tempest-ListServersNegativeTestJSON-1935238201-project-member</nova:user>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:project uuid="e3e18973b82a4071bdc187ede8c1afb8">tempest-ListServersNegativeTestJSON-1935238201</nova:project>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <nova:port uuid="a6969fcb-65bc-4364-af84-9e911a2205cb">
Nov 29 08:09:18 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <system>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="serial">9a2b65ee-1e3f-4cb5-a593-53aab714cca6</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="uuid">9a2b65ee-1e3f-4cb5-a593-53aab714cca6</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </system>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <os>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </os>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <features>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </features>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk">
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config">
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:18 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:27:d0:da"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <target dev="tapa6969fcb-65"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/console.log" append="off"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <video>
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </video>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:09:18 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:09:18 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:09:18 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:09:18 compute-2 nova_compute[232428]: </domain>
Nov 29 08:09:18 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.091 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Preparing to wait for external event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.092 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.092 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.093 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.094 232432 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-1',id=95,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=9a2b65ee-1e3f-4cb5-a593-53aab714cca6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.095 232432 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.096 232432 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.098 232432 DEBUG os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.101 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.102 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.109 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.110 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6969fcb-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.111 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6969fcb-65, col_values=(('external_ids', {'iface-id': 'a6969fcb-65bc-4364-af84-9e911a2205cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:d0:da', 'vm-uuid': '9a2b65ee-1e3f-4cb5-a593-53aab714cca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.114 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:18 compute-2 NetworkManager[48993]: <info>  [1764403758.1162] manager: (tapa6969fcb-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.125 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.126 232432 INFO os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65')
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.202 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.204 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.205 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No VIF found with MAC fa:16:3e:27:d0:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.206 232432 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Using config drive
Nov 29 08:09:18 compute-2 nova_compute[232428]: 2025-11-29 08:09:18.251 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:18.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4249565174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1077723992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/621329738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:20 compute-2 ceph-mon[77138]: pgmap v2060: 305 pgs: 305 active+clean; 354 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.9 MiB/s wr, 273 op/s
Nov 29 08:09:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4044653041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2347223908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.057 232432 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Creating config drive at /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.062 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzy24vv_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.211 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzy24vv_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.245 232432 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.250 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.372 232432 DEBUG nova.network.neutron [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Updated VIF entry in instance network info cache for port a6969fcb-65bc-4364-af84-9e911a2205cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.373 232432 DEBUG nova.network.neutron [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Updating instance_info_cache with network_info: [{"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.407 232432 DEBUG oslo_concurrency.lockutils [req-917cc61e-3924-4b80-9139-f948c304db59 req-7380101e-aca3-4f49-a30d-93d8d7702d3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9a2b65ee-1e3f-4cb5-a593-53aab714cca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.428 232432 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config 9a2b65ee-1e3f-4cb5-a593-53aab714cca6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.429 232432 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Deleting local config drive /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6/disk.config because it was imported into RBD.
Nov 29 08:09:20 compute-2 kernel: tapa6969fcb-65: entered promiscuous mode
Nov 29 08:09:20 compute-2 ovn_controller[134375]: 2025-11-29T08:09:20Z|00445|binding|INFO|Claiming lport a6969fcb-65bc-4364-af84-9e911a2205cb for this chassis.
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.4878] manager: (tapa6969fcb-65): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 08:09:20 compute-2 ovn_controller[134375]: 2025-11-29T08:09:20Z|00446|binding|INFO|a6969fcb-65bc-4364-af84-9e911a2205cb: Claiming fa:16:3e:27:d0:da 10.100.0.14
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.486 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.493 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.502 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d0:da 10.100.0.14'], port_security=['fa:16:3e:27:d0:da 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9a2b65ee-1e3f-4cb5-a593-53aab714cca6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=a6969fcb-65bc-4364-af84-9e911a2205cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.503 143801 INFO neutron.agent.ovn.metadata.agent [-] Port a6969fcb-65bc-4364-af84-9e911a2205cb in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 bound to our chassis
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.505 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 08:09:20 compute-2 systemd-udevd[273078]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.523 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae91c7d-7453-484f-84a8-a4bba62644f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.524 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c0a06e3-81 in ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.527 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c0a06e3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.527 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7a85e1-de3f-4e7f-8f24-726d02ddd8de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.528 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb48b01-c28c-4714-97da-0b55cd970eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 systemd-machined[194747]: New machine qemu-41-instance-0000005f.
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.5443] device (tapa6969fcb-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.5454] device (tapa6969fcb-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.544 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ffad7fcc-c01c-4673-bcf7-b10aced7d9d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 systemd[1]: Started Virtual Machine qemu-41-instance-0000005f.
Nov 29 08:09:20 compute-2 ovn_controller[134375]: 2025-11-29T08:09:20Z|00447|binding|INFO|Setting lport a6969fcb-65bc-4364-af84-9e911a2205cb ovn-installed in OVS
Nov 29 08:09:20 compute-2 ovn_controller[134375]: 2025-11-29T08:09:20Z|00448|binding|INFO|Setting lport a6969fcb-65bc-4364-af84-9e911a2205cb up in Southbound
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.573 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe2fe38-2e0e-4a68-830e-f11233bcf360]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.605 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[668349f2-b539-4d58-a71e-70b044240e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.612 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[05a48ba5-2877-4e8d-9e42-cb2a96b4b707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.6140] manager: (tap4c0a06e3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 08:09:20 compute-2 systemd-udevd[273082]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.614 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.665 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d4203197-050d-4341-8862-b29f79fd7f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.669 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2b381b-b5c3-43da-b416-16c04270a0b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.6944] device (tap4c0a06e3-80): carrier: link connected
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.701 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b931f2-f502-4475-be4e-e1c81f66bdbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.720 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f570e034-97ad-4d22-8a45-2655e1d36d7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672696, 'reachable_time': 35555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273111, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.744 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[65b22c2c-5f7e-4625-a11b-4fcfb24839a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:42d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672696, 'tstamp': 672696}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273112, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.761 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.766 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3cbedc-2257-499a-bd2c-cc479899f615]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672696, 'reachable_time': 35555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273113, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:20.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.804 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[995fea3b-e429-4d5e-aade-d50f0458d424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.880 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5e97c6-1533-4c73-b08d-4cd7b6fa7d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.881 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.882 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.882 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c0a06e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 NetworkManager[48993]: <info>  [1764403760.8846] manager: (tap4c0a06e3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 08:09:20 compute-2 kernel: tap4c0a06e3-80: entered promiscuous mode
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.887 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.889 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c0a06e3-80, col_values=(('external_ids', {'iface-id': '25db3838-7764-409c-8606-f0c90f681664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.890 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_controller[134375]: 2025-11-29T08:09:20Z|00449|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 08:09:20 compute-2 nova_compute[232428]: 2025-11-29 08:09:20.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.907 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.908 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5f217eed-2b8a-4a53-aafb-79858bdb1da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.909 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:20.910 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'env', 'PROCESS_TAG=haproxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c0a06e3-8d77-4f81-85b4-47e57dafff04.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3188377218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:21 compute-2 ceph-mon[77138]: pgmap v2061: 305 pgs: 305 active+clean; 344 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 8.6 MiB/s wr, 335 op/s
Nov 29 08:09:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3798714948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:21.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:21 compute-2 podman[273144]: 2025-11-29 08:09:21.317262065 +0000 UTC m=+0.056259935 container create ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.323 232432 DEBUG nova.compute.manager [req-5f50aef7-854e-4f60-b428-1d5830c42634 req-47b78da7-e59b-40b9-b3ff-7c7c559be8b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.325 232432 DEBUG oslo_concurrency.lockutils [req-5f50aef7-854e-4f60-b428-1d5830c42634 req-47b78da7-e59b-40b9-b3ff-7c7c559be8b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.326 232432 DEBUG oslo_concurrency.lockutils [req-5f50aef7-854e-4f60-b428-1d5830c42634 req-47b78da7-e59b-40b9-b3ff-7c7c559be8b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.326 232432 DEBUG oslo_concurrency.lockutils [req-5f50aef7-854e-4f60-b428-1d5830c42634 req-47b78da7-e59b-40b9-b3ff-7c7c559be8b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.327 232432 DEBUG nova.compute.manager [req-5f50aef7-854e-4f60-b428-1d5830c42634 req-47b78da7-e59b-40b9-b3ff-7c7c559be8b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Processing event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:21 compute-2 systemd[1]: Started libpod-conmon-ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3.scope.
Nov 29 08:09:21 compute-2 podman[273144]: 2025-11-29 08:09:21.28886166 +0000 UTC m=+0.027859550 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:21 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:09:21 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18dc6e097f33b0f8631cbea436fed857d59ec0d87f97e0124ec6f2572ff5d65/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:21 compute-2 podman[273144]: 2025-11-29 08:09:21.426879012 +0000 UTC m=+0.165876912 container init ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:09:21 compute-2 podman[273144]: 2025-11-29 08:09:21.434820149 +0000 UTC m=+0.173818019 container start ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:09:21 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [NOTICE]   (273165) : New worker (273167) forked
Nov 29 08:09:21 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [NOTICE]   (273165) : Loading success.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.708 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403761.7083442, 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.709 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] VM Started (Lifecycle Event)
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.712 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.717 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.721 232432 INFO nova.virt.libvirt.driver [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Instance spawned successfully.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.721 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.739 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.741 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.759 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.760 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.761 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.762 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.763 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.764 232432 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.769 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.770 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403761.7085402, 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.770 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] VM Paused (Lifecycle Event)
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.797 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.800 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403761.7141356, 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.801 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] VM Resumed (Lifecycle Event)
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.829 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.833 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.837 232432 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Took 9.52 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.837 232432 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.862 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.905 232432 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Took 10.86 seconds to build instance.
Nov 29 08:09:21 compute-2 nova_compute[232428]: 2025-11-29 08:09:21.924 232432 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4168310815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3848419887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:23 compute-2 ceph-mon[77138]: pgmap v2062: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.7 MiB/s wr, 458 op/s
Nov 29 08:09:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:23.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.141 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.141 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.142 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.142 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.142 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.144 232432 INFO nova.compute.manager [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Terminating instance
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.145 232432 DEBUG nova.compute.manager [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:23 compute-2 kernel: tape282d1f2-90 (unregistering): left promiscuous mode
Nov 29 08:09:23 compute-2 NetworkManager[48993]: <info>  [1764403763.1888] device (tape282d1f2-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.196 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 ovn_controller[134375]: 2025-11-29T08:09:23Z|00450|binding|INFO|Releasing lport e282d1f2-9089-40ae-8d7a-bb04a2218ae0 from this chassis (sb_readonly=0)
Nov 29 08:09:23 compute-2 ovn_controller[134375]: 2025-11-29T08:09:23Z|00451|binding|INFO|Setting lport e282d1f2-9089-40ae-8d7a-bb04a2218ae0 down in Southbound
Nov 29 08:09:23 compute-2 ovn_controller[134375]: 2025-11-29T08:09:23Z|00452|binding|INFO|Removing iface tape282d1f2-90 ovn-installed in OVS
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.211 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:0a:6b 10.100.0.8'], port_security=['fa:16:3e:24:0a:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '10d72f98-ba8e-47e8-8d33-667ee034d650', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e282d1f2-9089-40ae-8d7a-bb04a2218ae0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.212 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e282d1f2-9089-40ae-8d7a-bb04a2218ae0 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.213 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0073c137-2539-4cad-ad58-1f071e78feda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.223 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.225 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore
Nov 29 08:09:23 compute-2 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 29 08:09:23 compute-2 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005e.scope: Consumed 8.505s CPU time.
Nov 29 08:09:23 compute-2 systemd-machined[194747]: Machine qemu-40-instance-0000005e terminated.
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.394 232432 INFO nova.virt.libvirt.driver [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Instance destroyed successfully.
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.397 232432 DEBUG nova.objects.instance [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 10d72f98-ba8e-47e8-8d33-667ee034d650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.433 232432 DEBUG nova.compute.manager [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.434 232432 DEBUG oslo_concurrency.lockutils [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.435 232432 DEBUG oslo_concurrency.lockutils [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.436 232432 DEBUG oslo_concurrency.lockutils [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.436 232432 DEBUG nova.compute.manager [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] No waiting events found dispatching network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.437 232432 WARNING nova.compute.manager [req-3ecda78a-68d0-4225-a49c-5c94f4fdd520 req-1db55455-29da-4099-982c-dd2520c53eab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received unexpected event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb for instance with vm_state active and task_state None.
Nov 29 08:09:23 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [NOTICE]   (272923) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:23 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [NOTICE]   (272923) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:23 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [WARNING]  (272923) : Exiting Master process...
Nov 29 08:09:23 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [ALERT]    (272923) : Current worker (272928) exited with code 143 (Terminated)
Nov 29 08:09:23 compute-2 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[272899]: [WARNING]  (272923) : All workers exited. Exiting... (0)
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.442 232432 DEBUG nova.virt.libvirt.vif [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1598576635',display_name='tempest-ServerDiskConfigTestJSON-server-1598576635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1598576635',id=94,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-cyhnmrbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:21Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=10d72f98-ba8e-47e8-8d33-667ee034d650,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.443 232432 DEBUG nova.network.os_vif_util [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "address": "fa:16:3e:24:0a:6b", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape282d1f2-90", "ovs_interfaceid": "e282d1f2-9089-40ae-8d7a-bb04a2218ae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:23 compute-2 systemd[1]: libpod-3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41.scope: Deactivated successfully.
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.444 232432 DEBUG nova.network.os_vif_util [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.445 232432 DEBUG os_vif [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:23 compute-2 conmon[272899]: conmon 3b271904940d415bc571 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41.scope/container/memory.events
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.449 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.450 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape282d1f2-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:23 compute-2 podman[273240]: 2025-11-29 08:09:23.45193233 +0000 UTC m=+0.089446089 container died 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.453 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.460 232432 INFO os_vif [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:0a:6b,bridge_name='br-int',has_traffic_filtering=True,id=e282d1f2-9089-40ae-8d7a-bb04a2218ae0,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape282d1f2-90')
Nov 29 08:09:23 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-94ddc3a85d85e9deb26b32fd29228dc420b0a731ca2b611e0093bd52aee7046b-merged.mount: Deactivated successfully.
Nov 29 08:09:23 compute-2 podman[273240]: 2025-11-29 08:09:23.503909851 +0000 UTC m=+0.141423580 container cleanup 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:09:23 compute-2 systemd[1]: libpod-conmon-3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41.scope: Deactivated successfully.
Nov 29 08:09:23 compute-2 podman[273296]: 2025-11-29 08:09:23.59950783 +0000 UTC m=+0.055917904 container remove 3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.608 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ba9735-c2f1-413e-bef5-c9a5c76ac34f]: (4, ('Sat Nov 29 08:09:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41)\n3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41\nSat Nov 29 08:09:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41)\n3b271904940d415bc5713072479914132427ebcef7a3333897ecc1b171db5a41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.611 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d9012f48-5ed8-4f7b-9f59-f1d09100800e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.613 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:23 compute-2 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.616 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.650 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca0aa69-6aa4-49dd-aace-ba190f5bb35e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.661 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[db27bdf0-5555-489d-b8c8-e2b9e1fc646a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.665 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5f0c04-02c0-47fb-8abb-3a6e93310be0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.693 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4883f2f3-0b76-4d6a-804b-7a8471263537]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672114, 'reachable_time': 32818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273309, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.698 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:23.698 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[caf0f268-d55e-43b9-a9fe-d137114a9fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:23 compute-2 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.934 232432 INFO nova.virt.libvirt.driver [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Deleting instance files /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650_del
Nov 29 08:09:23 compute-2 nova_compute[232428]: 2025-11-29 08:09:23.936 232432 INFO nova.virt.libvirt.driver [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Deletion of /var/lib/nova/instances/10d72f98-ba8e-47e8-8d33-667ee034d650_del complete
Nov 29 08:09:24 compute-2 nova_compute[232428]: 2025-11-29 08:09:24.004 232432 INFO nova.compute.manager [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:24 compute-2 nova_compute[232428]: 2025-11-29 08:09:24.005 232432 DEBUG oslo.service.loopingcall [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:24 compute-2 nova_compute[232428]: 2025-11-29 08:09:24.005 232432 DEBUG nova.compute.manager [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:24 compute-2 nova_compute[232428]: 2025-11-29 08:09:24.006 232432 DEBUG nova.network.neutron [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:24.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:25.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:25 compute-2 ceph-mon[77138]: pgmap v2063: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.4 MiB/s wr, 450 op/s
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.264 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.265 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.266 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.266 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.266 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.269 232432 INFO nova.compute.manager [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Terminating instance
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.270 232432 DEBUG nova.compute.manager [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.309 232432 DEBUG nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-unplugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.309 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.312 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.312 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.312 232432 DEBUG nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] No waiting events found dispatching network-vif-unplugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.313 232432 DEBUG nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-unplugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.313 232432 DEBUG nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.314 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.314 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.314 232432 DEBUG oslo_concurrency.lockutils [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.314 232432 DEBUG nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] No waiting events found dispatching network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.315 232432 WARNING nova.compute.manager [req-fb595620-94e1-4439-8789-cf432b50a164 req-0ee2364a-9487-4e9b-94a5-448151032ea0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received unexpected event network-vif-plugged-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 for instance with vm_state active and task_state deleting.
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.315 232432 DEBUG nova.network.neutron [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:25 compute-2 kernel: tapa6969fcb-65 (unregistering): left promiscuous mode
Nov 29 08:09:25 compute-2 NetworkManager[48993]: <info>  [1764403765.3286] device (tapa6969fcb-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:25 compute-2 ovn_controller[134375]: 2025-11-29T08:09:25Z|00453|binding|INFO|Releasing lport a6969fcb-65bc-4364-af84-9e911a2205cb from this chassis (sb_readonly=0)
Nov 29 08:09:25 compute-2 ovn_controller[134375]: 2025-11-29T08:09:25Z|00454|binding|INFO|Setting lport a6969fcb-65bc-4364-af84-9e911a2205cb down in Southbound
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.339 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 ovn_controller[134375]: 2025-11-29T08:09:25Z|00455|binding|INFO|Removing iface tapa6969fcb-65 ovn-installed in OVS
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.343 232432 INFO nova.compute.manager [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Took 1.34 seconds to deallocate network for instance.
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.349 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d0:da 10.100.0.14'], port_security=['fa:16:3e:27:d0:da 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9a2b65ee-1e3f-4cb5-a593-53aab714cca6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=a6969fcb-65bc-4364-af84-9e911a2205cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.351 143801 INFO neutron.agent.ovn.metadata.agent [-] Port a6969fcb-65bc-4364-af84-9e911a2205cb in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 unbound from our chassis
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.354 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.356 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c88727bb-281a-4512-be6e-e16defe1fb66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.357 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace which is not needed anymore
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.402 232432 DEBUG nova.compute.manager [req-ab250fd0-dafa-4e12-a510-ed48b67b3594 req-21d59fdf-ae2f-4704-8d8e-76e6bae7abcd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Received event network-vif-deleted-e282d1f2-9089-40ae-8d7a-bb04a2218ae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:25 compute-2 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 29 08:09:25 compute-2 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005f.scope: Consumed 4.819s CPU time.
Nov 29 08:09:25 compute-2 systemd-machined[194747]: Machine qemu-41-instance-0000005f terminated.
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.449 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.450 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 NetworkManager[48993]: <info>  [1764403765.4957] manager: (tapa6969fcb-65): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.497 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.503 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.511 232432 INFO nova.virt.libvirt.driver [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Instance destroyed successfully.
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.512 232432 DEBUG nova.objects.instance [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'resources' on Instance uuid 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.527 232432 DEBUG nova.virt.libvirt.vif [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-1',id=95,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:21Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=9a2b65ee-1e3f-4cb5-a593-53aab714cca6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.527 232432 DEBUG nova.network.os_vif_util [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "a6969fcb-65bc-4364-af84-9e911a2205cb", "address": "fa:16:3e:27:d0:da", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6969fcb-65", "ovs_interfaceid": "a6969fcb-65bc-4364-af84-9e911a2205cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.528 232432 DEBUG nova.network.os_vif_util [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.529 232432 DEBUG os_vif [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.531 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6969fcb-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.534 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.540 232432 INFO os_vif [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d0:da,bridge_name='br-int',has_traffic_filtering=True,id=a6969fcb-65bc-4364-af84-9e911a2205cb,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6969fcb-65')
Nov 29 08:09:25 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [NOTICE]   (273165) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:25 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [NOTICE]   (273165) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:25 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [WARNING]  (273165) : Exiting Master process...
Nov 29 08:09:25 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [ALERT]    (273165) : Current worker (273167) exited with code 143 (Terminated)
Nov 29 08:09:25 compute-2 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[273160]: [WARNING]  (273165) : All workers exited. Exiting... (0)
Nov 29 08:09:25 compute-2 systemd[1]: libpod-ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3.scope: Deactivated successfully.
Nov 29 08:09:25 compute-2 podman[273339]: 2025-11-29 08:09:25.578717571 +0000 UTC m=+0.060637362 container died ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.615 232432 DEBUG nova.compute.manager [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-unplugged-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.616 232432 DEBUG oslo_concurrency.lockutils [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.616 232432 DEBUG oslo_concurrency.lockutils [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:25 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.616 232432 DEBUG oslo_concurrency.lockutils [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.617 232432 DEBUG nova.compute.manager [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] No waiting events found dispatching network-vif-unplugged-a6969fcb-65bc-4364-af84-9e911a2205cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.617 232432 DEBUG nova.compute.manager [req-90af166c-474e-4fc7-bc44-7b3ed3ac2f6b req-29dfbcca-50fe-4e27-8c08-8afe76ce4eb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-unplugged-a6969fcb-65bc-4364-af84-9e911a2205cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.619 232432 DEBUG oslo_concurrency.processutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-f18dc6e097f33b0f8631cbea436fed857d59ec0d87f97e0124ec6f2572ff5d65-merged.mount: Deactivated successfully.
Nov 29 08:09:25 compute-2 podman[273339]: 2025-11-29 08:09:25.648508576 +0000 UTC m=+0.130428367 container cleanup ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:25 compute-2 systemd[1]: libpod-conmon-ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3.scope: Deactivated successfully.
Nov 29 08:09:25 compute-2 ovn_controller[134375]: 2025-11-29T08:09:25Z|00456|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 08:09:25 compute-2 podman[273397]: 2025-11-29 08:09:25.77408913 +0000 UTC m=+0.089220182 container remove ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.783 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8f80ae-b2b5-47c6-ad9c-233fb490946b]: (4, ('Sat Nov 29 08:09:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3)\nca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3\nSat Nov 29 08:09:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (ca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3)\nca995cb53901bee786ac89c9c939fb763fd9da7a91373ad64c7cc744fc7f1ec3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.787 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[11b449fc-d6e2-4c3d-a2c9-408442e96f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.788 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 kernel: tap4c0a06e3-80: left promiscuous mode
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.917 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.921 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b29c29-2d69-457e-94ca-41c4e221923b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.937 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.939 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[46736fb3-2e26-4086-b747-efdbee899056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.940 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d93f7a6c-5bad-4df9-93b7-a5f16d9122ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 nova_compute[232428]: 2025-11-29 08:09:25.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.966 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[07ff74a7-6326-4220-bc82-0bc1c3873ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672686, 'reachable_time': 41007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273430, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.970 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:25.970 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[dae3df53-dc70-4db0-b743-e65baa1f68f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:25 compute-2 systemd[1]: run-netns-ovnmeta\x2d4c0a06e3\x2d8d77\x2d4f81\x2d85b4\x2d47e57dafff04.mount: Deactivated successfully.
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.049 232432 INFO nova.virt.libvirt.driver [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Deleting instance files /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6_del
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.050 232432 INFO nova.virt.libvirt.driver [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Deletion of /var/lib/nova/instances/9a2b65ee-1e3f-4cb5-a593-53aab714cca6_del complete
Nov 29 08:09:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2416894743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.111 232432 DEBUG oslo_concurrency.processutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.123 232432 DEBUG nova.compute.provider_tree [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.128 232432 INFO nova.compute.manager [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.129 232432 DEBUG oslo.service.loopingcall [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.130 232432 DEBUG nova.compute.manager [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.130 232432 DEBUG nova.network.neutron [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.138 232432 DEBUG nova.scheduler.client.report [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.161 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.187 232432 INFO nova.scheduler.client.report [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocations for instance 10d72f98-ba8e-47e8-8d33-667ee034d650
Nov 29 08:09:26 compute-2 nova_compute[232428]: 2025-11-29 08:09:26.251 232432 DEBUG oslo_concurrency.lockutils [None req-3f35af4a-0d57-4640-9e8e-87ff859933e5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "10d72f98-ba8e-47e8-8d33-667ee034d650" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2416894743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:26 compute-2 podman[273433]: 2025-11-29 08:09:26.834569235 +0000 UTC m=+0.221675881 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 08:09:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.456 232432 DEBUG nova.network.neutron [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.479 232432 INFO nova.compute.manager [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Took 1.35 seconds to deallocate network for instance.
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.527 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.528 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.581 232432 DEBUG oslo_concurrency.processutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:27 compute-2 ceph-mon[77138]: pgmap v2064: 305 pgs: 305 active+clean; 233 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.6 MiB/s wr, 680 op/s
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.779 232432 DEBUG nova.compute.manager [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.779 232432 DEBUG oslo_concurrency.lockutils [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.780 232432 DEBUG oslo_concurrency.lockutils [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.780 232432 DEBUG oslo_concurrency.lockutils [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.781 232432 DEBUG nova.compute.manager [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] No waiting events found dispatching network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:27 compute-2 nova_compute[232428]: 2025-11-29 08:09:27.781 232432 WARNING nova.compute.manager [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received unexpected event network-vif-plugged-a6969fcb-65bc-4364-af84-9e911a2205cb for instance with vm_state deleted and task_state None.
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.093 232432 DEBUG nova.compute.manager [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Received event network-vif-deleted-a6969fcb-65bc-4364-af84-9e911a2205cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:28 compute-2 sudo[273480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:28 compute-2 sudo[273480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:28 compute-2 sudo[273480]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/410455556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.208 232432 DEBUG oslo_concurrency.processutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.216 232432 DEBUG nova.compute.provider_tree [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.238 232432 DEBUG nova.scheduler.client.report [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:28 compute-2 sudo[273505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:28 compute-2 sudo[273505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:28 compute-2 sudo[273505]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.265 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.303 232432 INFO nova.scheduler.client.report [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Deleted allocations for instance 9a2b65ee-1e3f-4cb5-a593-53aab714cca6
Nov 29 08:09:28 compute-2 nova_compute[232428]: 2025-11-29 08:09:28.407 232432 DEBUG oslo_concurrency.lockutils [None req-96d0d71f-b079-4f72-acc0-0e25033ef8b6 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9a2b65ee-1e3f-4cb5-a593-53aab714cca6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/410455556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1868038605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:28.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:29.321 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:29.322 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:09:29 compute-2 nova_compute[232428]: 2025-11-29 08:09:29.321 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-2 ceph-mon[77138]: pgmap v2065: 305 pgs: 305 active+clean; 233 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 134 KiB/s wr, 496 op/s
Nov 29 08:09:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:30.324 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:30 compute-2 nova_compute[232428]: 2025-11-29 08:09:30.535 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:30 compute-2 nova_compute[232428]: 2025-11-29 08:09:30.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:31 compute-2 ceph-mon[77138]: pgmap v2066: 305 pgs: 305 active+clean; 205 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 135 KiB/s wr, 538 op/s
Nov 29 08:09:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:33.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:33 compute-2 ceph-mon[77138]: pgmap v2067: 305 pgs: 305 active+clean; 134 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 43 KiB/s wr, 503 op/s
Nov 29 08:09:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:35 compute-2 nova_compute[232428]: 2025-11-29 08:09:35.538 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-2 ceph-mon[77138]: pgmap v2068: 305 pgs: 305 active+clean; 134 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 41 KiB/s wr, 368 op/s
Nov 29 08:09:35 compute-2 nova_compute[232428]: 2025-11-29 08:09:35.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1968305457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3874596961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:38 compute-2 ceph-mon[77138]: pgmap v2069: 305 pgs: 305 active+clean; 74 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 43 KiB/s wr, 392 op/s
Nov 29 08:09:38 compute-2 nova_compute[232428]: 2025-11-29 08:09:38.390 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403763.388448, 10d72f98-ba8e-47e8-8d33-667ee034d650 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:38 compute-2 nova_compute[232428]: 2025-11-29 08:09:38.390 232432 INFO nova.compute.manager [-] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] VM Stopped (Lifecycle Event)
Nov 29 08:09:38 compute-2 nova_compute[232428]: 2025-11-29 08:09:38.423 232432 DEBUG nova.compute.manager [None req-113d8a79-1f97-45bc-aec0-88aa5bff7b0f - - - - - -] [instance: 10d72f98-ba8e-47e8-8d33-667ee034d650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:38.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:39 compute-2 ceph-mon[77138]: pgmap v2070: 305 pgs: 305 active+clean; 74 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 5.0 KiB/s wr, 131 op/s
Nov 29 08:09:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:40 compute-2 nova_compute[232428]: 2025-11-29 08:09:40.508 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403765.5076761, 9a2b65ee-1e3f-4cb5-a593-53aab714cca6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:40 compute-2 nova_compute[232428]: 2025-11-29 08:09:40.509 232432 INFO nova.compute.manager [-] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] VM Stopped (Lifecycle Event)
Nov 29 08:09:40 compute-2 nova_compute[232428]: 2025-11-29 08:09:40.533 232432 DEBUG nova.compute.manager [None req-02263fef-ce44-446d-a22a-5c53a098e172 - - - - - -] [instance: 9a2b65ee-1e3f-4cb5-a593-53aab714cca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:40 compute-2 nova_compute[232428]: 2025-11-29 08:09:40.541 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:09:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:40.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:09:40 compute-2 nova_compute[232428]: 2025-11-29 08:09:40.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:41 compute-2 nova_compute[232428]: 2025-11-29 08:09:41.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:41 compute-2 nova_compute[232428]: 2025-11-29 08:09:41.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:41 compute-2 ceph-mon[77138]: pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 5.0 KiB/s wr, 132 op/s
Nov 29 08:09:42 compute-2 nova_compute[232428]: 2025-11-29 08:09:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:42 compute-2 podman[273539]: 2025-11-29 08:09:42.688382538 +0000 UTC m=+0.079733216 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:42.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:43 compute-2 nova_compute[232428]: 2025-11-29 08:09:43.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:43 compute-2 ceph-mon[77138]: pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 4.0 KiB/s wr, 91 op/s
Nov 29 08:09:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/991044372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:44.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:45.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:45 compute-2 ceph-mon[77138]: pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 29 08:09:45 compute-2 nova_compute[232428]: 2025-11-29 08:09:45.543 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-2 nova_compute[232428]: 2025-11-29 08:09:45.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:46 compute-2 nova_compute[232428]: 2025-11-29 08:09:46.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:46 compute-2 nova_compute[232428]: 2025-11-29 08:09:46.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:09:46 compute-2 nova_compute[232428]: 2025-11-29 08:09:46.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:09:46 compute-2 nova_compute[232428]: 2025-11-29 08:09:46.224 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:09:46 compute-2 nova_compute[232428]: 2025-11-29 08:09:46.224 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1167156216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.680001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786680101, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2366, "num_deletes": 254, "total_data_size": 5241620, "memory_usage": 5320520, "flush_reason": "Manual Compaction"}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 29 08:09:46 compute-2 podman[273561]: 2025-11-29 08:09:46.696456385 +0000 UTC m=+0.086553688 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786697869, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2072046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40267, "largest_seqno": 42628, "table_properties": {"data_size": 2064997, "index_size": 3675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19650, "raw_average_key_size": 21, "raw_value_size": 2048914, "raw_average_value_size": 2234, "num_data_blocks": 163, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403598, "oldest_key_time": 1764403598, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17979 microseconds, and 12212 cpu microseconds.
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.697965) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2072046 bytes OK
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.698014) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.699826) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.699882) EVENT_LOG_v1 {"time_micros": 1764403786699870, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.699911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5231128, prev total WAL file size 5231128, number of live WAL files 2.
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.701392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323630' seq:72057594037927935, type:22 .. '6D6772737461740031353132' seq:0, type:0; will stop at (end)
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2023KB)], [75(10MB)]
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786701530, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13093128, "oldest_snapshot_seqno": -1}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7282 keys, 10600835 bytes, temperature: kUnknown
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786789202, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 10600835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10553188, "index_size": 28335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187374, "raw_average_key_size": 25, "raw_value_size": 10424086, "raw_average_value_size": 1431, "num_data_blocks": 1125, "num_entries": 7282, "num_filter_entries": 7282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.789512) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10600835 bytes
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.791217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.2 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.4) write-amplify(5.1) OK, records in: 7717, records dropped: 435 output_compression: NoCompression
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.791239) EVENT_LOG_v1 {"time_micros": 1764403786791228, "job": 46, "event": "compaction_finished", "compaction_time_micros": 87750, "compaction_time_cpu_micros": 39408, "output_level": 6, "num_output_files": 1, "total_output_size": 10600835, "num_input_records": 7717, "num_output_records": 7282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786791848, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786794893, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.701273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.794989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.794996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.794997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.794999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:46.795000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:46.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:47.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:47 compute-2 ceph-mon[77138]: pgmap v2074: 305 pgs: 305 active+clean; 69 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 979 KiB/s wr, 78 op/s
Nov 29 08:09:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/923109898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2493137400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:48 compute-2 sudo[273582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:48 compute-2 sudo[273582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:48 compute-2 sudo[273582]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.435 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.436 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:48 compute-2 sudo[273607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:48 compute-2 sudo[273607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:48 compute-2 sudo[273607]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.462 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.533 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.534 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.540 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.540 232432 INFO nova.compute.claims [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:09:48 compute-2 nova_compute[232428]: 2025-11-29 08:09:48.661 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:48.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/235047843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.123 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.132 232432 DEBUG nova.compute.provider_tree [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:49.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.154 232432 DEBUG nova.scheduler.client.report [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.180 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.181 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.267 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.268 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.342 232432 INFO nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.372 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.478 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.481 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.482 232432 INFO nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Creating image(s)
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.528 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.572 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.604 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.608 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.653 232432 DEBUG nova.policy [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '661b6600a32b40d8a48db16cb71c7e75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd72b5448be0e463f80dca118feb42d3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.664 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.665 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.690 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.714 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.715 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.716 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.716 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.746 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.750 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.811 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.811 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.818 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.818 232432 INFO nova.compute.claims [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:09:49 compute-2 nova_compute[232428]: 2025-11-29 08:09:49.998 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1230306831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.438 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.446 232432 DEBUG nova.compute.provider_tree [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.465 232432 DEBUG nova.scheduler.client.report [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.495 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.496 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.545 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:50 compute-2 ceph-mon[77138]: pgmap v2075: 305 pgs: 305 active+clean; 88 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:09:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/235047843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1315773696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.562 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.562 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.600 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.850s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.643 232432 INFO nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.693 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.701 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] resizing rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.746 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Successfully created port: 25f618be-492d-4ac9-9c9c-6583e0402572 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.814 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.816 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.816 232432 INFO nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Creating image(s)
Nov 29 08:09:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:50.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.844 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.875 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.907 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.911 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.955 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.961 232432 DEBUG nova.policy [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c90fe1780904a6098015abc66b38d9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.973 232432 DEBUG nova.objects.instance [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.994 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.994 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Ensure instance console log exists: /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.995 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.995 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:50 compute-2 nova_compute[232428]: 2025-11-29 08:09:50.996 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.014 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.015 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.016 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.016 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.049 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.053 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:51.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.401 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.528 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] resizing rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:09:51 compute-2 ceph-mon[77138]: pgmap v2076: 305 pgs: 305 active+clean; 88 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:09:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1230306831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3312907837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4061477850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/5507704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.730 232432 DEBUG nova.objects.instance [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1df8ad18-c052-4ab6-9941-d61ae842ea2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.750 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.750 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Ensure instance console log exists: /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.751 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.751 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:51 compute-2 nova_compute[232428]: 2025-11-29 08:09:51.752 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.009 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Successfully updated port: 25f618be-492d-4ac9-9c9c-6583e0402572 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.044 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.044 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.045 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.133 232432 DEBUG nova.compute.manager [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-changed-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.134 232432 DEBUG nova.compute.manager [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Refreshing instance network info cache due to event network-changed-25f618be-492d-4ac9-9c9c-6583e0402572. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.135 232432 DEBUG oslo_concurrency.lockutils [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.225 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.225 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.269 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4091256513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3349090069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.703 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:52.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.896 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.897 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4508MB free_disk=20.929569244384766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.897 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:52 compute-2 nova_compute[232428]: 2025-11-29 08:09:52.898 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.000 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ed45397-ad95-4437-a0df-a49849d1d9bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.000 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 1df8ad18-c052-4ab6-9941-d61ae842ea2d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.000 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.001 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.015 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Successfully created port: 43f2b65c-3445-4ada-b1cf-f8725a3e53db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.073 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:53.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.474 232432 DEBUG nova.network.neutron [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.493 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.493 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance network_info: |[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.494 232432 DEBUG oslo_concurrency.lockutils [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.494 232432 DEBUG nova.network.neutron [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Refreshing network info cache for port 25f618be-492d-4ac9-9c9c-6583e0402572 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.498 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start _get_guest_xml network_info=[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.503 232432 WARNING nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.507 232432 DEBUG nova.virt.libvirt.host [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.508 232432 DEBUG nova.virt.libvirt.host [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.519 232432 DEBUG nova.virt.libvirt.host [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.520 232432 DEBUG nova.virt.libvirt.host [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.522 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.522 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.523 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.523 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.523 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.524 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.524 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.524 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.524 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.525 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.525 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.525 232432 DEBUG nova.virt.hardware [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.530 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1307490669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.570 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.580 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:53 compute-2 ceph-mon[77138]: pgmap v2077: 305 pgs: 305 active+clean; 169 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.0 MiB/s wr, 67 op/s
Nov 29 08:09:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3349090069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1307490669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.614 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.651 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.652 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1435633006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:53 compute-2 nova_compute[232428]: 2025-11-29 08:09:53.995 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.048 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.056 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/605426536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.580 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.583 232432 DEBUG nova.virt.libvirt.vif [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.584 232432 DEBUG nova.network.os_vif_util [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.585 232432 DEBUG nova.network.os_vif_util [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.587 232432 DEBUG nova.objects.instance [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.611 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <uuid>2ed45397-ad95-4437-a0df-a49849d1d9bf</uuid>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <name>instance-00000064</name>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1161621840</nova:name>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:09:53</nova:creationTime>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <nova:port uuid="25f618be-492d-4ac9-9c9c-6583e0402572">
Nov 29 08:09:54 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <system>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="serial">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="uuid">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </system>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <os>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </os>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <features>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </features>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk">
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config">
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:54 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e8:62:3f"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <target dev="tap25f618be-49"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log" append="off"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <video>
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </video>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:09:54 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:09:54 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:09:54 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:09:54 compute-2 nova_compute[232428]: </domain>
Nov 29 08:09:54 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1886974198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1435633006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1613395341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/605426536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.613 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Preparing to wait for external event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.614 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.615 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.616 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.617 232432 DEBUG nova.virt.libvirt.vif [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.618 232432 DEBUG nova.network.os_vif_util [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.619 232432 DEBUG nova.network.os_vif_util [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.620 232432 DEBUG os_vif [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.623 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.625 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.630679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794630733, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 346, "num_deletes": 251, "total_data_size": 240521, "memory_usage": 248552, "flush_reason": "Manual Compaction"}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794633844, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 158259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42633, "largest_seqno": 42974, "table_properties": {"data_size": 156127, "index_size": 296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5619, "raw_average_key_size": 18, "raw_value_size": 151842, "raw_average_value_size": 509, "num_data_blocks": 13, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403787, "oldest_key_time": 1764403787, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3214 microseconds, and 1099 cpu microseconds.
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.633896) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 158259 bytes OK
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.633914) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635024) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635037) EVENT_LOG_v1 {"time_micros": 1764403794635032, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 238100, prev total WAL file size 238100, number of live WAL files 2.
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635535) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(154KB)], [78(10MB)]
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794635624, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10759094, "oldest_snapshot_seqno": -1}
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.638 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f618be-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.639 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f618be-49, col_values=(('external_ids', {'iface-id': '25f618be-492d-4ac9-9c9c-6583e0402572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:62:3f', 'vm-uuid': '2ed45397-ad95-4437-a0df-a49849d1d9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-2 NetworkManager[48993]: <info>  [1764403794.6438] manager: (tap25f618be-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.648 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.651 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.653 232432 INFO os_vif [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7070 keys, 8781012 bytes, temperature: kUnknown
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794714468, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8781012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8736491, "index_size": 25721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183739, "raw_average_key_size": 25, "raw_value_size": 8612684, "raw_average_value_size": 1218, "num_data_blocks": 1007, "num_entries": 7070, "num_filter_entries": 7070, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.714853) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8781012 bytes
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.716430) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.2 rd, 111.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(123.5) write-amplify(55.5) OK, records in: 7580, records dropped: 510 output_compression: NoCompression
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.716463) EVENT_LOG_v1 {"time_micros": 1764403794716449, "job": 48, "event": "compaction_finished", "compaction_time_micros": 78966, "compaction_time_cpu_micros": 37040, "output_level": 6, "num_output_files": 1, "total_output_size": 8781012, "num_input_records": 7580, "num_output_records": 7070, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794716678, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794720431, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.720480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.720488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.720491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.720494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:09:54.720497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.779 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.780 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.780 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:e8:62:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.781 232432 INFO nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Using config drive
Nov 29 08:09:54 compute-2 nova_compute[232428]: 2025-11-29 08:09:54.826 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:09:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:54.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:09:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.095 232432 DEBUG nova.network.neutron [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updated VIF entry in instance network info cache for port 25f618be-492d-4ac9-9c9c-6583e0402572. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.097 232432 DEBUG nova.network.neutron [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.137 232432 DEBUG oslo_concurrency.lockutils [req-3f4f455b-782e-4884-8aaf-008c565f7524 req-95937dee-c787-42c9-9981-2c683dee3fd1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.355 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Successfully updated port: 43f2b65c-3445-4ada-b1cf-f8725a3e53db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.380 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.380 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquired lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.381 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.399 232432 INFO nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Creating config drive at /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.408 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppoibh6qm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.552 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppoibh6qm" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.599 232432 DEBUG nova.storage.rbd_utils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.605 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:55 compute-2 ceph-mon[77138]: pgmap v2078: 305 pgs: 305 active+clean; 201 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 6.2 MiB/s wr, 98 op/s
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.790 232432 DEBUG oslo_concurrency.processutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config 2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.792 232432 INFO nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Deleting local config drive /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/disk.config because it was imported into RBD.
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.908 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:55 compute-2 kernel: tap25f618be-49: entered promiscuous mode
Nov 29 08:09:55 compute-2 NetworkManager[48993]: <info>  [1764403795.9222] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:55 compute-2 ovn_controller[134375]: 2025-11-29T08:09:55Z|00457|binding|INFO|Claiming lport 25f618be-492d-4ac9-9c9c-6583e0402572 for this chassis.
Nov 29 08:09:55 compute-2 ovn_controller[134375]: 2025-11-29T08:09:55Z|00458|binding|INFO|25f618be-492d-4ac9-9c9c-6583e0402572: Claiming fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.930 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:55 compute-2 nova_compute[232428]: 2025-11-29 08:09:55.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.944 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.945 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.947 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f3074e0a-8f0c-45be-9dc9-dddb396befea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.966 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.969 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.969 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4480ad68-4917-47fb-b3c9-26039d08a691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.971 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ffc7cc-520e-4162-b2d2-8d94c3304c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:55 compute-2 systemd-udevd[274193]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:55 compute-2 systemd-machined[194747]: New machine qemu-42-instance-00000064.
Nov 29 08:09:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:55.990 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[7d299a98-71ee-4bff-bea2-1e99a6277516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:55 compute-2 systemd[1]: Started Virtual Machine qemu-42-instance-00000064.
Nov 29 08:09:55 compute-2 NetworkManager[48993]: <info>  [1764403795.9968] device (tap25f618be-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:55 compute-2 NetworkManager[48993]: <info>  [1764403795.9992] device (tap25f618be-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.026 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d99772f8-6c36-452d-ad4b-ea42a517b0cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.042 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 ovn_controller[134375]: 2025-11-29T08:09:56Z|00459|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 ovn-installed in OVS
Nov 29 08:09:56 compute-2 ovn_controller[134375]: 2025-11-29T08:09:56Z|00460|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 up in Southbound
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.073 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0de7d4-9a58-4d18-bfe2-56052d2d6aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 systemd-udevd[274196]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:56 compute-2 NetworkManager[48993]: <info>  [1764403796.0829] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.083 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1be61460-6908-43d8-9ee5-9039ebbc239b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.141 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f1022deb-b466-41a7-b31f-cf4ace8e7cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.149 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b33c8d3d-3b34-4169-afae-df576d630338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 NetworkManager[48993]: <info>  [1764403796.1847] device (tap988c10fa-90): carrier: link connected
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.193 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[20351848-4173-41e9-8260-de41cd55e069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.217 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4e6e87-6458-45e4-bb7e-869fcb6969b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676245, 'reachable_time': 19865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274225, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.243 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1b269027-8135-4774-a5bd-51df8e19d6fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676245, 'tstamp': 676245}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274226, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.266 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3837df31-5fc8-4622-8b36-eedf49a7c17a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676245, 'reachable_time': 19865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274227, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.306 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4547af-4298-46cd-a59d-baebfc2d36eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.395 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3170497b-090d-45fb-a084-3e05b3c89b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.399 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.400 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.401 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:56 compute-2 NetworkManager[48993]: <info>  [1764403796.4049] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 08:09:56 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.404 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.410 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:56 compute-2 ovn_controller[134375]: 2025-11-29T08:09:56Z|00461|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.412 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.447 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.448 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b416792-c8ac-44d2-b2b4-a672c6751b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.450 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:56.451 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.554 232432 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.555 232432 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.556 232432 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.557 232432 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.557 232432 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Processing event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.636 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.637 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403796.6371193, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.638 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Started (Lifecycle Event)
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.643 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3309264447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.648 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance spawned successfully.
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.649 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.653 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.703 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.730 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.739 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.740 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.741 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.742 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.742 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.743 232432 DEBUG nova.virt.libvirt.driver [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.794 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.794 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403796.6383283, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.795 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Paused (Lifecycle Event)
Nov 29 08:09:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:56.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.861 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.866 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403796.6432042, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.866 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.886 232432 INFO nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Took 7.41 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.886 232432 DEBUG nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.888 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.894 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.934 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.940 232432 DEBUG nova.compute.manager [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Received event network-changed-43f2b65c-3445-4ada-b1cf-f8725a3e53db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.940 232432 DEBUG nova.compute.manager [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Refreshing instance network info cache due to event network-changed-43f2b65c-3445-4ada-b1cf-f8725a3e53db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.941 232432 DEBUG oslo_concurrency.lockutils [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:56 compute-2 podman[274302]: 2025-11-29 08:09:56.96127819 +0000 UTC m=+0.076660721 container create d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.971 232432 INFO nova.compute.manager [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Took 8.47 seconds to build instance.
Nov 29 08:09:56 compute-2 nova_compute[232428]: 2025-11-29 08:09:56.993 232432 DEBUG oslo_concurrency.lockutils [None req-a474d459-00dd-44e5-815a-7a1a995f1de8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:56 compute-2 systemd[1]: Started libpod-conmon-d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466.scope.
Nov 29 08:09:57 compute-2 podman[274302]: 2025-11-29 08:09:56.918174406 +0000 UTC m=+0.033556967 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:57 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:09:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dddaf1b27cdcf8c51ed399abdf9fbb11ac38ba26c4e9f988ea81eda53636b39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:57 compute-2 podman[274302]: 2025-11-29 08:09:57.057998274 +0000 UTC m=+0.173380815 container init d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:09:57 compute-2 podman[274302]: 2025-11-29 08:09:57.064816567 +0000 UTC m=+0.180199088 container start d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:09:57 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [NOTICE]   (274337) : New worker (274342) forked
Nov 29 08:09:57 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [NOTICE]   (274337) : Loading success.
Nov 29 08:09:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:57 compute-2 podman[274315]: 2025-11-29 08:09:57.18394427 +0000 UTC m=+0.172084545 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.496 232432 DEBUG nova.network.neutron [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Updating instance_info_cache with network_info: [{"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.518 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Releasing lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.518 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Instance network_info: |[{"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.519 232432 DEBUG oslo_concurrency.lockutils [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.519 232432 DEBUG nova.network.neutron [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Refreshing network info cache for port 43f2b65c-3445-4ada-b1cf-f8725a3e53db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.523 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Start _get_guest_xml network_info=[{"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.530 232432 WARNING nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.541 232432 DEBUG nova.virt.libvirt.host [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.543 232432 DEBUG nova.virt.libvirt.host [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.548 232432 DEBUG nova.virt.libvirt.host [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.549 232432 DEBUG nova.virt.libvirt.host [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.551 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.551 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.552 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.553 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.554 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.554 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.555 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.555 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.556 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.557 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.557 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.558 232432 DEBUG nova.virt.hardware [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:09:57 compute-2 nova_compute[232428]: 2025-11-29 08:09:57.562 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:57 compute-2 ceph-mon[77138]: pgmap v2079: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 8.9 MiB/s wr, 178 op/s
Nov 29 08:09:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4044706291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2543181314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.033 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.066 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.073 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2851880590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.525 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.530 232432 DEBUG nova.virt.libvirt.vif [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-867035356',display_name='tempest-ListServerFiltersTestJSON-instance-867035356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-867035356',id=101,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-e0t5he6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:50Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=1df8ad18-c052-4ab6-9941-d61ae842ea2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.531 232432 DEBUG nova.network.os_vif_util [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.532 232432 DEBUG nova.network.os_vif_util [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.534 232432 DEBUG nova.objects.instance [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1df8ad18-c052-4ab6-9941-d61ae842ea2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.554 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <uuid>1df8ad18-c052-4ab6-9941-d61ae842ea2d</uuid>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <name>instance-00000065</name>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-867035356</nova:name>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:09:57</nova:creationTime>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:user uuid="7c90fe1780904a6098015abc66b38d9d">tempest-ListServerFiltersTestJSON-207904478-project-member</nova:user>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:project uuid="baca94adaa5145a6b9cef930bff28fa4">tempest-ListServerFiltersTestJSON-207904478</nova:project>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <nova:port uuid="43f2b65c-3445-4ada-b1cf-f8725a3e53db">
Nov 29 08:09:58 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <system>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="serial">1df8ad18-c052-4ab6-9941-d61ae842ea2d</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="uuid">1df8ad18-c052-4ab6-9941-d61ae842ea2d</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </system>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <os>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </os>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <features>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </features>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk">
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config">
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:09:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:b1:61:58"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <target dev="tap43f2b65c-34"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/console.log" append="off"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <video>
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </video>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:09:58 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:09:58 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:09:58 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:09:58 compute-2 nova_compute[232428]: </domain>
Nov 29 08:09:58 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.562 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Preparing to wait for external event network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.562 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.563 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.563 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.564 232432 DEBUG nova.virt.libvirt.vif [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-867035356',display_name='tempest-ListServerFiltersTestJSON-instance-867035356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-867035356',id=101,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-e0t5he6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:50Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=1df8ad18-c052-4ab6-9941-d61ae842ea2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.565 232432 DEBUG nova.network.os_vif_util [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.566 232432 DEBUG nova.network.os_vif_util [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.566 232432 DEBUG os_vif [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.569 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.570 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.575 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.575 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43f2b65c-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.576 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43f2b65c-34, col_values=(('external_ids', {'iface-id': '43f2b65c-3445-4ada-b1cf-f8725a3e53db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:61:58', 'vm-uuid': '1df8ad18-c052-4ab6-9941-d61ae842ea2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:58 compute-2 NetworkManager[48993]: <info>  [1764403798.5791] manager: (tap43f2b65c-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.587 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.588 232432 INFO os_vif [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34')
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.649 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.650 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.650 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No VIF found with MAC fa:16:3e:b1:61:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.651 232432 INFO nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Using config drive
Nov 29 08:09:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2543181314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2851880590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.697 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.719 232432 DEBUG nova.compute.manager [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.720 232432 DEBUG oslo_concurrency.lockutils [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.720 232432 DEBUG oslo_concurrency.lockutils [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.721 232432 DEBUG oslo_concurrency.lockutils [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.721 232432 DEBUG nova.compute.manager [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:58 compute-2 nova_compute[232428]: 2025-11-29 08:09:58.721 232432 WARNING nova.compute.manager [req-c5704ef7-43ba-4631-8baf-bf2255b4a5a3 req-28c8d190-19d5-40a2-a027-e11d6eccf2da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:09:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:58.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:09:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:09:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.172 232432 INFO nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Creating config drive at /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.179 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe0l734uv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.323 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe0l734uv" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.372 232432 DEBUG nova.storage.rbd_utils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.377 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.596 232432 DEBUG oslo_concurrency.processutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config 1df8ad18-c052-4ab6-9941-d61ae842ea2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.598 232432 INFO nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Deleting local config drive /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d/disk.config because it was imported into RBD.
Nov 29 08:09:59 compute-2 kernel: tap43f2b65c-34: entered promiscuous mode
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.6956] manager: (tap43f2b65c-34): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.699 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:59 compute-2 ovn_controller[134375]: 2025-11-29T08:09:59Z|00462|binding|INFO|Claiming lport 43f2b65c-3445-4ada-b1cf-f8725a3e53db for this chassis.
Nov 29 08:09:59 compute-2 ovn_controller[134375]: 2025-11-29T08:09:59Z|00463|binding|INFO|43f2b65c-3445-4ada-b1cf-f8725a3e53db: Claiming fa:16:3e:b1:61:58 10.100.0.9
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.715 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:61:58 10.100.0.9'], port_security=['fa:16:3e:b1:61:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1df8ad18-c052-4ab6-9941-d61ae842ea2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=43f2b65c-3445-4ada-b1cf-f8725a3e53db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.718 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 43f2b65c-3445-4ada-b1cf-f8725a3e53db in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 bound to our chassis
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.722 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 08:09:59 compute-2 ceph-mon[77138]: pgmap v2080: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.9 MiB/s wr, 192 op/s
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.750 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50724065-f4dd-438f-9449-dcafa67b81f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.752 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a0b70e3-11 in ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:59 compute-2 systemd-machined[194747]: New machine qemu-43-instance-00000065.
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.755 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a0b70e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.755 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0d5c16-fa6d-40d2-a479-4aabf4724a39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.759 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f23b0756-40d7-437f-9dd6-503d08748ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:59 compute-2 ovn_controller[134375]: 2025-11-29T08:09:59Z|00464|binding|INFO|Setting lport 43f2b65c-3445-4ada-b1cf-f8725a3e53db ovn-installed in OVS
Nov 29 08:09:59 compute-2 ovn_controller[134375]: 2025-11-29T08:09:59Z|00465|binding|INFO|Setting lport 43f2b65c-3445-4ada-b1cf-f8725a3e53db up in Southbound
Nov 29 08:09:59 compute-2 systemd[1]: Started Virtual Machine qemu-43-instance-00000065.
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:59 compute-2 systemd-udevd[274499]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.793 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea30a08-6277-4eff-99b4-c0ed1d588eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.8077] device (tap43f2b65c-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.8098] device (tap43f2b65c-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.815 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3606bec7-0055-46cc-ba19-67ad75d5d32f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.8553] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.8567] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 08:09:59 compute-2 nova_compute[232428]: 2025-11-29 08:09:59.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.875 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[65b80d2b-c894-4907-8a02-a58f263d3bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.886 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9e6db0-56f8-4b6c-a86e-a55a10354a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.8879] manager: (tap9a0b70e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.944 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4d293f23-4b63-47a3-a1e9-ddf81a830d84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.948 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[034d9c3a-6848-41a8-bfda-765fc5431145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:59 compute-2 NetworkManager[48993]: <info>  [1764403799.9840] device (tap9a0b70e3-10): carrier: link connected
Nov 29 08:09:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:09:59.993 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[523f8e8e-f5a6-467f-8430-8c3db044ff6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.020 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.019 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e546ed-fa6a-4135-b07a-77d5745394d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676625, 'reachable_time': 26413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274530, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 ovn_controller[134375]: 2025-11-29T08:10:00Z|00466|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.045 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[72424c61-e64d-4249-a999-8d1c85789df9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676625, 'tstamp': 676625}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274531, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.070 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[227da7a5-106e-4431-a448-a2c758f11e1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676625, 'reachable_time': 26413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274532, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.110 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f23ce5-8a90-4245-bab5-9a2d834f39ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.188 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[391e4801-dfcd-441a-abac-5f8b58dec5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.190 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.191 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.191 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a0b70e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:00 compute-2 kernel: tap9a0b70e3-10: entered promiscuous mode
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.194 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-2 NetworkManager[48993]: <info>  [1764403800.1961] manager: (tap9a0b70e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.197 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a0b70e3-10, col_values=(('external_ids', {'iface-id': '564ded89-d5cd-4ed0-aa20-e32de45b6125'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.199 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-2 ovn_controller[134375]: 2025-11-29T08:10:00Z|00467|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.217 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.219 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[685f3d42-b7c2-48a7-a620-c95b1d3b8ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.220 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:10:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:00.223 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'env', 'PROCESS_TAG=haproxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.461 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403800.4600623, 1df8ad18-c052-4ab6-9941-d61ae842ea2d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.461 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] VM Started (Lifecycle Event)
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.709 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.716 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403800.460399, 1df8ad18-c052-4ab6-9941-d61ae842ea2d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.717 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] VM Paused (Lifecycle Event)
Nov 29 08:10:00 compute-2 sshd-session[272938]: Received disconnect from 114.66.38.28 port 36612:11:  [preauth]
Nov 29 08:10:00 compute-2 sshd-session[272938]: Disconnected from authenticating user root 114.66.38.28 port 36612 [preauth]
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.764 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.770 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.821 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:00 compute-2 podman[274607]: 2025-11-29 08:10:00.74230649 +0000 UTC m=+0.062805178 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:10:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:10:00 compute-2 podman[274607]: 2025-11-29 08:10:00.961394809 +0000 UTC m=+0.281893477 container create 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.975 232432 DEBUG nova.network.neutron [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Updated VIF entry in instance network info cache for port 43f2b65c-3445-4ada-b1cf-f8725a3e53db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:00 compute-2 nova_compute[232428]: 2025-11-29 08:10:00.976 232432 DEBUG nova.network.neutron [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Updating instance_info_cache with network_info: [{"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.010 232432 DEBUG oslo_concurrency.lockutils [req-321e3419-9e73-4646-a193-faa0ee077bdf req-5ac6290a-09cc-4f9a-b333-d15852793bc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1df8ad18-c052-4ab6-9941-d61ae842ea2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:01 compute-2 systemd[1]: Started libpod-conmon-8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936.scope.
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.030 232432 DEBUG nova.compute.manager [req-a8fd156c-10e1-4740-b455-928d26c48aee req-75671797-bfee-4760-8c7a-e15c8c8c9cc8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Received event network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.030 232432 DEBUG oslo_concurrency.lockutils [req-a8fd156c-10e1-4740-b455-928d26c48aee req-75671797-bfee-4760-8c7a-e15c8c8c9cc8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.030 232432 DEBUG oslo_concurrency.lockutils [req-a8fd156c-10e1-4740-b455-928d26c48aee req-75671797-bfee-4760-8c7a-e15c8c8c9cc8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.031 232432 DEBUG oslo_concurrency.lockutils [req-a8fd156c-10e1-4740-b455-928d26c48aee req-75671797-bfee-4760-8c7a-e15c8c8c9cc8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.031 232432 DEBUG nova.compute.manager [req-a8fd156c-10e1-4740-b455-928d26c48aee req-75671797-bfee-4760-8c7a-e15c8c8c9cc8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Processing event network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.031 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.033 232432 DEBUG nova.compute.manager [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-changed-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.034 232432 DEBUG nova.compute.manager [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Refreshing instance network info cache due to event network-changed-25f618be-492d-4ac9-9c9c-6583e0402572. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.034 232432 DEBUG oslo_concurrency.lockutils [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.034 232432 DEBUG oslo_concurrency.lockutils [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.034 232432 DEBUG nova.network.neutron [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Refreshing network info cache for port 25f618be-492d-4ac9-9c9c-6583e0402572 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.038 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403801.0384536, 1df8ad18-c052-4ab6-9941-d61ae842ea2d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.038 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] VM Resumed (Lifecycle Event)
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.040 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.048 232432 INFO nova.virt.libvirt.driver [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Instance spawned successfully.
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.049 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:10:01 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:10:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e29d1ab6e3cd8cf0c80bf4e28b9bfd469259677eff1a4ecfd494bfb209e927/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:01 compute-2 podman[274607]: 2025-11-29 08:10:01.084290549 +0000 UTC m=+0.404789257 container init 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:10:01 compute-2 podman[274607]: 2025-11-29 08:10:01.092948259 +0000 UTC m=+0.413446937 container start 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.103 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.110 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.114 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.114 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.115 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.115 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.116 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.116 232432 DEBUG nova.virt.libvirt.driver [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:01 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [NOTICE]   (274626) : New worker (274628) forked
Nov 29 08:10:01 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [NOTICE]   (274626) : Loading success.
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.157 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:01.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.226 232432 INFO nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Took 10.41 seconds to spawn the instance on the hypervisor.
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.227 232432 DEBUG nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.302 232432 INFO nova.compute.manager [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Took 11.51 seconds to build instance.
Nov 29 08:10:01 compute-2 nova_compute[232428]: 2025-11-29 08:10:01.325 232432 DEBUG oslo_concurrency.lockutils [None req-90e72cb1-2836-468c-8e28-4e084f76c75d 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:01 compute-2 ceph-mon[77138]: pgmap v2081: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.1 MiB/s wr, 191 op/s
Nov 29 08:10:02 compute-2 nova_compute[232428]: 2025-11-29 08:10:02.634 232432 DEBUG nova.network.neutron [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updated VIF entry in instance network info cache for port 25f618be-492d-4ac9-9c9c-6583e0402572. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:02 compute-2 nova_compute[232428]: 2025-11-29 08:10:02.635 232432 DEBUG nova.network.neutron [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:02 compute-2 nova_compute[232428]: 2025-11-29 08:10:02.661 232432 DEBUG oslo_concurrency.lockutils [req-2da17a6f-4e62-4f00-90df-57fab084bd64 req-7aca9d0f-2bf8-4d46-8ac8-cd90b9bbeccc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:03.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.245 232432 DEBUG nova.compute.manager [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Received event network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.246 232432 DEBUG oslo_concurrency.lockutils [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.247 232432 DEBUG oslo_concurrency.lockutils [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.248 232432 DEBUG oslo_concurrency.lockutils [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.249 232432 DEBUG nova.compute.manager [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] No waiting events found dispatching network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.250 232432 WARNING nova.compute.manager [req-65905559-3774-4f40-bb04-d46f615770c1 req-688ab97d-b3a7-485c-b199-805c4ff14cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Received unexpected event network-vif-plugged-43f2b65c-3445-4ada-b1cf-f8725a3e53db for instance with vm_state active and task_state None.
Nov 29 08:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:03.315 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:03.316 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:03.317 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:03 compute-2 nova_compute[232428]: 2025-11-29 08:10:03.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:04 compute-2 ceph-mon[77138]: pgmap v2082: 305 pgs: 305 active+clean; 274 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.2 MiB/s wr, 341 op/s
Nov 29 08:10:04 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 29 08:10:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:04.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:06 compute-2 ceph-mon[77138]: pgmap v2083: 305 pgs: 305 active+clean; 274 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 344 op/s
Nov 29 08:10:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3628253542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/338495937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:06 compute-2 nova_compute[232428]: 2025-11-29 08:10:06.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:06 compute-2 sudo[274640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:06 compute-2 sudo[274640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:06 compute-2 sudo[274640]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:06.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:06 compute-2 sudo[274665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:06 compute-2 sudo[274665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:06 compute-2 sudo[274665]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:06 compute-2 sudo[274690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:06 compute-2 sudo[274690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:06 compute-2 sudo[274690]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:07 compute-2 sudo[274715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:10:07 compute-2 sudo[274715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:07 compute-2 ceph-mon[77138]: pgmap v2084: 305 pgs: 305 active+clean; 255 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 4.1 MiB/s wr, 456 op/s
Nov 29 08:10:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:07 compute-2 sudo[274715]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:08 compute-2 sudo[274770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:08 compute-2 sudo[274770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:08 compute-2 sudo[274770]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:08 compute-2 nova_compute[232428]: 2025-11-29 08:10:08.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-2 sudo[274795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:08 compute-2 sudo[274795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:08 compute-2 sudo[274795]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:08.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:09.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:09 compute-2 ceph-mon[77138]: pgmap v2085: 305 pgs: 305 active+clean; 269 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 2.7 MiB/s wr, 404 op/s
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:10:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:11 compute-2 nova_compute[232428]: 2025-11-29 08:10:11.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:10:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:11.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:10:11 compute-2 ceph-mon[77138]: pgmap v2086: 305 pgs: 305 active+clean; 269 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 2.7 MiB/s wr, 364 op/s
Nov 29 08:10:12 compute-2 ovn_controller[134375]: 2025-11-29T08:10:12Z|00468|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 08:10:12 compute-2 ovn_controller[134375]: 2025-11-29T08:10:12Z|00469|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:10:12 compute-2 nova_compute[232428]: 2025-11-29 08:10:12.100 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:12 compute-2 ovn_controller[134375]: 2025-11-29T08:10:12Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:10:12 compute-2 ovn_controller[134375]: 2025-11-29T08:10:12Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:10:12 compute-2 nova_compute[232428]: 2025-11-29 08:10:12.370 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:12.372 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:12.374 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:10:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3375144778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:13.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:13 compute-2 nova_compute[232428]: 2025-11-29 08:10:13.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:13 compute-2 ceph-mon[77138]: pgmap v2087: 305 pgs: 305 active+clean; 334 MiB data, 905 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.0 MiB/s wr, 467 op/s
Nov 29 08:10:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1406636072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:13 compute-2 podman[274823]: 2025-11-29 08:10:13.751271555 +0000 UTC m=+0.106204041 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:10:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:15.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:15 compute-2 ceph-mon[77138]: pgmap v2088: 305 pgs: 305 active+clean; 352 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.0 MiB/s wr, 348 op/s
Nov 29 08:10:16 compute-2 nova_compute[232428]: 2025-11-29 08:10:16.057 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:16 compute-2 sudo[274845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:16 compute-2 sudo[274845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:16 compute-2 sudo[274845]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:16 compute-2 sudo[274870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:10:16 compute-2 sudo[274870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:16 compute-2 sudo[274870]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:16 compute-2 ovn_controller[134375]: 2025-11-29T08:10:16Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:61:58 10.100.0.9
Nov 29 08:10:16 compute-2 ovn_controller[134375]: 2025-11-29T08:10:16Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:61:58 10.100.0.9
Nov 29 08:10:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:10:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:10:17 compute-2 ceph-mon[77138]: pgmap v2089: 305 pgs: 305 active+clean; 385 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.7 MiB/s wr, 378 op/s
Nov 29 08:10:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:17 compute-2 podman[274896]: 2025-11-29 08:10:17.739110832 +0000 UTC m=+0.119138994 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:18 compute-2 nova_compute[232428]: 2025-11-29 08:10:18.608 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:19 compute-2 ceph-mon[77138]: pgmap v2090: 305 pgs: 305 active+clean; 397 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.9 MiB/s wr, 279 op/s
Nov 29 08:10:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:19.377 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3569949997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:21 compute-2 nova_compute[232428]: 2025-11-29 08:10:21.061 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:21.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:21 compute-2 ceph-mon[77138]: pgmap v2091: 305 pgs: 305 active+clean; 397 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.6 MiB/s wr, 251 op/s
Nov 29 08:10:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1762040387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:23.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:23 compute-2 ceph-mon[77138]: pgmap v2092: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 301 op/s
Nov 29 08:10:23 compute-2 nova_compute[232428]: 2025-11-29 08:10:23.613 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:24.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:25 compute-2 ceph-mon[77138]: pgmap v2093: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 234 op/s
Nov 29 08:10:26 compute-2 nova_compute[232428]: 2025-11-29 08:10:26.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:26.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:27 compute-2 ceph-mon[77138]: pgmap v2094: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 239 op/s
Nov 29 08:10:27 compute-2 nova_compute[232428]: 2025-11-29 08:10:27.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:27 compute-2 podman[274922]: 2025-11-29 08:10:27.737552772 +0000 UTC m=+0.119746105 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:10:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755924019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755924019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2755924019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2755924019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-2 nova_compute[232428]: 2025-11-29 08:10:28.635 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-2 sudo[274949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:28 compute-2 sudo[274949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:28 compute-2 sudo[274949]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:28 compute-2 sudo[274974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:28 compute-2 sudo[274974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:28 compute-2 sudo[274974]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:29 compute-2 ceph-mon[77138]: pgmap v2095: 305 pgs: 305 active+clean; 412 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 177 op/s
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.911 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.913 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.913 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.914 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.915 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.919 232432 INFO nova.compute.manager [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Terminating instance
Nov 29 08:10:29 compute-2 nova_compute[232428]: 2025-11-29 08:10:29.921 232432 DEBUG nova.compute.manager [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:10:29 compute-2 kernel: tap43f2b65c-34 (unregistering): left promiscuous mode
Nov 29 08:10:29 compute-2 NetworkManager[48993]: <info>  [1764403829.9890] device (tap43f2b65c-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 ovn_controller[134375]: 2025-11-29T08:10:30Z|00470|binding|INFO|Releasing lport 43f2b65c-3445-4ada-b1cf-f8725a3e53db from this chassis (sb_readonly=0)
Nov 29 08:10:30 compute-2 ovn_controller[134375]: 2025-11-29T08:10:30Z|00471|binding|INFO|Setting lport 43f2b65c-3445-4ada-b1cf-f8725a3e53db down in Southbound
Nov 29 08:10:30 compute-2 ovn_controller[134375]: 2025-11-29T08:10:30Z|00472|binding|INFO|Removing iface tap43f2b65c-34 ovn-installed in OVS
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.010 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.028 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:61:58 10.100.0.9'], port_security=['fa:16:3e:b1:61:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1df8ad18-c052-4ab6-9941-d61ae842ea2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=43f2b65c-3445-4ada-b1cf-f8725a3e53db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.030 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 43f2b65c-3445-4ada-b1cf-f8725a3e53db in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 unbound from our chassis
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.033 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.035 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9bde0f3a-4c7d-439f-8e85-b8fd030e2bde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.037 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace which is not needed anymore
Nov 29 08:10:30 compute-2 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 29 08:10:30 compute-2 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000065.scope: Consumed 16.395s CPU time.
Nov 29 08:10:30 compute-2 systemd-machined[194747]: Machine qemu-43-instance-00000065 terminated.
Nov 29 08:10:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.185 232432 INFO nova.virt.libvirt.driver [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Instance destroyed successfully.
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.186 232432 DEBUG nova.objects.instance [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'resources' on Instance uuid 1df8ad18-c052-4ab6-9941-d61ae842ea2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:30 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [NOTICE]   (274626) : haproxy version is 2.8.14-c23fe91
Nov 29 08:10:30 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [NOTICE]   (274626) : path to executable is /usr/sbin/haproxy
Nov 29 08:10:30 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [WARNING]  (274626) : Exiting Master process...
Nov 29 08:10:30 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [ALERT]    (274626) : Current worker (274628) exited with code 143 (Terminated)
Nov 29 08:10:30 compute-2 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[274622]: [WARNING]  (274626) : All workers exited. Exiting... (0)
Nov 29 08:10:30 compute-2 systemd[1]: libpod-8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936.scope: Deactivated successfully.
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.293 232432 DEBUG nova.virt.libvirt.vif [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-867035356',display_name='tempest-ListServerFiltersTestJSON-instance-867035356',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-867035356',id=101,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-e0t5he6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:01Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=1df8ad18-c052-4ab6-9941-d61ae842ea2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.294 232432 DEBUG nova.network.os_vif_util [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "address": "fa:16:3e:b1:61:58", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f2b65c-34", "ovs_interfaceid": "43f2b65c-3445-4ada-b1cf-f8725a3e53db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:30 compute-2 podman[275035]: 2025-11-29 08:10:30.296130841 +0000 UTC m=+0.077242819 container died 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.296 232432 DEBUG nova.network.os_vif_util [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.297 232432 DEBUG os_vif [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.302 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43f2b65c-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.317 232432 INFO os_vif [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:61:58,bridge_name='br-int',has_traffic_filtering=True,id=43f2b65c-3445-4ada-b1cf-f8725a3e53db,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f2b65c-34')
Nov 29 08:10:30 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936-userdata-shm.mount: Deactivated successfully.
Nov 29 08:10:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-d2e29d1ab6e3cd8cf0c80bf4e28b9bfd469259677eff1a4ecfd494bfb209e927-merged.mount: Deactivated successfully.
Nov 29 08:10:30 compute-2 podman[275035]: 2025-11-29 08:10:30.378204889 +0000 UTC m=+0.159316867 container cleanup 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 08:10:30 compute-2 systemd[1]: libpod-conmon-8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936.scope: Deactivated successfully.
Nov 29 08:10:30 compute-2 podman[275085]: 2025-11-29 08:10:30.502196134 +0000 UTC m=+0.076909128 container remove 8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.515 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4cf880-cc10-4ba1-abaf-0bf4bf2cd66e]: (4, ('Sat Nov 29 08:10:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936)\n8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936\nSat Nov 29 08:10:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936)\n8ffacffb964f1d00f5ee2b2511706856847c0584a933d0f210569440bc190936\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.518 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b07cf4c-b67b-4d3e-bfc8-e738f78471fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.520 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 kernel: tap9a0b70e3-10: left promiscuous mode
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.557 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.562 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[baec24d6-c5b7-4fe4-8eef-f9c96e3ffb8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.578 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1695e7fd-3bc0-442a-b33d-91578009f734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.581 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de18f044-6176-4b0c-adba-1c3b76bb6666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.618 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b11232-9a8e-4e63-8af2-8bded9cd7a97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676613, 'reachable_time': 18593, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275100, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.623 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:10:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:30.623 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c19a9828-e45b-4d11-be8f-6a56d56e84e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:30 compute-2 systemd[1]: run-netns-ovnmeta\x2d9a0b70e3\x2d1894\x2d47e1\x2dbc43\x2d1721fdb1c9d6.mount: Deactivated successfully.
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.831 232432 INFO nova.virt.libvirt.driver [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Deleting instance files /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d_del
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.832 232432 INFO nova.virt.libvirt.driver [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Deletion of /var/lib/nova/instances/1df8ad18-c052-4ab6-9941-d61ae842ea2d_del complete
Nov 29 08:10:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.924 232432 INFO nova.compute.manager [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.925 232432 DEBUG oslo.service.loopingcall [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.926 232432 DEBUG nova.compute.manager [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:10:30 compute-2 nova_compute[232428]: 2025-11-29 08:10:30.927 232432 DEBUG nova.network.neutron [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.069 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:10:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:31.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:10:31 compute-2 ceph-mon[77138]: pgmap v2096: 305 pgs: 305 active+clean; 412 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 642 KiB/s wr, 133 op/s
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.598 232432 DEBUG nova.network.neutron [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.629 232432 INFO nova.compute.manager [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Took 0.70 seconds to deallocate network for instance.
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.674 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.675 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.713 232432 DEBUG nova.compute.manager [req-f02f7b7e-7230-4715-bacb-f2bc3a9021d1 req-991758af-5942-4eaa-b3da-e2bed32e11fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Received event network-vif-deleted-43f2b65c-3445-4ada-b1cf-f8725a3e53db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:31 compute-2 nova_compute[232428]: 2025-11-29 08:10:31.776 232432 DEBUG oslo_concurrency.processutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3220788781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.283 232432 DEBUG oslo_concurrency.processutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.295 232432 DEBUG nova.compute.provider_tree [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.339 232432 DEBUG nova.scheduler.client.report [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.370 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.415 232432 INFO nova.scheduler.client.report [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Deleted allocations for instance 1df8ad18-c052-4ab6-9941-d61ae842ea2d
Nov 29 08:10:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3220788781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:32 compute-2 nova_compute[232428]: 2025-11-29 08:10:32.511 232432 DEBUG oslo_concurrency.lockutils [None req-7bcea557-1d6e-4aba-8698-2380a0911234 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "1df8ad18-c052-4ab6-9941-d61ae842ea2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:33 compute-2 ceph-mon[77138]: pgmap v2097: 305 pgs: 305 active+clean; 436 MiB data, 971 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Nov 29 08:10:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.110 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.111 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.112 232432 INFO nova.compute.manager [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Rebooting instance
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.130 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.130 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.131 232432 DEBUG nova.network.neutron [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:10:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:35.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:35 compute-2 nova_compute[232428]: 2025-11-29 08:10:35.307 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:35 compute-2 ceph-mon[77138]: pgmap v2098: 305 pgs: 305 active+clean; 414 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 08:10:36 compute-2 nova_compute[232428]: 2025-11-29 08:10:36.071 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:37.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:37 compute-2 ceph-mon[77138]: pgmap v2099: 305 pgs: 305 active+clean; 292 MiB data, 895 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Nov 29 08:10:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1567484573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:37 compute-2 nova_compute[232428]: 2025-11-29 08:10:37.998 232432 DEBUG nova.network.neutron [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.018 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.020 232432 DEBUG nova.compute.manager [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:38 compute-2 kernel: tap25f618be-49 (unregistering): left promiscuous mode
Nov 29 08:10:38 compute-2 NetworkManager[48993]: <info>  [1764403838.2441] device (tap25f618be-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.262 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 ovn_controller[134375]: 2025-11-29T08:10:38Z|00473|binding|INFO|Releasing lport 25f618be-492d-4ac9-9c9c-6583e0402572 from this chassis (sb_readonly=0)
Nov 29 08:10:38 compute-2 ovn_controller[134375]: 2025-11-29T08:10:38Z|00474|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 down in Southbound
Nov 29 08:10:38 compute-2 ovn_controller[134375]: 2025-11-29T08:10:38Z|00475|binding|INFO|Removing iface tap25f618be-49 ovn-installed in OVS
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.264 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.268 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.271 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.273 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.274 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[18dba97c-3395-4c39-bb89-befb5d55888a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.276 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:10:38 compute-2 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 08:10:38 compute-2 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000064.scope: Consumed 17.484s CPU time.
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.307 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 systemd-machined[194747]: Machine qemu-42-instance-00000064 terminated.
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.435 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.445 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.446 232432 DEBUG nova.objects.instance [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.465 232432 DEBUG nova.virt.libvirt.vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.466 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.467 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.468 232432 DEBUG os_vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.471 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f618be-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.477 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.481 232432 INFO os_vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.494 232432 DEBUG nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start _get_guest_xml network_info=[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.500 232432 WARNING nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.506 232432 DEBUG nova.virt.libvirt.host [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.507 232432 DEBUG nova.virt.libvirt.host [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.511 232432 DEBUG nova.virt.libvirt.host [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.512 232432 DEBUG nova.virt.libvirt.host [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.514 232432 DEBUG nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [NOTICE]   (274337) : haproxy version is 2.8.14-c23fe91
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [NOTICE]   (274337) : path to executable is /usr/sbin/haproxy
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [WARNING]  (274337) : Exiting Master process...
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [WARNING]  (274337) : Exiting Master process...
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.514 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.515 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.515 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.515 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.516 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.516 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.516 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [ALERT]    (274337) : Current worker (274342) exited with code 143 (Terminated)
Nov 29 08:10:38 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[274318]: [WARNING]  (274337) : All workers exited. Exiting... (0)
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.517 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.517 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.517 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.518 232432 DEBUG nova.virt.hardware [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.518 232432 DEBUG nova.objects.instance [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:38 compute-2 systemd[1]: libpod-d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466.scope: Deactivated successfully.
Nov 29 08:10:38 compute-2 podman[275163]: 2025-11-29 08:10:38.526578864 +0000 UTC m=+0.062220000 container died d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.536 232432 DEBUG oslo_concurrency.processutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466-userdata-shm.mount: Deactivated successfully.
Nov 29 08:10:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-0dddaf1b27cdcf8c51ed399abdf9fbb11ac38ba26c4e9f988ea81eda53636b39-merged.mount: Deactivated successfully.
Nov 29 08:10:38 compute-2 podman[275163]: 2025-11-29 08:10:38.582657432 +0000 UTC m=+0.118298548 container cleanup d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:10:38 compute-2 systemd[1]: libpod-conmon-d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466.scope: Deactivated successfully.
Nov 29 08:10:38 compute-2 podman[275195]: 2025-11-29 08:10:38.667874338 +0000 UTC m=+0.051349291 container remove d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.676 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[228d92c4-f4a3-43fa-8128-3d0e9de88ff0]: (4, ('Sat Nov 29 08:10:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466)\nd661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466\nSat Nov 29 08:10:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (d661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466)\nd661d5190bc4d17bdb6cb9131fed8c952baaf3a636eb6b46c25a875b649c1466\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.678 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fb5d49-3b37-4af6-ac6b-b9ea505fca31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.680 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.682 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.689 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8827b363-dd0c-440c-8589-57b027b41b70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 nova_compute[232428]: 2025-11-29 08:10:38.699 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.707 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a81167fd-4080-4ff5-bb70-f8afc296ed80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.709 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6953ba22-3c9d-4c6e-a165-e6270181b7a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.740 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b99c4ef-c15a-4849-87e2-7754710c95c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676233, 'reachable_time': 42416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275210, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.745 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:10:38 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:10:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:38.745 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[502986d2-737d-4125-90bc-9489a15a31e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:38.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.050 232432 DEBUG nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.051 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.052 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.052 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.052 232432 DEBUG nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.053 232432 WARNING nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state reboot_started_hard.
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.053 232432 DEBUG nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.053 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.054 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.054 232432 DEBUG oslo_concurrency.lockutils [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.054 232432 DEBUG nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.054 232432 WARNING nova.compute.manager [req-ae3ebc9d-e7d3-444f-97fe-8da0a0727134 req-d25c19d5-4091-4fb3-a019-03c706d815c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state reboot_started_hard.
Nov 29 08:10:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2704748923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.120 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.121 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.129 232432 DEBUG oslo_concurrency.processutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.168 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.176 232432 DEBUG oslo_concurrency.processutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.219 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.294 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.295 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.304 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.305 232432 INFO nova.compute.claims [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.441 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:39 compute-2 ceph-mon[77138]: pgmap v2100: 305 pgs: 305 active+clean; 279 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 895 KiB/s rd, 2.2 MiB/s wr, 164 op/s
Nov 29 08:10:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2704748923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2531341004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.665 232432 DEBUG oslo_concurrency.processutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.669 232432 DEBUG nova.virt.libvirt.vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.670 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.671 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.673 232432 DEBUG nova.objects.instance [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.693 232432 DEBUG nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <uuid>2ed45397-ad95-4437-a0df-a49849d1d9bf</uuid>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <name>instance-00000064</name>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1161621840</nova:name>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:10:38</nova:creationTime>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <nova:port uuid="25f618be-492d-4ac9-9c9c-6583e0402572">
Nov 29 08:10:39 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <system>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="serial">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="uuid">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </system>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <os>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </os>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <features>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </features>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk">
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </source>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config">
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </source>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:10:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e8:62:3f"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <target dev="tap25f618be-49"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log" append="off"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <video>
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </video>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:10:39 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:10:39 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:10:39 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:10:39 compute-2 nova_compute[232428]: </domain>
Nov 29 08:10:39 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.696 232432 DEBUG nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.696 232432 DEBUG nova.virt.libvirt.driver [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.698 232432 DEBUG nova.virt.libvirt.vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.699 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.700 232432 DEBUG nova.network.os_vif_util [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.701 232432 DEBUG os_vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.702 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.703 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.704 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.708 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.708 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f618be-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.709 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f618be-49, col_values=(('external_ids', {'iface-id': '25f618be-492d-4ac9-9c9c-6583e0402572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:62:3f', 'vm-uuid': '2ed45397-ad95-4437-a0df-a49849d1d9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 NetworkManager[48993]: <info>  [1764403839.7125] manager: (tap25f618be-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.714 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.717 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.718 232432 INFO os_vif [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:10:39 compute-2 virtqemud[231977]: End of file while reading data: Input/output error
Nov 29 08:10:39 compute-2 virtqemud[231977]: End of file while reading data: Input/output error
Nov 29 08:10:39 compute-2 kernel: tap25f618be-49: entered promiscuous mode
Nov 29 08:10:39 compute-2 systemd-udevd[275131]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:10:39 compute-2 NetworkManager[48993]: <info>  [1764403839.8330] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.844 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 ovn_controller[134375]: 2025-11-29T08:10:39Z|00476|binding|INFO|Claiming lport 25f618be-492d-4ac9-9c9c-6583e0402572 for this chassis.
Nov 29 08:10:39 compute-2 ovn_controller[134375]: 2025-11-29T08:10:39Z|00477|binding|INFO|25f618be-492d-4ac9-9c9c-6583e0402572: Claiming fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:10:39 compute-2 NetworkManager[48993]: <info>  [1764403839.8495] device (tap25f618be-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:10:39 compute-2 NetworkManager[48993]: <info>  [1764403839.8509] device (tap25f618be-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.854 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.855 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.857 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:10:39 compute-2 ovn_controller[134375]: 2025-11-29T08:10:39Z|00478|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 ovn-installed in OVS
Nov 29 08:10:39 compute-2 ovn_controller[134375]: 2025-11-29T08:10:39Z|00479|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 up in Southbound
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.876 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[45d0b203-91c2-49d8-9bec-eade7b5182ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.878 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.883 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.883 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4a1939-5d50-462a-bfa0-321ca46017b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.885 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aaff25a3-802b-44ed-afb4-b2259fea1ca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 systemd-machined[194747]: New machine qemu-44-instance-00000064.
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.909 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f56166-340b-441d-a097-60c1859a7c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 systemd[1]: Started Virtual Machine qemu-44-instance-00000064.
Nov 29 08:10:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3384945818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.935 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[40b6dfd5-aa10-47cf-9dbd-056141a8b775]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.954 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:39 compute-2 nova_compute[232428]: 2025-11-29 08:10:39.975 232432 DEBUG nova.compute.provider_tree [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.978 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7142b903-ccb8-429c-b03d-74f103e5c7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:39.987 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[341447fa-2cd8-40ab-85f1-78e7cd8342f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:39 compute-2 NetworkManager[48993]: <info>  [1764403839.9883] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.017 232432 DEBUG nova.scheduler.client.report [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.020 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5da8cdcd-c89f-461c-88b3-487fe43ee1b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.023 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a86513-6023-4c56-9369-028dcf8f11ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.056 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.057 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:10:40 compute-2 NetworkManager[48993]: <info>  [1764403840.0590] device (tap988c10fa-90): carrier: link connected
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.065 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7b3298-9996-49f5-a6ba-3fefcde8296e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.084 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3df653a5-88ea-4d5d-b45c-a085de2c7c90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680632, 'reachable_time': 18207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275340, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.109 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3440ac-fb0f-4e85-82f0-72bda5610dda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680632, 'tstamp': 680632}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275341, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.115 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.116 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.129 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae78149-5b90-4b27-bf2b-841f8b6a647f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680632, 'reachable_time': 18207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275342, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.137 232432 INFO nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.165 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.171 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6e05fbea-e2ce-45d3-8c69-3ab2404ec4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.250 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b531a29-d8e9-4960-99ad-54a74418657d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.252 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.253 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.254 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:40 compute-2 NetworkManager[48993]: <info>  [1764403840.2581] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 08:10:40 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.262 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:40 compute-2 ovn_controller[134375]: 2025-11-29T08:10:40Z|00480|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.266 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.267 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b29dcee-e49c-4b23-91dc-83e67c4e87e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.269 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:10:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:40.272 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.309 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.311 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.312 232432 INFO nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Creating image(s)
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.349 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.393 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.432 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.437 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.471 232432 DEBUG nova.policy [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58625e4c2b5d43a1abbab05b98853a65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '250671461f27498d9f6b4476c7b69533', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.510 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.511 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.512 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.513 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.543 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.551 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2531341004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3384945818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:40 compute-2 podman[275448]: 2025-11-29 08:10:40.652855498 +0000 UTC m=+0.064657857 container create 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:40 compute-2 systemd[1]: Started libpod-conmon-7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5.scope.
Nov 29 08:10:40 compute-2 podman[275448]: 2025-11-29 08:10:40.62243669 +0000 UTC m=+0.034239079 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:10:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:10:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad69db77a15474c1bd561af2dd9276c826dd4d03aa460b0e72427b59f1524e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:40 compute-2 podman[275448]: 2025-11-29 08:10:40.781110275 +0000 UTC m=+0.192912704 container init 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:10:40 compute-2 podman[275448]: 2025-11-29 08:10:40.798433455 +0000 UTC m=+0.210235834 container start 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:10:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [NOTICE]   (275519) : New worker (275523) forked
Nov 29 08:10:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [NOTICE]   (275519) : Loading success.
Nov 29 08:10:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.981 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 2ed45397-ad95-4437-a0df-a49849d1d9bf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.982 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403840.9809904, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.982 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.986 232432 DEBUG nova.compute.manager [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.991 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance rebooted successfully.
Nov 29 08:10:40 compute-2 nova_compute[232428]: 2025-11-29 08:10:40.991 232432 DEBUG nova.compute.manager [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.069 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.077 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.080 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.172 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.178 232432 DEBUG oslo_concurrency.lockutils [None req-1c0a20eb-4d31-46fe-8063-0b9ea49b8681 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.185 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.233 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403840.985168, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.234 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Started (Lifecycle Event)
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.238 232432 DEBUG nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.238 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.238 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.239 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.239 232432 DEBUG nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.239 232432 WARNING nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.239 232432 DEBUG nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.239 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.240 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.240 232432 DEBUG oslo_concurrency.lockutils [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.240 232432 DEBUG nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.240 232432 WARNING nova.compute.manager [req-9b05a17e-e305-43c8-a7a9-d0b4a596471e req-1ccabc5e-1bae-44e7-917b-ff334d64b897 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:10:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.259 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.264 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.314 232432 DEBUG nova.objects.instance [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid 35f7492d-e1a0-4369-bf32-ba8fa094036a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.327 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.328 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Ensure instance console log exists: /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.328 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.329 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.329 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:41 compute-2 nova_compute[232428]: 2025-11-29 08:10:41.530 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Successfully created port: bd853c6d-a3b6-4414-8e4e-24d926fd6692 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:10:41 compute-2 ceph-mon[77138]: pgmap v2101: 305 pgs: 305 active+clean; 279 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 1.6 MiB/s wr, 153 op/s
Nov 29 08:10:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3812345530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.462 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Successfully updated port: bd853c6d-a3b6-4414-8e4e-24d926fd6692 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.492 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.492 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.492 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:10:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.967 232432 DEBUG nova.compute.manager [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.968 232432 DEBUG nova.compute.manager [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing instance network info cache due to event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:42 compute-2 nova_compute[232428]: 2025-11-29 08:10:42.969 232432 DEBUG oslo_concurrency.lockutils [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:43 compute-2 nova_compute[232428]: 2025-11-29 08:10:43.057 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:10:43 compute-2 nova_compute[232428]: 2025-11-29 08:10:43.193 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:43 compute-2 ovn_controller[134375]: 2025-11-29T08:10:43Z|00481|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:10:43 compute-2 nova_compute[232428]: 2025-11-29 08:10:43.373 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:43 compute-2 ceph-mon[77138]: pgmap v2102: 305 pgs: 305 active+clean; 227 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 1.6 MiB/s wr, 175 op/s
Nov 29 08:10:43 compute-2 nova_compute[232428]: 2025-11-29 08:10:43.721 232432 INFO nova.compute.manager [None req-d07c8fdb-c378-421f-a5ff-b820c3d25bb5 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Get console output
Nov 29 08:10:43 compute-2 nova_compute[232428]: 2025-11-29 08:10:43.730 232432 INFO oslo.privsep.daemon [None req-d07c8fdb-c378-421f-a5ff-b820c3d25bb5 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpcdm9h61j/privsep.sock']
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.159 232432 DEBUG nova.network.neutron [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.557 232432 INFO oslo.privsep.daemon [None req-d07c8fdb-c378-421f-a5ff-b820c3d25bb5 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Spawned new privsep daemon via rootwrap
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.407 275616 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.412 275616 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.414 275616 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.414 275616 INFO oslo.privsep.daemon [-] privsep daemon running as pid 275616
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.654 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:10:44 compute-2 podman[275618]: 2025-11-29 08:10:44.690019992 +0000 UTC m=+0.092884287 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:10:44 compute-2 nova_compute[232428]: 2025-11-29 08:10:44.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.180 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.181 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Instance network_info: |[{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.182 232432 DEBUG oslo_concurrency.lockutils [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.183 232432 DEBUG nova.network.neutron [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.187 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Start _get_guest_xml network_info=[{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.188 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403830.181786, 1df8ad18-c052-4ab6-9941-d61ae842ea2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.188 232432 INFO nova.compute.manager [-] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] VM Stopped (Lifecycle Event)
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.197 232432 WARNING nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.209 232432 DEBUG nova.virt.libvirt.host [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.211 232432 DEBUG nova.virt.libvirt.host [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.220 232432 DEBUG nova.compute.manager [None req-0a928c15-9a69-4773-b8ab-ca6918d03538 - - - - - -] [instance: 1df8ad18-c052-4ab6-9941-d61ae842ea2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.224 232432 DEBUG nova.virt.libvirt.host [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.226 232432 DEBUG nova.virt.libvirt.host [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.228 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.229 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.230 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.231 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.232 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.232 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.233 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.234 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.234 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.235 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.236 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.237 232432 DEBUG nova.virt.hardware [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.243 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:45.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/308128042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.758 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:45 compute-2 ceph-mon[77138]: pgmap v2103: 305 pgs: 305 active+clean; 219 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 883 KiB/s wr, 153 op/s
Nov 29 08:10:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/308128042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.788 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:45 compute-2 nova_compute[232428]: 2025-11-29 08:10:45.793 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.075 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3238045007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.220 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.222 232432 DEBUG nova.virt.libvirt.vif [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-738978804',display_name='tempest-ServerActionsTestOtherA-server-738978804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-738978804',id=104,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-zmcr4jkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=35f7492d-e1a0-4369-bf32-ba8fa094036a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.222 232432 DEBUG nova.network.os_vif_util [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.223 232432 DEBUG nova.network.os_vif_util [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.225 232432 DEBUG nova.objects.instance [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid 35f7492d-e1a0-4369-bf32-ba8fa094036a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:46.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3238045007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.986 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <uuid>35f7492d-e1a0-4369-bf32-ba8fa094036a</uuid>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <name>instance-00000068</name>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestOtherA-server-738978804</nova:name>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:10:45</nova:creationTime>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <nova:port uuid="bd853c6d-a3b6-4414-8e4e-24d926fd6692">
Nov 29 08:10:46 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <system>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="serial">35f7492d-e1a0-4369-bf32-ba8fa094036a</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="uuid">35f7492d-e1a0-4369-bf32-ba8fa094036a</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </system>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <os>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </os>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <features>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </features>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/35f7492d-e1a0-4369-bf32-ba8fa094036a_disk">
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </source>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config">
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </source>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:10:46 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:d6:f6:4f"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <target dev="tapbd853c6d-a3"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/console.log" append="off"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <video>
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </video>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:10:46 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:10:46 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:10:46 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:10:46 compute-2 nova_compute[232428]: </domain>
Nov 29 08:10:46 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.988 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Preparing to wait for external event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.988 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.989 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.989 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.989 232432 DEBUG nova.virt.libvirt.vif [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-738978804',display_name='tempest-ServerActionsTestOtherA-server-738978804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-738978804',id=104,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-zmcr4jkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=35f7492d-e1a0-4369-bf32-ba8fa094036a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.990 232432 DEBUG nova.network.os_vif_util [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.990 232432 DEBUG nova.network.os_vif_util [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.991 232432 DEBUG os_vif [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.991 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.992 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.992 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.994 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.995 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd853c6d-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.995 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd853c6d-a3, col_values=(('external_ids', {'iface-id': 'bd853c6d-a3b6-4414-8e4e-24d926fd6692', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:f6:4f', 'vm-uuid': '35f7492d-e1a0-4369-bf32-ba8fa094036a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.996 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:46 compute-2 NetworkManager[48993]: <info>  [1764403846.9984] manager: (tapbd853c6d-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 08:10:46 compute-2 nova_compute[232428]: 2025-11-29 08:10:46.999 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.006 232432 INFO os_vif [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3')
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.059 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.059 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.060 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:d6:f6:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.060 232432 INFO nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Using config drive
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.086 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.233 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:10:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:47.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.419 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.420 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.421 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.421 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.505 232432 INFO nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Creating config drive at /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.511 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqo9krwz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.677 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqo9krwz" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.708 232432 DEBUG nova.storage.rbd_utils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.712 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:47 compute-2 ceph-mon[77138]: pgmap v2104: 305 pgs: 305 active+clean; 246 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Nov 29 08:10:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3956711814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.950 232432 DEBUG oslo_concurrency.processutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config 35f7492d-e1a0-4369-bf32-ba8fa094036a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:47 compute-2 nova_compute[232428]: 2025-11-29 08:10:47.952 232432 INFO nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Deleting local config drive /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a/disk.config because it was imported into RBD.
Nov 29 08:10:48 compute-2 kernel: tapbd853c6d-a3: entered promiscuous mode
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.0270] manager: (tapbd853c6d-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 08:10:48 compute-2 ovn_controller[134375]: 2025-11-29T08:10:48Z|00482|binding|INFO|Claiming lport bd853c6d-a3b6-4414-8e4e-24d926fd6692 for this chassis.
Nov 29 08:10:48 compute-2 ovn_controller[134375]: 2025-11-29T08:10:48Z|00483|binding|INFO|bd853c6d-a3b6-4414-8e4e-24d926fd6692: Claiming fa:16:3e:d6:f6:4f 10.100.0.8
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.071 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:f6:4f 10.100.0.8'], port_security=['fa:16:3e:d6:f6:4f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '35f7492d-e1a0-4369-bf32-ba8fa094036a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=bd853c6d-a3b6-4414-8e4e-24d926fd6692) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.075 143801 INFO neutron.agent.ovn.metadata.agent [-] Port bd853c6d-a3b6-4414-8e4e-24d926fd6692 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis
Nov 29 08:10:48 compute-2 ovn_controller[134375]: 2025-11-29T08:10:48Z|00484|binding|INFO|Setting lport bd853c6d-a3b6-4414-8e4e-24d926fd6692 ovn-installed in OVS
Nov 29 08:10:48 compute-2 ovn_controller[134375]: 2025-11-29T08:10:48Z|00485|binding|INFO|Setting lport bd853c6d-a3b6-4414-8e4e-24d926fd6692 up in Southbound
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.078 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.090 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.095 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea86dbd8-39d6-4e42-bf50-f9b654cca275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.096 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.099 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.099 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1521bc-5d8b-40e9-838d-2709cdbd1e6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.100 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b76a69-a8ed-48f7-83eb-5410d1a6492a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.114 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[972edf5d-65b5-414d-9ed0-a83b7124d73d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 systemd-machined[194747]: New machine qemu-45-instance-00000068.
Nov 29 08:10:48 compute-2 systemd[1]: Started Virtual Machine qemu-45-instance-00000068.
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.139 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f883ed7a-8124-4077-82ea-bc6585b09c29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 systemd-udevd[275795]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.1626] device (tapbd853c6d-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.1640] device (tapbd853c6d-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:10:48 compute-2 podman[275773]: 2025-11-29 08:10:48.189206337 +0000 UTC m=+0.110019489 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.194 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ed428dc4-2e4b-473b-b3c6-df5df9b5992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.2033] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.201 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5d6a46-80d5-4a74-b3eb-141382dcb948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.240 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc2bcde-3509-484a-9b86-984431120c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.244 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[94e1c9cd-2251-4234-a52b-6a0c8f081d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.2722] device (tap10a9b8d1-20): carrier: link connected
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.277 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8e2a3c-1ac8-43c0-a39e-19dda3549ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.297 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2daa888c-c5d8-44f4-9674-d8ed33e22a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681454, 'reachable_time': 26634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275830, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.322 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[729db421-6c9a-43cb-ad46-8c8656843248]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681454, 'tstamp': 681454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275831, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.342 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[03a78f2b-0eb8-458f-bceb-e26e1b3dfc25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681454, 'reachable_time': 26634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275832, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.390 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4519cdff-900c-4eff-94a3-9c3af75b2429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.483 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[46117e96-7078-4cd9-a984-731c7915a685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.485 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.486 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.487 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:48 compute-2 NetworkManager[48993]: <info>  [1764403848.4898] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 08:10:48 compute-2 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.494 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-2 ovn_controller[134375]: 2025-11-29T08:10:48Z|00486|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.529 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.528 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.530 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e91f7e99-92e2-4ecc-b70f-22a75f532d5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.531 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:10:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:10:48.534 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.843 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403848.842577, 35f7492d-e1a0-4369-bf32-ba8fa094036a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.844 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] VM Started (Lifecycle Event)
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.872 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.877 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403848.8439288, 35f7492d-e1a0-4369-bf32-ba8fa094036a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.878 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] VM Paused (Lifecycle Event)
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.904 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.910 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:48.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:48 compute-2 nova_compute[232428]: 2025-11-29 08:10:48.944 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:48 compute-2 sudo[275907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:48 compute-2 sudo[275907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:48 compute-2 sudo[275907]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:49 compute-2 podman[275906]: 2025-11-29 08:10:49.012430266 +0000 UTC m=+0.091733720 container create c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:49 compute-2 podman[275906]: 2025-11-29 08:10:48.961753747 +0000 UTC m=+0.041057221 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:10:49 compute-2 systemd[1]: Started libpod-conmon-c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964.scope.
Nov 29 08:10:49 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.092 232432 DEBUG nova.compute.manager [req-ddb9ef41-d55c-40a8-8ecd-f5d8827849a4 req-656a1c55-9b70-4362-9dad-4a2514032dfd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bceed7345574fe0b4c417a1d4581b93ba5b1227588f4fd769b421c11eae02893/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:49 compute-2 sudo[275940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:49 compute-2 sudo[275940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:49 compute-2 sudo[275940]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.112 232432 DEBUG oslo_concurrency.lockutils [req-ddb9ef41-d55c-40a8-8ecd-f5d8827849a4 req-656a1c55-9b70-4362-9dad-4a2514032dfd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.113 232432 DEBUG oslo_concurrency.lockutils [req-ddb9ef41-d55c-40a8-8ecd-f5d8827849a4 req-656a1c55-9b70-4362-9dad-4a2514032dfd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.114 232432 DEBUG oslo_concurrency.lockutils [req-ddb9ef41-d55c-40a8-8ecd-f5d8827849a4 req-656a1c55-9b70-4362-9dad-4a2514032dfd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.115 232432 DEBUG nova.compute.manager [req-ddb9ef41-d55c-40a8-8ecd-f5d8827849a4 req-656a1c55-9b70-4362-9dad-4a2514032dfd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Processing event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.117 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.124 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403849.1241302, 35f7492d-e1a0-4369-bf32-ba8fa094036a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.124 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] VM Resumed (Lifecycle Event)
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.128 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.133 232432 INFO nova.virt.libvirt.driver [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Instance spawned successfully.
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.134 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:10:49 compute-2 podman[275906]: 2025-11-29 08:10:49.203061112 +0000 UTC m=+0.282364586 container init c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.208 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.210 232432 DEBUG nova.network.neutron [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated VIF entry in instance network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.210 232432 DEBUG nova.network.neutron [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:49 compute-2 podman[275906]: 2025-11-29 08:10:49.216920623 +0000 UTC m=+0.296224077 container start c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.227 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.230 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.231 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.233 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.234 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.235 232432 DEBUG nova.virt.libvirt.driver [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.242 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:49 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [NOTICE]   (275974) : New worker (275976) forked
Nov 29 08:10:49 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [NOTICE]   (275974) : Loading success.
Nov 29 08:10:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.348 232432 DEBUG oslo_concurrency.lockutils [req-a6c744ed-903a-4a26-9731-1284a63bcff2 req-a22c4008-a6c8-401c-8a01-fcb0822a0a6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.353 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.380 232432 INFO nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Took 9.07 seconds to spawn the instance on the hypervisor.
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.381 232432 DEBUG nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.446 232432 INFO nova.compute.manager [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Took 10.20 seconds to build instance.
Nov 29 08:10:49 compute-2 nova_compute[232428]: 2025-11-29 08:10:49.464 232432 DEBUG oslo_concurrency.lockutils [None req-ca19093f-11d3-4e51-af74-9db7d07469ca 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1390464953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:50.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.005 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.033 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.034 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.035 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.078 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.278 232432 DEBUG nova.compute.manager [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.279 232432 DEBUG oslo_concurrency.lockutils [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.280 232432 DEBUG oslo_concurrency.lockutils [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.281 232432 DEBUG oslo_concurrency.lockutils [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.281 232432 DEBUG nova.compute.manager [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] No waiting events found dispatching network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.282 232432 WARNING nova.compute.manager [req-c16ad65f-351b-463d-adb5-b30f73569fed req-d2158bcf-c478-4cc4-b33b-1160648846af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received unexpected event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 for instance with vm_state active and task_state None.
Nov 29 08:10:51 compute-2 ceph-mon[77138]: pgmap v2105: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 29 08:10:51 compute-2 ceph-mon[77138]: pgmap v2106: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.565 232432 DEBUG oslo_concurrency.lockutils [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.568 232432 DEBUG oslo_concurrency.lockutils [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.569 232432 DEBUG nova.compute.manager [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.578 232432 DEBUG nova.compute.manager [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.581 232432 DEBUG nova.objects.instance [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:51 compute-2 nova_compute[232428]: 2025-11-29 08:10:51.620 232432 DEBUG nova.virt.libvirt.driver [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3661516400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.709 232432 DEBUG nova.compute.manager [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.710 232432 DEBUG nova.compute.manager [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing instance network info cache due to event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.710 232432 DEBUG oslo_concurrency.lockutils [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.710 232432 DEBUG oslo_concurrency.lockutils [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:52 compute-2 nova_compute[232428]: 2025-11-29 08:10:52.710 232432 DEBUG nova.network.neutron [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:52.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:53 compute-2 nova_compute[232428]: 2025-11-29 08:10:53.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:53 compute-2 nova_compute[232428]: 2025-11-29 08:10:53.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:10:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:10:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:10:53 compute-2 ceph-mon[77138]: pgmap v2107: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 29 08:10:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3378207188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.232 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.233 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3355981760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.772 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.913 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.914 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.918 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:54 compute-2 nova_compute[232428]: 2025-11-29 08:10:54.918 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:54.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.135 232432 DEBUG nova.network.neutron [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated VIF entry in instance network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.137 232432 DEBUG nova.network.neutron [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.151 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.152 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4100MB free_disk=20.876178741455078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.153 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.153 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.194 232432 DEBUG oslo_concurrency.lockutils [req-2d93c1bd-6969-4464-a930-8187ca428d10 req-fa1cbaa5-c49b-4cff-a08f-47a5ffef3598 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.264 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ed45397-ad95-4437-a0df-a49849d1d9bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.266 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 35f7492d-e1a0-4369-bf32-ba8fa094036a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.266 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.267 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:10:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:55 compute-2 nova_compute[232428]: 2025-11-29 08:10:55.335 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:55 compute-2 ovn_controller[134375]: 2025-11-29T08:10:55Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:10:55 compute-2 ceph-mon[77138]: pgmap v2108: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 08:10:55 compute-2 ceph-mon[77138]: osdmap e266: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3355981760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.083 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4105178030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.110 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.117 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.145 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.189 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:10:56 compute-2 nova_compute[232428]: 2025-11-29 08:10:56.190 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-2 ceph-mon[77138]: osdmap e267: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4105178030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:57 compute-2 nova_compute[232428]: 2025-11-29 08:10:57.055 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:57 compute-2 nova_compute[232428]: 2025-11-29 08:10:57.721 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:57 compute-2 ceph-mon[77138]: pgmap v2111: 305 pgs: 305 active+clean; 261 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.1 MiB/s wr, 197 op/s
Nov 29 08:10:57 compute-2 ceph-mon[77138]: osdmap e268: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-2 podman[276035]: 2025-11-29 08:10:58.719575488 +0000 UTC m=+0.107550724 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:10:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:10:59 compute-2 nova_compute[232428]: 2025-11-29 08:10:59.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:10:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:10:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:59.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:10:59 compute-2 ceph-mon[77138]: pgmap v2113: 305 pgs: 305 active+clean; 288 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 4.0 MiB/s wr, 267 op/s
Nov 29 08:11:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:00.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:01 compute-2 nova_compute[232428]: 2025-11-29 08:11:01.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:01.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:01 compute-2 nova_compute[232428]: 2025-11-29 08:11:01.683 232432 DEBUG nova.virt.libvirt.driver [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:11:01 compute-2 ceph-mon[77138]: pgmap v2114: 305 pgs: 305 active+clean; 326 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 7.8 MiB/s wr, 272 op/s
Nov 29 08:11:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-2 nova_compute[232428]: 2025-11-29 08:11:02.096 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-2 ceph-mon[77138]: osdmap e269: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3485771019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:02.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:03.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:03.317 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:03.319 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:03.320 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:03 compute-2 ovn_controller[134375]: 2025-11-29T08:11:03Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:f6:4f 10.100.0.8
Nov 29 08:11:03 compute-2 ovn_controller[134375]: 2025-11-29T08:11:03Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:f6:4f 10.100.0.8
Nov 29 08:11:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-2 ceph-mon[77138]: pgmap v2116: 305 pgs: 305 active+clean; 326 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.9 MiB/s wr, 138 op/s
Nov 29 08:11:03 compute-2 ceph-mon[77138]: osdmap e270: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-2 kernel: tap25f618be-49 (unregistering): left promiscuous mode
Nov 29 08:11:03 compute-2 NetworkManager[48993]: <info>  [1764403863.9849] device (tap25f618be-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:11:04 compute-2 ovn_controller[134375]: 2025-11-29T08:11:03Z|00487|binding|INFO|Releasing lport 25f618be-492d-4ac9-9c9c-6583e0402572 from this chassis (sb_readonly=0)
Nov 29 08:11:04 compute-2 ovn_controller[134375]: 2025-11-29T08:11:03Z|00488|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 down in Southbound
Nov 29 08:11:04 compute-2 ovn_controller[134375]: 2025-11-29T08:11:04Z|00489|binding|INFO|Removing iface tap25f618be-49 ovn-installed in OVS
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.006 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.008 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.009 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.011 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b60868fe-6cf0-4b3b-ae8e-9f8e5de61bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.011 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.028 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 08:11:04 compute-2 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000064.scope: Consumed 15.762s CPU time.
Nov 29 08:11:04 compute-2 systemd-machined[194747]: Machine qemu-44-instance-00000064 terminated.
Nov 29 08:11:04 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [NOTICE]   (275519) : haproxy version is 2.8.14-c23fe91
Nov 29 08:11:04 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [NOTICE]   (275519) : path to executable is /usr/sbin/haproxy
Nov 29 08:11:04 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [WARNING]  (275519) : Exiting Master process...
Nov 29 08:11:04 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [ALERT]    (275519) : Current worker (275523) exited with code 143 (Terminated)
Nov 29 08:11:04 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[275493]: [WARNING]  (275519) : All workers exited. Exiting... (0)
Nov 29 08:11:04 compute-2 systemd[1]: libpod-7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5.scope: Deactivated successfully.
Nov 29 08:11:04 compute-2 conmon[275493]: conmon 7cd79b30d7fbb8866b52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5.scope/container/memory.events
Nov 29 08:11:04 compute-2 podman[276089]: 2025-11-29 08:11:04.20457315 +0000 UTC m=+0.051040933 container died 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5-userdata-shm.mount: Deactivated successfully.
Nov 29 08:11:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-fad69db77a15474c1bd561af2dd9276c826dd4d03aa460b0e72427b59f1524e0-merged.mount: Deactivated successfully.
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 podman[276089]: 2025-11-29 08:11:04.260567595 +0000 UTC m=+0.107035338 container cleanup 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:11:04 compute-2 systemd[1]: libpod-conmon-7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5.scope: Deactivated successfully.
Nov 29 08:11:04 compute-2 podman[276127]: 2025-11-29 08:11:04.333104936 +0000 UTC m=+0.045161549 container remove 7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.342 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc65263-84f5-470c-b147-3677f30ad801]: (4, ('Sat Nov 29 08:11:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5)\n7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5\nSat Nov 29 08:11:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5)\n7cd79b30d7fbb8866b52f26ce59ea813bc633a62095aacd53ad0afe1b88ae2f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.344 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[01cc26bd-cd64-4420-b6a7-f7ada6ef4dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.345 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.397 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.420 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.423 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[acaea3df-f77d-4c51-aeb3-0d906657d30d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.443 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4afbdb58-7501-4dbe-a79c-a190669bd86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.446 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8470e450-dac3-4365-8d80-8311e1a23be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.464 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d5a037-6326-4e41-b7f8-f78be567edcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680624, 'reachable_time': 42579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276144, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.470 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.470 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf7a418-d794-4a04-90ea-ae2cdcb903c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.700 232432 INFO nova.virt.libvirt.driver [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance shutdown successfully after 13 seconds.
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.706 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.706 232432 DEBUG nova.objects.instance [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.721 232432 DEBUG nova.compute.manager [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.779 232432 DEBUG oslo_concurrency.lockutils [None req-7cbb1539-f10a-41d4-9ffc-1d4660f27d74 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:04 compute-2 ceph-mon[77138]: osdmap e271: 3 total, 3 up, 3 in
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.879 232432 DEBUG nova.compute.manager [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.880 232432 DEBUG oslo_concurrency.lockutils [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.880 232432 DEBUG oslo_concurrency.lockutils [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.880 232432 DEBUG oslo_concurrency.lockutils [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.881 232432 DEBUG nova.compute.manager [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.881 232432 WARNING nova.compute.manager [req-8973035b-a16c-416e-90b2-50842f395d08 req-e8ee7dae-1e34-4035-b168-946e7741f03f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state stopped and task_state None.
Nov 29 08:11:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:04.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:04 compute-2 nova_compute[232428]: 2025-11-29 08:11:04.996 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.998 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:04.999 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:11:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:05.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:05 compute-2 ceph-mon[77138]: pgmap v2119: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 367 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 7.6 MiB/s wr, 135 op/s
Nov 29 08:11:06 compute-2 nova_compute[232428]: 2025-11-29 08:11:06.087 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 08:11:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:06.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:07.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.302 232432 DEBUG nova.compute.manager [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.302 232432 DEBUG oslo_concurrency.lockutils [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.303 232432 DEBUG oslo_concurrency.lockutils [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.303 232432 DEBUG oslo_concurrency.lockutils [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.303 232432 DEBUG nova.compute.manager [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.304 232432 WARNING nova.compute.manager [req-363f8c1b-fa6a-4452-be0c-e60122491095 req-96957748-8ae3-4512-8cb5-01d2decc8aff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state stopped and task_state None.
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.448 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.471 232432 DEBUG oslo_concurrency.lockutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.472 232432 DEBUG oslo_concurrency.lockutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.472 232432 DEBUG nova.network.neutron [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:11:07 compute-2 nova_compute[232428]: 2025-11-29 08:11:07.473 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'info_cache' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:07 compute-2 ceph-mon[77138]: pgmap v2120: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 480 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 337 op/s
Nov 29 08:11:07 compute-2 ceph-mon[77138]: osdmap e272: 3 total, 3 up, 3 in
Nov 29 08:11:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/581233916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:08.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:09 compute-2 sudo[276147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:09 compute-2 sudo[276147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:09 compute-2 sudo[276147]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:09 compute-2 sudo[276172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:09 compute-2 sudo[276172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:09 compute-2 sudo[276172]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:10 compute-2 ceph-mon[77138]: pgmap v2122: 305 pgs: 305 active+clean; 483 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 16 MiB/s wr, 348 op/s
Nov 29 08:11:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2287252584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 08:11:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:10.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-2 ceph-mon[77138]: osdmap e273: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-2 ceph-mon[77138]: pgmap v2124: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 29 08:11:11 compute-2 nova_compute[232428]: 2025-11-29 08:11:11.089 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:11 compute-2 nova_compute[232428]: 2025-11-29 08:11:11.738 232432 DEBUG nova.network.neutron [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-2 nova_compute[232428]: 2025-11-29 08:11:11.986 232432 DEBUG oslo_concurrency.lockutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:12 compute-2 ceph-mon[77138]: osdmap e274: 3 total, 3 up, 3 in
Nov 29 08:11:12 compute-2 ceph-mon[77138]: osdmap e275: 3 total, 3 up, 3 in
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.102 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.130 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.130 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.148 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.190 232432 DEBUG nova.virt.libvirt.vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.190 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.191 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.192 232432 DEBUG os_vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.195 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f618be-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.197 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.202 232432 INFO os_vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.210 232432 DEBUG nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start _get_guest_xml network_info=[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.214 232432 WARNING nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.218 232432 DEBUG nova.virt.libvirt.host [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.219 232432 DEBUG nova.virt.libvirt.host [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.221 232432 DEBUG nova.virt.libvirt.host [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.222 232432 DEBUG nova.virt.libvirt.host [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.223 232432 DEBUG nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.224 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.224 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.224 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.225 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.225 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.225 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.226 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.226 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.226 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.227 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.227 232432 DEBUG nova.virt.hardware [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.227 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.246 232432 DEBUG oslo_concurrency.processutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3021305718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.707 232432 DEBUG oslo_concurrency.processutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:12 compute-2 nova_compute[232428]: 2025-11-29 08:11:12.752 232432 DEBUG oslo_concurrency.processutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:12.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:13 compute-2 ceph-mon[77138]: pgmap v2127: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 173 KiB/s wr, 84 op/s
Nov 29 08:11:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3021305718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1659998606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.221 232432 DEBUG oslo_concurrency.processutils [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.224 232432 DEBUG nova.virt.libvirt.vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.224 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.225 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.227 232432 DEBUG nova.objects.instance [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.245 232432 DEBUG nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <uuid>2ed45397-ad95-4437-a0df-a49849d1d9bf</uuid>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <name>instance-00000064</name>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1161621840</nova:name>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:11:12</nova:creationTime>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <nova:port uuid="25f618be-492d-4ac9-9c9c-6583e0402572">
Nov 29 08:11:13 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <system>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="serial">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="uuid">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </system>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <os>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </os>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <features>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </features>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk">
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </source>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config">
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </source>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:11:13 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e8:62:3f"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <target dev="tap25f618be-49"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log" append="off"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <video>
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </video>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:11:13 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:11:13 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:11:13 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:11:13 compute-2 nova_compute[232428]: </domain>
Nov 29 08:11:13 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.246 232432 DEBUG nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.246 232432 DEBUG nova.virt.libvirt.driver [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.247 232432 DEBUG nova.virt.libvirt.vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.247 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.247 232432 DEBUG nova.network.os_vif_util [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.248 232432 DEBUG os_vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.249 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.249 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.252 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.252 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f618be-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.253 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f618be-49, col_values=(('external_ids', {'iface-id': '25f618be-492d-4ac9-9c9c-6583e0402572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:62:3f', 'vm-uuid': '2ed45397-ad95-4437-a0df-a49849d1d9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.2555] manager: (tap25f618be-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.257 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.262 232432 INFO os_vif [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:11:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:13 compute-2 kernel: tap25f618be-49: entered promiscuous mode
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.3401] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.340 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 ovn_controller[134375]: 2025-11-29T08:11:13Z|00490|binding|INFO|Claiming lport 25f618be-492d-4ac9-9c9c-6583e0402572 for this chassis.
Nov 29 08:11:13 compute-2 ovn_controller[134375]: 2025-11-29T08:11:13Z|00491|binding|INFO|25f618be-492d-4ac9-9c9c-6583e0402572: Claiming fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.349 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.350 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.351 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:13 compute-2 ovn_controller[134375]: 2025-11-29T08:11:13Z|00492|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 ovn-installed in OVS
Nov 29 08:11:13 compute-2 ovn_controller[134375]: 2025-11-29T08:11:13Z|00493|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 up in Southbound
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.359 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.366 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd36c16-8307-4f94-87ed-9233c4a0df91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.368 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.370 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.370 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd76308-efb8-4736-87f9-156ec5dbe92e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.371 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1af0a9-f081-435f-9a3b-01cbd969ff8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 systemd-udevd[276276]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:11:13 compute-2 systemd-machined[194747]: New machine qemu-46-instance-00000064.
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.387 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3a70cf88-d04b-4afc-8b1a-5ff394c0a500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.3907] device (tap25f618be-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.3918] device (tap25f618be-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:11:13 compute-2 systemd[1]: Started Virtual Machine qemu-46-instance-00000064.
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.413 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3245f93f-067d-4831-a126-64037af71eea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.449 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1231b482-cdec-4e6e-bef1-6a36eb411feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.455 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36237e43-3ce2-4ac7-93b0-b1763dbf7ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.4566] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 08:11:13 compute-2 systemd-udevd[276280]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.500 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c29485-8e96-486f-81cd-414300d309b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.504 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[80f58e60-b58b-4b6e-8f42-71cd1bbc8fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.5360] device (tap988c10fa-90): carrier: link connected
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.543 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a79b5901-d8a3-4ac5-9bc7-52ef11f5bea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.565 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a17a6a2-7860-4b69-b0bd-5f1286dde4a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683980, 'reachable_time': 38948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276309, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.587 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[945b8c53-c610-4aef-ba3f-f8316f51ab58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683980, 'tstamp': 683980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276310, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.613 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df3d0844-3333-4c1f-bd24-9813dcd953c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683980, 'reachable_time': 38948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276312, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.652 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bc07f0a3-deff-4054-8b14-1d62e459757b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.726 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[088f690c-cb06-4272-81cd-1ec297c2b605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.728 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.728 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.728 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.730 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:11:13 compute-2 NetworkManager[48993]: <info>  [1764403873.7313] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.734 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.736 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 ovn_controller[134375]: 2025-11-29T08:11:13Z|00494|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.737 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.739 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.740 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[00532797-be12-444e-b268-e7f042045864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.741 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:11:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:13.742 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:11:13 compute-2 nova_compute[232428]: 2025-11-29 08:11:13.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:14.001 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.081 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 2ed45397-ad95-4437-a0df-a49849d1d9bf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.081 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403874.0807147, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.082 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.084 232432 DEBUG nova.compute.manager [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.088 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance rebooted successfully.
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.089 232432 DEBUG nova.compute.manager [None req-9e400350-104b-4842-b946-cac298ccc859 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.101 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.105 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1659998606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.148 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.148 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403874.0840487, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.149 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Started (Lifecycle Event)
Nov 29 08:11:14 compute-2 podman[276386]: 2025-11-29 08:11:14.159764141 +0000 UTC m=+0.065798572 container create 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.175 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.179 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:14 compute-2 systemd[1]: Started libpod-conmon-05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4.scope.
Nov 29 08:11:14 compute-2 podman[276386]: 2025-11-29 08:11:14.123622975 +0000 UTC m=+0.029657406 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.226 232432 DEBUG nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.227 232432 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.227 232432 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.228 232432 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.228 232432 DEBUG nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:14 compute-2 nova_compute[232428]: 2025-11-29 08:11:14.228 232432 WARNING nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:11:14 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:11:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa07e2fd33d6c372af6ad6c9d47d0812edc0df6cc48b1d7e9c53e4fdb11b4653/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-2 podman[276386]: 2025-11-29 08:11:14.265178257 +0000 UTC m=+0.171212688 container init 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:11:14 compute-2 podman[276386]: 2025-11-29 08:11:14.272029191 +0000 UTC m=+0.178063612 container start 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:11:14 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [NOTICE]   (276406) : New worker (276408) forked
Nov 29 08:11:14 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [NOTICE]   (276406) : Loading success.
Nov 29 08:11:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:14.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:15 compute-2 ceph-mon[77138]: pgmap v2128: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Nov 29 08:11:15 compute-2 ceph-mon[77138]: osdmap e276: 3 total, 3 up, 3 in
Nov 29 08:11:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:15 compute-2 podman[276417]: 2025-11-29 08:11:15.694896619 +0000 UTC m=+0.085782784 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.092 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.343 232432 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.343 232432 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.344 232432 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.344 232432 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.344 232432 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:16 compute-2 nova_compute[232428]: 2025-11-29 08:11:16.345 232432 WARNING nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:11:16 compute-2 sudo[276438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:16 compute-2 sudo[276438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-2 sudo[276438]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 08:11:16 compute-2 sudo[276463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:16 compute-2 sudo[276463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-2 sudo[276463]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:16 compute-2 sudo[276488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:16 compute-2 sudo[276488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-2 sudo[276488]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:16 compute-2 sudo[276513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:11:16 compute-2 sudo[276513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:16.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:17 compute-2 sudo[276513]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 08:11:17 compute-2 ceph-mon[77138]: pgmap v2130: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 551 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 449 op/s
Nov 29 08:11:17 compute-2 ceph-mon[77138]: osdmap e277: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-2 nova_compute[232428]: 2025-11-29 08:11:18.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:18 compute-2 podman[276570]: 2025-11-29 08:11:18.674096098 +0000 UTC m=+0.067389302 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:11:18 compute-2 ceph-mon[77138]: osdmap e278: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/776797890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:18.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:18 compute-2 sshd-session[276571]: Invalid user sol from 45.148.10.240 port 43726
Nov 29 08:11:19 compute-2 sshd-session[276571]: Connection closed by invalid user sol 45.148.10.240 port 43726 [preauth]
Nov 29 08:11:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.559 232432 INFO nova.compute.manager [None req-6acbb6e7-691a-4bce-9b9d-9c3cf432c0fb 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Pausing
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.562 232432 DEBUG nova.objects.instance [None req-6acbb6e7-691a-4bce-9b9d-9c3cf432c0fb 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.592 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403879.592565, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.593 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Paused (Lifecycle Event)
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.596 232432 DEBUG nova.compute.manager [None req-6acbb6e7-691a-4bce-9b9d-9c3cf432c0fb 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.641 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.645 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:19 compute-2 nova_compute[232428]: 2025-11-29 08:11:19.682 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 29 08:11:20 compute-2 ceph-mon[77138]: pgmap v2133: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 7.8 MiB/s wr, 498 op/s
Nov 29 08:11:20 compute-2 ceph-mon[77138]: osdmap e279: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:11:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:21 compute-2 nova_compute[232428]: 2025-11-29 08:11:21.096 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 08:11:22 compute-2 ceph-mon[77138]: pgmap v2135: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 426 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.9 MiB/s wr, 524 op/s
Nov 29 08:11:22 compute-2 ceph-mon[77138]: osdmap e280: 3 total, 3 up, 3 in
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.178 232432 INFO nova.compute.manager [None req-66539e8b-e7a2-4045-9bff-e92f65f7665c 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Unpausing
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.180 232432 DEBUG nova.objects.instance [None req-66539e8b-e7a2-4045-9bff-e92f65f7665c 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.218 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403882.2181175, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.218 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:11:22 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.226 232432 DEBUG nova.virt.libvirt.guest [None req-66539e8b-e7a2-4045-9bff-e92f65f7665c 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.227 232432 DEBUG nova.compute.manager [None req-66539e8b-e7a2-4045-9bff-e92f65f7665c 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.266 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.271 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.299 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 29 08:11:22 compute-2 ovn_controller[134375]: 2025-11-29T08:11:22Z|00495|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 08:11:22 compute-2 ovn_controller[134375]: 2025-11-29T08:11:22Z|00496|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:11:22 compute-2 nova_compute[232428]: 2025-11-29 08:11:22.414 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:22.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:23 compute-2 ceph-mon[77138]: pgmap v2137: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 364 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 26 KiB/s wr, 176 op/s
Nov 29 08:11:23 compute-2 nova_compute[232428]: 2025-11-29 08:11:23.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:24.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:25 compute-2 ceph-mon[77138]: pgmap v2138: 305 pgs: 305 active+clean; 314 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 24 KiB/s wr, 165 op/s
Nov 29 08:11:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:26 compute-2 nova_compute[232428]: 2025-11-29 08:11:26.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 08:11:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:27 compute-2 sudo[276597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:27 compute-2 sudo[276597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:27 compute-2 sudo[276597]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:27 compute-2 sudo[276622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:11:27 compute-2 sudo[276622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:27 compute-2 sudo[276622]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:27 compute-2 ceph-mon[77138]: pgmap v2139: 305 pgs: 305 active+clean; 281 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 617 KiB/s rd, 18 KiB/s wr, 113 op/s
Nov 29 08:11:27 compute-2 ceph-mon[77138]: osdmap e281: 3 total, 3 up, 3 in
Nov 29 08:11:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:11:28 compute-2 nova_compute[232428]: 2025-11-29 08:11:28.204 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:28 compute-2 nova_compute[232428]: 2025-11-29 08:11:28.297 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1831097261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1831097261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:28.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:29 compute-2 sudo[276648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:29 compute-2 sudo[276648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-2 sudo[276648]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-2 sudo[276677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:29 compute-2 sudo[276677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-2 sudo[276677]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-2 podman[276672]: 2025-11-29 08:11:29.572622683 +0000 UTC m=+0.162925399 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:11:29 compute-2 ceph-mon[77138]: pgmap v2141: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Nov 29 08:11:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3968321550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:30 compute-2 ovn_controller[134375]: 2025-11-29T08:11:30Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:11:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:31 compute-2 nova_compute[232428]: 2025-11-29 08:11:31.101 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:31 compute-2 ceph-mon[77138]: pgmap v2142: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Nov 29 08:11:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3588585032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Cumulative writes: 8698 writes, 44K keys, 8698 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 8698 writes, 8698 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1709 writes, 8103 keys, 1709 commit groups, 1.0 writes per commit group, ingest: 16.46 MB, 0.03 MB/s
                                           Interval WAL: 1709 writes, 1709 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.9      0.65              0.21        24    0.027       0      0       0.0       0.0
                                             L6      1/0    8.37 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   4.0    118.5     98.1      2.14              0.73        23    0.093    135K    13K       0.0       0.0
                                            Sum      1/0    8.37 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   5.0     90.9     93.9      2.79              0.95        47    0.059    135K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.6    121.3    120.0      0.48              0.19        10    0.048     37K   2521       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    118.5     98.1      2.14              0.73        23    0.093    135K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     92.0      0.57              0.21        23    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 2.8 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 30.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1682,29.74 MB,9.78359%) FilterBlock(47,393.80 KB,0.126502%) IndexBlock(47,657.22 KB,0.211123%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:11:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:33 compute-2 nova_compute[232428]: 2025-11-29 08:11:33.160 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:33 compute-2 nova_compute[232428]: 2025-11-29 08:11:33.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:33 compute-2 ceph-mon[77138]: pgmap v2143: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 15 KiB/s wr, 50 op/s
Nov 29 08:11:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:35 compute-2 ceph-mon[77138]: pgmap v2144: 305 pgs: 305 active+clean; 308 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 962 KiB/s wr, 63 op/s
Nov 29 08:11:36 compute-2 nova_compute[232428]: 2025-11-29 08:11:36.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1476214331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4152178789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.283 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:37 compute-2 ceph-mon[77138]: pgmap v2145: 305 pgs: 305 active+clean; 374 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 117 op/s
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.791 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.792 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.793 232432 INFO nova.compute.manager [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Rebooting instance
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.815 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.815 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:37 compute-2 nova_compute[232428]: 2025-11-29 08:11:37.816 232432 DEBUG nova.network.neutron [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:11:38 compute-2 nova_compute[232428]: 2025-11-29 08:11:38.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2072111826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:39 compute-2 ceph-mon[77138]: pgmap v2146: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 3.8 MiB/s wr, 105 op/s
Nov 29 08:11:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2277657958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:40.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.107 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.261 232432 DEBUG nova.network.neutron [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.289 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.291 232432 DEBUG nova.compute.manager [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:41.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:41 compute-2 ceph-mon[77138]: pgmap v2147: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 3.6 MiB/s wr, 113 op/s
Nov 29 08:11:41 compute-2 kernel: tap25f618be-49 (unregistering): left promiscuous mode
Nov 29 08:11:41 compute-2 NetworkManager[48993]: <info>  [1764403901.4768] device (tap25f618be-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:11:41 compute-2 ovn_controller[134375]: 2025-11-29T08:11:41Z|00497|binding|INFO|Releasing lport 25f618be-492d-4ac9-9c9c-6583e0402572 from this chassis (sb_readonly=0)
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 ovn_controller[134375]: 2025-11-29T08:11:41Z|00498|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 down in Southbound
Nov 29 08:11:41 compute-2 ovn_controller[134375]: 2025-11-29T08:11:41Z|00499|binding|INFO|Removing iface tap25f618be-49 ovn-installed in OVS
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:41.506 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:41.508 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:11:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:41.509 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:11:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:41.511 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb9493c-70ab-4e60-910c-e69c421e6847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:41.512 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.516 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 08:11:41 compute-2 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000064.scope: Consumed 15.176s CPU time.
Nov 29 08:11:41 compute-2 systemd-machined[194747]: Machine qemu-46-instance-00000064 terminated.
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.653 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.655 232432 DEBUG nova.objects.instance [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.676 232432 DEBUG nova.virt.libvirt.vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.677 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.677 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.678 232432 DEBUG os_vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.679 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.680 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f618be-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.682 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.687 232432 INFO os_vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.695 232432 DEBUG nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start _get_guest_xml network_info=[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.699 232432 WARNING nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.706 232432 DEBUG nova.virt.libvirt.host [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.708 232432 DEBUG nova.virt.libvirt.host [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.714 232432 DEBUG nova.virt.libvirt.host [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.715 232432 DEBUG nova.virt.libvirt.host [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.716 232432 DEBUG nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.717 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.717 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.718 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.718 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.718 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.718 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.718 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.719 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.719 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.719 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.719 232432 DEBUG nova.virt.hardware [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.720 232432 DEBUG nova.objects.instance [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:41 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [NOTICE]   (276406) : haproxy version is 2.8.14-c23fe91
Nov 29 08:11:41 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [NOTICE]   (276406) : path to executable is /usr/sbin/haproxy
Nov 29 08:11:41 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [WARNING]  (276406) : Exiting Master process...
Nov 29 08:11:41 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [ALERT]    (276406) : Current worker (276408) exited with code 143 (Terminated)
Nov 29 08:11:41 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276402]: [WARNING]  (276406) : All workers exited. Exiting... (0)
Nov 29 08:11:41 compute-2 systemd[1]: libpod-05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4.scope: Deactivated successfully.
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.758 232432 DEBUG oslo_concurrency.processutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:41 compute-2 podman[276756]: 2025-11-29 08:11:41.759698451 +0000 UTC m=+0.116065218 container died 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.801 232432 DEBUG nova.compute.manager [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.801 232432 DEBUG oslo_concurrency.lockutils [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.802 232432 DEBUG oslo_concurrency.lockutils [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.802 232432 DEBUG oslo_concurrency.lockutils [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.802 232432 DEBUG nova.compute.manager [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:41 compute-2 nova_compute[232428]: 2025-11-29 08:11:41.802 232432 WARNING nova.compute.manager [req-20e44c35-4b04-402e-9a8f-4479cbd5903a req-47f49df5-18cb-4384-bc01-ff8d8746144f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state reboot_started_hard.
Nov 29 08:11:42 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4-userdata-shm.mount: Deactivated successfully.
Nov 29 08:11:42 compute-2 systemd[1]: var-lib-containers-storage-overlay-aa07e2fd33d6c372af6ad6c9d47d0812edc0df6cc48b1d7e9c53e4fdb11b4653-merged.mount: Deactivated successfully.
Nov 29 08:11:42 compute-2 nova_compute[232428]: 2025-11-29 08:11:42.246 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/264640078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:42 compute-2 nova_compute[232428]: 2025-11-29 08:11:42.271 232432 DEBUG oslo_concurrency.processutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:42 compute-2 nova_compute[232428]: 2025-11-29 08:11:42.319 232432 DEBUG oslo_concurrency.processutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:42 compute-2 podman[276756]: 2025-11-29 08:11:42.643776936 +0000 UTC m=+1.000143713 container cleanup 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:11:42 compute-2 systemd[1]: libpod-conmon-05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4.scope: Deactivated successfully.
Nov 29 08:11:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/264640078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3283319977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.568 232432 DEBUG oslo_concurrency.processutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.571 232432 DEBUG nova.virt.libvirt.vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.572 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.574 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.576 232432 DEBUG nova.objects.instance [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:43 compute-2 podman[276853]: 2025-11-29 08:11:43.674239735 +0000 UTC m=+0.995982625 container remove 05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.686 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9e786cca-ef4d-4808-9c1a-b50b5b7d2c74]: (4, ('Sat Nov 29 08:11:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4)\n05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4\nSat Nov 29 08:11:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4)\n05bc1a61524a82a75ca55bf7573ab17a25c87869c2aede49ff24cb9b7d2a8da4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.688 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fab3e4f8-29d4-4edd-ac64-16e7c93e9d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.689 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.708 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.711 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1d2de4-9604-4f94-965b-0484734d8484]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.728 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1e14a139-891d-4858-a074-962ffe1929d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.730 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[207afd76-38a7-4a48-8fb9-757654b230c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.741 232432 DEBUG nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <uuid>2ed45397-ad95-4437-a0df-a49849d1d9bf</uuid>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <name>instance-00000064</name>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1161621840</nova:name>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:11:41</nova:creationTime>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <nova:port uuid="25f618be-492d-4ac9-9c9c-6583e0402572">
Nov 29 08:11:43 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <system>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="serial">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="uuid">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </system>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <os>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </os>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <features>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </features>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk">
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </source>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config">
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </source>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:11:43 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e8:62:3f"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <target dev="tap25f618be-49"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log" append="off"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <video>
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </video>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:11:43 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:11:43 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:11:43 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:11:43 compute-2 nova_compute[232428]: </domain>
Nov 29 08:11:43 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.743 232432 DEBUG nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.744 232432 DEBUG nova.virt.libvirt.driver [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.745 232432 DEBUG nova.virt.libvirt.vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.746 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.747 232432 DEBUG nova.network.os_vif_util [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.748 232432 DEBUG os_vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.750 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.751 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.757 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.758 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f618be-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.759 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f618be-49, col_values=(('external_ids', {'iface-id': '25f618be-492d-4ac9-9c9c-6583e0402572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:62:3f', 'vm-uuid': '2ed45397-ad95-4437-a0df-a49849d1d9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.762 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 NetworkManager[48993]: <info>  [1764403903.7635] manager: (tap25f618be-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.763 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bbf5ba-cf41-41af-93bd-07dac18eb783]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683971, 'reachable_time': 35224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276874, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.769 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.769 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[28ce0444-b479-462d-80fd-4a89246dce97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.778 232432 INFO os_vif [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:11:43 compute-2 ceph-mon[77138]: pgmap v2148: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 29 08:11:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3283319977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.882 232432 DEBUG nova.compute.manager [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.882 232432 DEBUG oslo_concurrency.lockutils [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.883 232432 DEBUG oslo_concurrency.lockutils [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.883 232432 DEBUG oslo_concurrency.lockutils [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.883 232432 DEBUG nova.compute.manager [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.883 232432 WARNING nova.compute.manager [req-8c36202c-676f-4ee7-921c-b8406d946477 req-082ececc-c422-4918-b8df-5e03ce8b0d16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state reboot_started_hard.
Nov 29 08:11:43 compute-2 kernel: tap25f618be-49: entered promiscuous mode
Nov 29 08:11:43 compute-2 NetworkManager[48993]: <info>  [1764403903.8893] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 08:11:43 compute-2 systemd-udevd[276732]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:11:43 compute-2 ovn_controller[134375]: 2025-11-29T08:11:43Z|00500|binding|INFO|Claiming lport 25f618be-492d-4ac9-9c9c-6583e0402572 for this chassis.
Nov 29 08:11:43 compute-2 ovn_controller[134375]: 2025-11-29T08:11:43Z|00501|binding|INFO|25f618be-492d-4ac9-9c9c-6583e0402572: Claiming fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.890 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.898 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.899 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.901 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:43 compute-2 NetworkManager[48993]: <info>  [1764403903.9077] device (tap25f618be-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:11:43 compute-2 NetworkManager[48993]: <info>  [1764403903.9091] device (tap25f618be-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:11:43 compute-2 ovn_controller[134375]: 2025-11-29T08:11:43Z|00502|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 ovn-installed in OVS
Nov 29 08:11:43 compute-2 ovn_controller[134375]: 2025-11-29T08:11:43Z|00503|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 up in Southbound
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.913 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.918 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b0eccef4-f604-4c6e-bac9-740ef18bd517]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.930 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.932 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.932 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5c619b56-3792-4adc-a23c-2ee89df41a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.933 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[45b0043a-8ed4-4cfc-b5dc-ddbda284b2b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 nova_compute[232428]: 2025-11-29 08:11:43.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.947 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a641d1ed-f9a9-4cbb-9209-643b3eaea895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 systemd-machined[194747]: New machine qemu-47-instance-00000064.
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.962 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c229502d-87ca-4cf9-8397-5878c93a5896]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:43 compute-2 systemd[1]: Started Virtual Machine qemu-47-instance-00000064.
Nov 29 08:11:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:43.997 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[21985cfa-6e8f-4d12-bc0c-5b1eba3fdbdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 NetworkManager[48993]: <info>  [1764403904.0050] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.003 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fdffe29c-d266-468f-94bd-1503f19bdcb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.040 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[347c1da0-b529-4851-9d64-85dbd4ca21f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.044 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6faf23-2bbe-4b95-b6bc-f07dd51329b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 NetworkManager[48993]: <info>  [1764403904.0672] device (tap988c10fa-90): carrier: link connected
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.071 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9637d9c7-3b52-4d04-9de1-7e8e062880dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.092 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d7707804-cfd7-4af0-a9c2-358cd49b8b57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 38688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276920, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.116 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ec5c05-c728-483f-a711-3d2e1f760200]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687033, 'tstamp': 687033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276921, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.138 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[930f6b4a-6204-4b6d-bfe6-c7738c8d682c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 38688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276922, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.183 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[65a68241-a18a-4ca5-bfda-b72adb268a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.265 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de5e72d2-71df-428e-afdc-5a80ca1b9742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.267 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.267 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.267 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:44 compute-2 nova_compute[232428]: 2025-11-29 08:11:44.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:44 compute-2 NetworkManager[48993]: <info>  [1764403904.2704] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 08:11:44 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:11:44 compute-2 nova_compute[232428]: 2025-11-29 08:11:44.274 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.274 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:44 compute-2 ovn_controller[134375]: 2025-11-29T08:11:44Z|00504|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:11:44 compute-2 nova_compute[232428]: 2025-11-29 08:11:44.297 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.298 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.299 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[25e89cae-06ec-45ad-bd5f-1a680a661007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.300 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:11:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:44.302 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:11:44 compute-2 podman[276954]: 2025-11-29 08:11:44.66835484 +0000 UTC m=+0.025858607 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:11:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:45 compute-2 podman[276954]: 2025-11-29 08:11:45.421955989 +0000 UTC m=+0.779459716 container create e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:11:45 compute-2 systemd[1]: Started libpod-conmon-e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6.scope.
Nov 29 08:11:45 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:11:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c825ced726bc342f67b48a6a7f61a5e8d5e35434a1a009df35c349d1013fb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:45 compute-2 podman[276954]: 2025-11-29 08:11:45.551326542 +0000 UTC m=+0.908830289 container init e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:11:45 compute-2 podman[276954]: 2025-11-29 08:11:45.558093972 +0000 UTC m=+0.915597699 container start e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:11:45 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [NOTICE]   (276987) : New worker (276991) forked
Nov 29 08:11:45 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [NOTICE]   (276987) : Loading success.
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.806 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 2ed45397-ad95-4437-a0df-a49849d1d9bf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.807 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403905.806344, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.807 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.809 232432 DEBUG nova.compute.manager [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.814 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance rebooted successfully.
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.814 232432 DEBUG nova.compute.manager [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:45 compute-2 ceph-mon[77138]: pgmap v2149: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.952 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.956 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.983 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.984 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403905.8091545, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.984 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Started (Lifecycle Event)
Nov 29 08:11:45 compute-2 nova_compute[232428]: 2025-11-29 08:11:45.998 232432 DEBUG oslo_concurrency.lockutils [None req-b589377b-6b47-4eee-ab11-d823b3d907f2 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.005 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.011 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.082 232432 DEBUG nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.083 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.083 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.084 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.084 232432 DEBUG nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.085 232432 WARNING nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.085 232432 DEBUG nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.085 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.086 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.086 232432 DEBUG oslo_concurrency.lockutils [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.086 232432 DEBUG nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.087 232432 WARNING nova.compute.manager [req-6763f180-4a1d-4337-93ed-8c91bc0a8175 req-06df0051-3560-4e51-9ac9-6c731cb4a752 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state None.
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.110 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:46 compute-2 nova_compute[232428]: 2025-11-29 08:11:46.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:46 compute-2 podman[277028]: 2025-11-29 08:11:46.698178768 +0000 UTC m=+0.082947456 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:11:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:47 compute-2 ceph-mon[77138]: pgmap v2150: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.700 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.700 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.701 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.701 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:48 compute-2 nova_compute[232428]: 2025-11-29 08:11:48.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:49 compute-2 sudo[277047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:49 compute-2 sudo[277047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:49 compute-2 sudo[277047]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:49 compute-2 sudo[277074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:49 compute-2 sudo[277074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:49 compute-2 sudo[277074]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:49 compute-2 podman[277070]: 2025-11-29 08:11:49.708365302 +0000 UTC m=+0.100481813 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:11:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:50 compute-2 ceph-mon[77138]: pgmap v2151: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 40 KiB/s wr, 161 op/s
Nov 29 08:11:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3995624476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:50 compute-2 nova_compute[232428]: 2025-11-29 08:11:50.621 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:50 compute-2 nova_compute[232428]: 2025-11-29 08:11:50.653 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:50 compute-2 nova_compute[232428]: 2025-11-29 08:11:50.653 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:11:50 compute-2 nova_compute[232428]: 2025-11-29 08:11:50.654 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:51.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:51 compute-2 nova_compute[232428]: 2025-11-29 08:11:51.112 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:51 compute-2 ceph-mon[77138]: pgmap v2152: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 36 KiB/s wr, 174 op/s
Nov 29 08:11:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/38568835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:53.263 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:53.264 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:11:53 compute-2 nova_compute[232428]: 2025-11-29 08:11:53.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:11:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:11:53 compute-2 ceph-mon[77138]: pgmap v2153: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 36 KiB/s wr, 214 op/s
Nov 29 08:11:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4282980456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:53 compute-2 nova_compute[232428]: 2025-11-29 08:11:53.765 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-2 nova_compute[232428]: 2025-11-29 08:11:54.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:54 compute-2 nova_compute[232428]: 2025-11-29 08:11:54.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:11:54 compute-2 nova_compute[232428]: 2025-11-29 08:11:54.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:54 compute-2 nova_compute[232428]: 2025-11-29 08:11:54.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:11:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:11:54.266 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:11:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:11:55 compute-2 ceph-mon[77138]: pgmap v2154: 305 pgs: 305 active+clean; 397 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.3 MiB/s wr, 226 op/s
Nov 29 08:11:56 compute-2 nova_compute[232428]: 2025-11-29 08:11:56.114 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:56 compute-2 nova_compute[232428]: 2025-11-29 08:11:56.295 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:11:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:57 compute-2 nova_compute[232428]: 2025-11-29 08:11:57.295 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:57 compute-2 nova_compute[232428]: 2025-11-29 08:11:57.295 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:58 compute-2 ceph-mon[77138]: pgmap v2155: 305 pgs: 305 active+clean; 453 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Nov 29 08:11:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1294256293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:58 compute-2 nova_compute[232428]: 2025-11-29 08:11:58.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:11:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:11:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:11:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:00 compute-2 ceph-mon[77138]: pgmap v2156: 305 pgs: 305 active+clean; 459 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.4 MiB/s wr, 199 op/s
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.319 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.321 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.321 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.321 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.322 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:00 compute-2 podman[277144]: 2025-11-29 08:12:00.696236643 +0000 UTC m=+0.094225148 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:12:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2527604229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:00 compute-2 nova_compute[232428]: 2025-11-29 08:12:00.811 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:01 compute-2 nova_compute[232428]: 2025-11-29 08:12:01.116 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:01 compute-2 ceph-mon[77138]: pgmap v2157: 305 pgs: 305 active+clean; 470 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Nov 29 08:12:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2527604229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:01 compute-2 ovn_controller[134375]: 2025-11-29T08:12:01Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:12:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:03.319 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:03.320 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:03.321 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:03 compute-2 ceph-mon[77138]: pgmap v2158: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 217 op/s
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.781 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.781 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.786 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.786 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.953 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.954 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4029MB free_disk=20.76736831665039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.954 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:03 compute-2 nova_compute[232428]: 2025-11-29 08:12:03.955 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:04 compute-2 nova_compute[232428]: 2025-11-29 08:12:04.539 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ed45397-ad95-4437-a0df-a49849d1d9bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:04 compute-2 nova_compute[232428]: 2025-11-29 08:12:04.539 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 35f7492d-e1a0-4369-bf32-ba8fa094036a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:04 compute-2 nova_compute[232428]: 2025-11-29 08:12:04.540 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:12:04 compute-2 nova_compute[232428]: 2025-11-29 08:12:04.540 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:12:04 compute-2 nova_compute[232428]: 2025-11-29 08:12:04.755 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1722738950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:05 compute-2 nova_compute[232428]: 2025-11-29 08:12:05.205 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:05 compute-2 nova_compute[232428]: 2025-11-29 08:12:05.213 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:05 compute-2 ceph-mon[77138]: pgmap v2159: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 859 KiB/s rd, 6.1 MiB/s wr, 186 op/s
Nov 29 08:12:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1722738950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:06 compute-2 nova_compute[232428]: 2025-11-29 08:12:06.120 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:06 compute-2 nova_compute[232428]: 2025-11-29 08:12:06.292 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:06 compute-2 nova_compute[232428]: 2025-11-29 08:12:06.348 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:12:06 compute-2 nova_compute[232428]: 2025-11-29 08:12:06.349 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:07 compute-2 ovn_controller[134375]: 2025-11-29T08:12:07Z|00505|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:12:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:08 compute-2 ceph-mon[77138]: pgmap v2160: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 984 KiB/s rd, 4.8 MiB/s wr, 162 op/s
Nov 29 08:12:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4285221341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:08 compute-2 nova_compute[232428]: 2025-11-29 08:12:08.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 08:12:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:09 compute-2 nova_compute[232428]: 2025-11-29 08:12:09.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:09 compute-2 nova_compute[232428]: 2025-11-29 08:12:09.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:12:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:09 compute-2 sudo[277200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:09 compute-2 sudo[277200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:09 compute-2 sudo[277200]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:09 compute-2 ceph-mon[77138]: pgmap v2161: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 08:12:09 compute-2 sudo[277225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:09 compute-2 sudo[277225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:09 compute-2 sudo[277225]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:11 compute-2 nova_compute[232428]: 2025-11-29 08:12:11.124 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:11.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:11 compute-2 ceph-mon[77138]: pgmap v2162: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 29 08:12:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:13 compute-2 ceph-mon[77138]: pgmap v2163: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 845 KiB/s wr, 73 op/s
Nov 29 08:12:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1886074226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:13 compute-2 nova_compute[232428]: 2025-11-29 08:12:13.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:15.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:15 compute-2 ceph-mon[77138]: pgmap v2164: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 434 KiB/s rd, 116 KiB/s wr, 43 op/s
Nov 29 08:12:16 compute-2 nova_compute[232428]: 2025-11-29 08:12:16.126 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3551191270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3551191270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:17.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:17 compute-2 podman[277253]: 2025-11-29 08:12:17.662815565 +0000 UTC m=+0.061191689 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 08:12:17 compute-2 ceph-mon[77138]: pgmap v2165: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 46 KiB/s wr, 21 op/s
Nov 29 08:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 39.82 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4077 syncs, 2.51 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:12:18 compute-2 nova_compute[232428]: 2025-11-29 08:12:18.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:19 compute-2 ceph-mon[77138]: pgmap v2166: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 43 KiB/s wr, 5 op/s
Nov 29 08:12:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:20 compute-2 podman[277275]: 2025-11-29 08:12:20.728616072 +0000 UTC m=+0.114238161 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:12:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:21 compute-2 nova_compute[232428]: 2025-11-29 08:12:21.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:21.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:21 compute-2 ceph-mon[77138]: pgmap v2167: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Nov 29 08:12:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3492893778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:23.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:23 compute-2 nova_compute[232428]: 2025-11-29 08:12:23.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:23 compute-2 ceph-mon[77138]: pgmap v2168: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 30 KiB/s wr, 4 op/s
Nov 29 08:12:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3484016015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 08:12:26 compute-2 ceph-mon[77138]: pgmap v2169: 305 pgs: 305 active+clean; 478 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 524 KiB/s wr, 16 op/s
Nov 29 08:12:26 compute-2 nova_compute[232428]: 2025-11-29 08:12:26.130 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:27 compute-2 ceph-mon[77138]: osdmap e282: 3 total, 3 up, 3 in
Nov 29 08:12:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1652710344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1662404296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:27.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:27 compute-2 sudo[277299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:27 compute-2 sudo[277299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:27 compute-2 sudo[277299]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:27 compute-2 sudo[277324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:27 compute-2 sudo[277324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:27 compute-2 sudo[277324]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:27 compute-2 sudo[277349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:27 compute-2 sudo[277349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:27 compute-2 sudo[277349]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:27 compute-2 sudo[277374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:12:27 compute-2 sudo[277374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561427187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561427187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:28 compute-2 ceph-mon[77138]: pgmap v2171: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 29 08:12:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3458217817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2561427187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2561427187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:28 compute-2 sudo[277374]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-2 sudo[277429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:28 compute-2 sudo[277429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-2 sudo[277429]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.653 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.654 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.676 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:12:28 compute-2 sudo[277454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:28 compute-2 sudo[277454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-2 sudo[277454]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-2 sudo[277479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:28 compute-2 sudo[277479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-2 sudo[277479]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.802 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.802 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.810 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.811 232432 INFO nova.compute.claims [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:12:28 compute-2 sudo[277504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 08:12:28 compute-2 sudo[277504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.879 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:28 compute-2 nova_compute[232428]: 2025-11-29 08:12:28.990 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:29 compute-2 sudo[277504]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:29 compute-2 ceph-mon[77138]: pgmap v2172: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Nov 29 08:12:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:12:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:12:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:12:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:12:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/399367608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.440 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.448 232432 DEBUG nova.compute.provider_tree [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.473 232432 DEBUG nova.scheduler.client.report [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.505 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.507 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.563 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.564 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.636 232432 INFO nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.656 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.791 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.793 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.793 232432 INFO nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating image(s)
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.830 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.864 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.894 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.899 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:29 compute-2 sudo[277617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.937 232432 DEBUG nova.policy [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '661b6600a32b40d8a48db16cb71c7e75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd72b5448be0e463f80dca118feb42d3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:12:29 compute-2 sudo[277617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:29 compute-2 sudo[277617]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.994 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.995 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.996 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:29 compute-2 nova_compute[232428]: 2025-11-29 08:12:29.996 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:30 compute-2 sudo[277651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:30 compute-2 sudo[277651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:30 compute-2 sudo[277651]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.029 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.035 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2ac9da94-cac6-4662-9bcf-9185ca957035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:12:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:12:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/399367608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.355 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2ac9da94-cac6-4662-9bcf-9185ca957035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.461 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] resizing rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.580 232432 DEBUG nova.objects.instance [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.616 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.616 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Ensure instance console log exists: /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.617 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.617 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:30 compute-2 nova_compute[232428]: 2025-11-29 08:12:30.618 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:31 compute-2 nova_compute[232428]: 2025-11-29 08:12:31.133 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:31 compute-2 ceph-mon[77138]: pgmap v2173: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 08:12:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:31.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:31 compute-2 nova_compute[232428]: 2025-11-29 08:12:31.433 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Successfully created port: d99aab02-2744-459b-978f-87807bdafb90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:12:31 compute-2 podman[277787]: 2025-11-29 08:12:31.781353234 +0000 UTC m=+0.166471610 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:12:31 compute-2 nova_compute[232428]: 2025-11-29 08:12:31.961 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:31.962 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:31.963 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:12:32 compute-2 nova_compute[232428]: 2025-11-29 08:12:32.905 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Successfully updated port: d99aab02-2744-459b-978f-87807bdafb90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:12:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.149 232432 DEBUG nova.compute.manager [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-changed-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.149 232432 DEBUG nova.compute.manager [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Refreshing instance network info cache due to event network-changed-d99aab02-2744-459b-978f-87807bdafb90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.150 232432 DEBUG oslo_concurrency.lockutils [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.150 232432 DEBUG oslo_concurrency.lockutils [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.150 232432 DEBUG nova.network.neutron [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Refreshing network info cache for port d99aab02-2744-459b-978f-87807bdafb90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.231 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 08:12:33 compute-2 ceph-mon[77138]: pgmap v2174: 305 pgs: 305 active+clean; 479 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 740 KiB/s rd, 3.3 MiB/s wr, 137 op/s
Nov 29 08:12:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2216913731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:33 compute-2 nova_compute[232428]: 2025-11-29 08:12:33.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:33.965 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:34 compute-2 nova_compute[232428]: 2025-11-29 08:12:34.138 232432 DEBUG nova.network.neutron [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:12:34 compute-2 ceph-mon[77138]: osdmap e283: 3 total, 3 up, 3 in
Nov 29 08:12:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1722645418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:34 compute-2 nova_compute[232428]: 2025-11-29 08:12:34.851 232432 DEBUG nova.network.neutron [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:34 compute-2 nova_compute[232428]: 2025-11-29 08:12:34.876 232432 DEBUG oslo_concurrency.lockutils [req-6ba12dc2-2592-4718-9ecf-6dbb90c806b5 req-74304be8-4ae9-4cef-ba93-a6e568e6ddeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:34 compute-2 nova_compute[232428]: 2025-11-29 08:12:34.877 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:34 compute-2 nova_compute[232428]: 2025-11-29 08:12:34.877 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:12:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:35 compute-2 nova_compute[232428]: 2025-11-29 08:12:35.102 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:12:35 compute-2 ceph-mon[77138]: pgmap v2176: 305 pgs: 305 active+clean; 477 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Nov 29 08:12:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:12:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:12:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:35 compute-2 sudo[277815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:35 compute-2 sudo[277815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:35 compute-2 sudo[277815]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:35 compute-2 sudo[277841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:12:35 compute-2 sudo[277841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:35 compute-2 sudo[277841]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:36 compute-2 nova_compute[232428]: 2025-11-29 08:12:36.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.794227) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956794298, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2256, "num_deletes": 262, "total_data_size": 5078674, "memory_usage": 5149904, "flush_reason": "Manual Compaction"}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956824127, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 3293722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42980, "largest_seqno": 45230, "table_properties": {"data_size": 3284429, "index_size": 5787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19770, "raw_average_key_size": 20, "raw_value_size": 3265629, "raw_average_value_size": 3415, "num_data_blocks": 250, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403795, "oldest_key_time": 1764403795, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 29977 microseconds, and 12427 cpu microseconds.
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.824190) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 3293722 bytes OK
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.824230) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.826942) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.826965) EVENT_LOG_v1 {"time_micros": 1764403956826957, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.826990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 5068608, prev total WAL file size 5068608, number of live WAL files 2.
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.828943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323633' seq:72057594037927935, type:22 .. '6C6F676D0031353134' seq:0, type:0; will stop at (end)
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(3216KB)], [81(8575KB)]
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956829045, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12074734, "oldest_snapshot_seqno": -1}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7486 keys, 11910138 bytes, temperature: kUnknown
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956911246, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11910138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11859443, "index_size": 30871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 193617, "raw_average_key_size": 25, "raw_value_size": 11725035, "raw_average_value_size": 1566, "num_data_blocks": 1223, "num_entries": 7486, "num_filter_entries": 7486, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.912117) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11910138 bytes
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.914355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.0 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.3) write-amplify(3.6) OK, records in: 8026, records dropped: 540 output_compression: NoCompression
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.914431) EVENT_LOG_v1 {"time_micros": 1764403956914416, "job": 50, "event": "compaction_finished", "compaction_time_micros": 82716, "compaction_time_cpu_micros": 33212, "output_level": 6, "num_output_files": 1, "total_output_size": 11910138, "num_input_records": 8026, "num_output_records": 7486, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956915842, "job": 50, "event": "table_file_deletion", "file_number": 83}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956918125, "job": 50, "event": "table_file_deletion", "file_number": 81}
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.828819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.918296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.918303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.918322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.918324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:36.918326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.133 232432 DEBUG nova.network.neutron [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Updating instance_info_cache with network_info: [{"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.177 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ac9da94-cac6-4662-9bcf-9185ca957035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.178 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance network_info: |[{"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.180 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start _get_guest_xml network_info=[{"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.184 232432 WARNING nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.189 232432 DEBUG nova.virt.libvirt.host [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.189 232432 DEBUG nova.virt.libvirt.host [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.191 232432 DEBUG nova.virt.libvirt.host [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.192 232432 DEBUG nova.virt.libvirt.host [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.193 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.193 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.193 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.194 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.194 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.194 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.194 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.195 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.195 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.195 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.195 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.195 232432 DEBUG nova.virt.hardware [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.198 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3615503511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.650 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.679 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:37 compute-2 nova_compute[232428]: 2025-11-29 08:12:37.684 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:37 compute-2 ceph-mon[77138]: pgmap v2177: 305 pgs: 305 active+clean; 453 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 29 08:12:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3615503511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2452052417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.165 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.167 232432 DEBUG nova.virt.libvirt.vif [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-tempest.common.compute-instance-216254991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:29Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.168 232432 DEBUG nova.network.os_vif_util [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.169 232432 DEBUG nova.network.os_vif_util [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.170 232432 DEBUG nova.objects.instance [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.254 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <uuid>2ac9da94-cac6-4662-9bcf-9185ca957035</uuid>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <name>instance-0000006d</name>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:name>tempest-tempest.common.compute-instance-216254991</nova:name>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:12:37</nova:creationTime>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <nova:port uuid="d99aab02-2744-459b-978f-87807bdafb90">
Nov 29 08:12:38 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <system>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="serial">2ac9da94-cac6-4662-9bcf-9185ca957035</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="uuid">2ac9da94-cac6-4662-9bcf-9185ca957035</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </system>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <os>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </os>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <features>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </features>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ac9da94-cac6-4662-9bcf-9185ca957035_disk">
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config">
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:12:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:fa:b9:0c"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <target dev="tapd99aab02-27"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/console.log" append="off"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <video>
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </video>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:12:38 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:12:38 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:12:38 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:12:38 compute-2 nova_compute[232428]: </domain>
Nov 29 08:12:38 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.256 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Preparing to wait for external event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.256 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.257 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.257 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.258 232432 DEBUG nova.virt.libvirt.vif [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-tempest.common.compute-instance-216254991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-Serve
rActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:29Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.258 232432 DEBUG nova.network.os_vif_util [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.259 232432 DEBUG nova.network.os_vif_util [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.259 232432 DEBUG os_vif [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.260 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.261 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.261 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.267 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.268 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd99aab02-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.269 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd99aab02-27, col_values=(('external_ids', {'iface-id': 'd99aab02-2744-459b-978f-87807bdafb90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:b9:0c', 'vm-uuid': '2ac9da94-cac6-4662-9bcf-9185ca957035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:38 compute-2 NetworkManager[48993]: <info>  [1764403958.3031] manager: (tapd99aab02-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.307 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.314 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.315 232432 INFO os_vif [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27')
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.436 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.437 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.438 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:fa:b9:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.438 232432 INFO nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Using config drive
Nov 29 08:12:38 compute-2 nova_compute[232428]: 2025-11-29 08:12:38.476 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2452052417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.349 232432 INFO nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating config drive at /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.354 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36x1l2cn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:39.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.491 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36x1l2cn" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.526 232432 DEBUG nova.storage.rbd_utils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.531 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.703 232432 DEBUG oslo_concurrency.processutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.705 232432 INFO nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deleting local config drive /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config because it was imported into RBD.
Nov 29 08:12:39 compute-2 kernel: tapd99aab02-27: entered promiscuous mode
Nov 29 08:12:39 compute-2 NetworkManager[48993]: <info>  [1764403959.7697] manager: (tapd99aab02-27): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 08:12:39 compute-2 ovn_controller[134375]: 2025-11-29T08:12:39Z|00506|binding|INFO|Claiming lport d99aab02-2744-459b-978f-87807bdafb90 for this chassis.
Nov 29 08:12:39 compute-2 ovn_controller[134375]: 2025-11-29T08:12:39Z|00507|binding|INFO|d99aab02-2744-459b-978f-87807bdafb90: Claiming fa:16:3e:fa:b9:0c 10.100.0.13
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.776 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b9:0c 10.100.0.13'], port_security=['fa:16:3e:fa:b9:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ac9da94-cac6-4662-9bcf-9185ca957035', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '12a7db5e-3d29-492a-8e3f-e9e843cf9feb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d99aab02-2744-459b-978f-87807bdafb90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.778 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d99aab02-2744-459b-978f-87807bdafb90 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.780 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:12:39 compute-2 ovn_controller[134375]: 2025-11-29T08:12:39Z|00508|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 ovn-installed in OVS
Nov 29 08:12:39 compute-2 ovn_controller[134375]: 2025-11-29T08:12:39Z|00509|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 up in Southbound
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.800 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10f0ac9b-b3d3-4829-b7b2-70d0d909311b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 systemd-machined[194747]: New machine qemu-48-instance-0000006d.
Nov 29 08:12:39 compute-2 systemd[1]: Started Virtual Machine qemu-48-instance-0000006d.
Nov 29 08:12:39 compute-2 systemd-udevd[278006]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.839 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c45c129-645a-42c8-ad96-52fcd538ac05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.844 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a406a2d9-090d-4876-9137-208e1245c6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 ceph-mon[77138]: pgmap v2178: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Nov 29 08:12:39 compute-2 NetworkManager[48993]: <info>  [1764403959.8576] device (tapd99aab02-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:12:39 compute-2 NetworkManager[48993]: <info>  [1764403959.8592] device (tapd99aab02-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.879 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[770a3586-2423-4dc4-9d01-360e2746a061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.903 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74157dee-0c52-48a5-9acb-3c9b36a71b71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 43230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278016, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.920 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c75dfb09-eaf1-435a-9916-ccd8ca37e68e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687048, 'tstamp': 687048}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278018, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687052, 'tstamp': 687052}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278018, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.922 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.923 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-2 nova_compute[232428]: 2025-11-29 08:12:39.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.925 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.926 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.926 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:12:39.926 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.726 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403960.7257395, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.727 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Started (Lifecycle Event)
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.777 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.783 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403960.7277064, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.783 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Paused (Lifecycle Event)
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.808 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.813 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:40 compute-2 nova_compute[232428]: 2025-11-29 08:12:40.845 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:41 compute-2 nova_compute[232428]: 2025-11-29 08:12:41.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 08:12:41 compute-2 ceph-mon[77138]: pgmap v2179: 305 pgs: 305 active+clean; 454 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Nov 29 08:12:41 compute-2 ceph-mon[77138]: osdmap e284: 3 total, 3 up, 3 in
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.809 232432 DEBUG nova.compute.manager [req-164965e6-88dc-4b4c-87e5-a10470b6d7f2 req-028571bf-b341-4103-b7f2-4f30573e2735 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.810 232432 DEBUG oslo_concurrency.lockutils [req-164965e6-88dc-4b4c-87e5-a10470b6d7f2 req-028571bf-b341-4103-b7f2-4f30573e2735 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.810 232432 DEBUG oslo_concurrency.lockutils [req-164965e6-88dc-4b4c-87e5-a10470b6d7f2 req-028571bf-b341-4103-b7f2-4f30573e2735 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.811 232432 DEBUG oslo_concurrency.lockutils [req-164965e6-88dc-4b4c-87e5-a10470b6d7f2 req-028571bf-b341-4103-b7f2-4f30573e2735 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.811 232432 DEBUG nova.compute.manager [req-164965e6-88dc-4b4c-87e5-a10470b6d7f2 req-028571bf-b341-4103-b7f2-4f30573e2735 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Processing event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.812 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.818 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403962.8178248, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.818 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Resumed (Lifecycle Event)
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.821 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.826 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance spawned successfully.
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.827 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.864 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.868 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.917 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.918 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.918 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.919 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.919 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:42 compute-2 nova_compute[232428]: 2025-11-29 08:12:42.920 232432 DEBUG nova.virt.libvirt.driver [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.007 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.146 232432 INFO nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Took 13.35 seconds to spawn the instance on the hypervisor.
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.146 232432 DEBUG nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.257 232432 INFO nova.compute.manager [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Took 14.48 seconds to build instance.
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.286 232432 DEBUG oslo_concurrency.lockutils [None req-74031cbf-ba2e-46da-a832-d316ff39f82f 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:43 compute-2 nova_compute[232428]: 2025-11-29 08:12:43.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:43.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:44 compute-2 ceph-mon[77138]: pgmap v2181: 305 pgs: 305 active+clean; 454 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 660 KiB/s wr, 106 op/s
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.023 232432 DEBUG nova.compute.manager [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.024 232432 DEBUG oslo_concurrency.lockutils [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.025 232432 DEBUG oslo_concurrency.lockutils [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.025 232432 DEBUG oslo_concurrency.lockutils [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.026 232432 DEBUG nova.compute.manager [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.026 232432 WARNING nova.compute.manager [req-5be5d2a2-8d88-4edb-8ae6-0def5ddda96e req-f2f23a8e-aea0-44c1-abb2-a420a63f7a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received unexpected event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with vm_state active and task_state None.
Nov 29 08:12:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:45.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.894 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.895 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:45 compute-2 nova_compute[232428]: 2025-11-29 08:12:45.957 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:46 compute-2 ceph-mon[77138]: pgmap v2182: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Nov 29 08:12:46 compute-2 nova_compute[232428]: 2025-11-29 08:12:46.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:46 compute-2 nova_compute[232428]: 2025-11-29 08:12:46.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:46 compute-2 nova_compute[232428]: 2025-11-29 08:12:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:47 compute-2 ceph-mon[77138]: pgmap v2183: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Nov 29 08:12:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:47.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/362817229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:48 compute-2 nova_compute[232428]: 2025-11-29 08:12:48.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:48 compute-2 podman[278065]: 2025-11-29 08:12:48.724973751 +0000 UTC m=+0.100942228 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:12:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:49 compute-2 ceph-mon[77138]: pgmap v2184: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Nov 29 08:12:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4217449745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:49 compute-2 nova_compute[232428]: 2025-11-29 08:12:49.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:49 compute-2 nova_compute[232428]: 2025-11-29 08:12:49.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:12:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:49.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:49 compute-2 nova_compute[232428]: 2025-11-29 08:12:49.670 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:49 compute-2 nova_compute[232428]: 2025-11-29 08:12:49.671 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:49 compute-2 nova_compute[232428]: 2025-11-29 08:12:49.671 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:12:50 compute-2 sudo[278086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:50 compute-2 sudo[278086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:50 compute-2 sudo[278086]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:50 compute-2 sudo[278111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:50 compute-2 sudo[278111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:50 compute-2 sudo[278111]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:50 compute-2 nova_compute[232428]: 2025-11-29 08:12:50.178 232432 INFO nova.compute.manager [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Rebuilding instance
Nov 29 08:12:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1034312422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:50 compute-2 nova_compute[232428]: 2025-11-29 08:12:50.877 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:50 compute-2 nova_compute[232428]: 2025-11-29 08:12:50.927 232432 DEBUG nova.compute.manager [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.026 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.057 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.098 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.112 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.124 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.127 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:12:51 compute-2 nova_compute[232428]: 2025-11-29 08:12:51.147 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:51 compute-2 podman[278136]: 2025-11-29 08:12:51.157399136 +0000 UTC m=+0.066800603 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:12:51 compute-2 ceph-mon[77138]: pgmap v2185: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 171 op/s
Nov 29 08:12:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:51.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 08:12:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2774113000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:52 compute-2 nova_compute[232428]: 2025-11-29 08:12:52.739 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:52 compute-2 nova_compute[232428]: 2025-11-29 08:12:52.766 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:52 compute-2 nova_compute[232428]: 2025-11-29 08:12:52.767 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:12:52 compute-2 nova_compute[232428]: 2025-11-29 08:12:52.769 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:53.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:53 compute-2 nova_compute[232428]: 2025-11-29 08:12:53.353 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:53 compute-2 ceph-mon[77138]: pgmap v2186: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Nov 29 08:12:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1327195148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:55 compute-2 nova_compute[232428]: 2025-11-29 08:12:55.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:55 compute-2 nova_compute[232428]: 2025-11-29 08:12:55.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:12:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:55 compute-2 ceph-mon[77138]: pgmap v2187: 305 pgs: 305 active+clean; 462 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 29 08:12:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3540463941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:56 compute-2 nova_compute[232428]: 2025-11-29 08:12:56.150 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.625378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976625447, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 483, "num_deletes": 252, "total_data_size": 556867, "memory_usage": 565896, "flush_reason": "Manual Compaction"}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Nov 29 08:12:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3573923313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1171770234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976629524, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 366532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45235, "largest_seqno": 45713, "table_properties": {"data_size": 363959, "index_size": 609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6564, "raw_average_key_size": 19, "raw_value_size": 358675, "raw_average_value_size": 1048, "num_data_blocks": 27, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403957, "oldest_key_time": 1764403957, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 4200 microseconds, and 1864 cpu microseconds.
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.629580) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 366532 bytes OK
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.629602) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.631559) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.631580) EVENT_LOG_v1 {"time_micros": 1764403976631573, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.631601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 553942, prev total WAL file size 553942, number of live WAL files 2.
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.632145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(357KB)], [84(11MB)]
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976632233, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 12276670, "oldest_snapshot_seqno": -1}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7311 keys, 10415198 bytes, temperature: kUnknown
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976728518, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 10415198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10366992, "index_size": 28830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 190678, "raw_average_key_size": 26, "raw_value_size": 10236951, "raw_average_value_size": 1400, "num_data_blocks": 1130, "num_entries": 7311, "num_filter_entries": 7311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.728913) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10415198 bytes
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.733626) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 108.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(61.9) write-amplify(28.4) OK, records in: 7828, records dropped: 517 output_compression: NoCompression
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.733645) EVENT_LOG_v1 {"time_micros": 1764403976733636, "job": 52, "event": "compaction_finished", "compaction_time_micros": 96472, "compaction_time_cpu_micros": 41531, "output_level": 6, "num_output_files": 1, "total_output_size": 10415198, "num_input_records": 7828, "num_output_records": 7311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976733817, "job": 52, "event": "table_file_deletion", "file_number": 86}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976736186, "job": 52, "event": "table_file_deletion", "file_number": 84}
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.632018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.736221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.736227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.736229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.736230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:12:56.736232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:12:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.248 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.249 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.249 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.250 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:12:57 compute-2 nova_compute[232428]: 2025-11-29 08:12:57.250 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:12:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:57.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:12:57 compute-2 ovn_controller[134375]: 2025-11-29T08:12:57Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:b9:0c 10.100.0.13
Nov 29 08:12:57 compute-2 ovn_controller[134375]: 2025-11-29T08:12:57Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:b9:0c 10.100.0.13
Nov 29 08:12:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2047091255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:58 compute-2 ceph-mon[77138]: pgmap v2188: 305 pgs: 305 active+clean; 407 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.2 MiB/s wr, 351 op/s
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.334 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.410 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.512 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.513 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.519 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.519 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.524 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.525 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.730 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.731 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3839MB free_disk=20.830413818359375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.731 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.732 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.998 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ed45397-ad95-4437-a0df-a49849d1d9bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:58 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.999 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 35f7492d-e1a0-4369-bf32-ba8fa094036a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:58.999 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ac9da94-cac6-4662-9bcf-9185ca957035 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.000 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.000 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:12:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.003000095s ======
Nov 29 08:12:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000095s
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.288 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:59 compute-2 ceph-mon[77138]: pgmap v2189: 305 pgs: 305 active+clean; 417 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 769 KiB/s wr, 294 op/s
Nov 29 08:12:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2047091255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:12:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:12:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:59.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:12:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1249959394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.775 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.781 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.814 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.862 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:12:59 compute-2 nova_compute[232428]: 2025-11-29 08:12:59.863 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1249959394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:01 compute-2 nova_compute[232428]: 2025-11-29 08:13:01.152 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:01 compute-2 nova_compute[232428]: 2025-11-29 08:13:01.180 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:13:01 compute-2 ceph-mon[77138]: pgmap v2190: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 365 op/s
Nov 29 08:13:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:01.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1652912322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1958067740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:02 compute-2 podman[278209]: 2025-11-29 08:13:02.696826856 +0000 UTC m=+0.101075410 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:13:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:03.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.319 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.321 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.322 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:03 compute-2 ceph-mon[77138]: pgmap v2191: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 353 op/s
Nov 29 08:13:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3453105964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.438 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:03.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:03 compute-2 kernel: tapd99aab02-27 (unregistering): left promiscuous mode
Nov 29 08:13:03 compute-2 NetworkManager[48993]: <info>  [1764403983.4893] device (tapd99aab02-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 ovn_controller[134375]: 2025-11-29T08:13:03Z|00510|binding|INFO|Releasing lport d99aab02-2744-459b-978f-87807bdafb90 from this chassis (sb_readonly=0)
Nov 29 08:13:03 compute-2 ovn_controller[134375]: 2025-11-29T08:13:03Z|00511|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 down in Southbound
Nov 29 08:13:03 compute-2 ovn_controller[134375]: 2025-11-29T08:13:03Z|00512|binding|INFO|Removing iface tapd99aab02-27 ovn-installed in OVS
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.516 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b9:0c 10.100.0.13'], port_security=['fa:16:3e:fa:b9:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ac9da94-cac6-4662-9bcf-9185ca957035', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '12a7db5e-3d29-492a-8e3f-e9e843cf9feb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d99aab02-2744-459b-978f-87807bdafb90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.518 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d99aab02-2744-459b-978f-87807bdafb90 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.520 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.519 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.544 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[41ecc63f-aa68-4c79-808b-dc26f2258eae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 29 08:13:03 compute-2 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006d.scope: Consumed 15.044s CPU time.
Nov 29 08:13:03 compute-2 systemd-machined[194747]: Machine qemu-48-instance-0000006d terminated.
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.583 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f6182362-d103-4135-bb66-064df8c1bdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.588 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9a952977-7717-4f48-837d-017abd8df937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.618 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[55f28376-f1dd-4a63-b492-c73d6d743187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.638 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[274157d3-332d-4b0d-b288-37c23fd22447]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 43230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278247, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.657 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e9905d67-c759-4964-b0e3-9360115f84e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687048, 'tstamp': 687048}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278248, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687052, 'tstamp': 687052}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278248, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.659 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 nova_compute[232428]: 2025-11-29 08:13:03.667 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.667 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.668 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.668 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:03.668 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.194 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance shutdown successfully after 13 seconds.
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.199 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance destroyed successfully.
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.203 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance destroyed successfully.
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.205 232432 DEBUG nova.virt.libvirt.vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-ServerActionsTestJSON-server-2048715280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:49Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.205 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.206 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.206 232432 DEBUG os_vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.209 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.209 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd99aab02-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.211 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.213 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.215 232432 INFO os_vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27')
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.715 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deleting instance files /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035_del
Nov 29 08:13:04 compute-2 nova_compute[232428]: 2025-11-29 08:13:04.716 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deletion of /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035_del complete
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.038 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.039 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating image(s)
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.074 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.108 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.142 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.147 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.148 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.314 232432 DEBUG nova.compute.manager [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.314 232432 DEBUG oslo_concurrency.lockutils [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.314 232432 DEBUG oslo_concurrency.lockutils [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.315 232432 DEBUG oslo_concurrency.lockutils [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.315 232432 DEBUG nova.compute.manager [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.315 232432 WARNING nova.compute.manager [req-a4331649-be5c-4175-af6c-2f68426b7fa5 req-3ab43cc4-5bec-40f6-8a11-c31d3f968e53 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received unexpected event network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with vm_state active and task_state rebuild_spawning.
Nov 29 08:13:05 compute-2 ceph-mon[77138]: pgmap v2192: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 359 op/s
Nov 29 08:13:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:05.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:05 compute-2 nova_compute[232428]: 2025-11-29 08:13:05.967 232432 DEBUG nova.virt.libvirt.imagebackend [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:13:06 compute-2 nova_compute[232428]: 2025-11-29 08:13:06.156 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:07.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:07 compute-2 ceph-mon[77138]: pgmap v2193: 305 pgs: 305 active+clean; 475 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 351 op/s
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.733 232432 DEBUG nova.compute.manager [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.734 232432 DEBUG oslo_concurrency.lockutils [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.734 232432 DEBUG oslo_concurrency.lockutils [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.734 232432 DEBUG oslo_concurrency.lockutils [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.735 232432 DEBUG nova.compute.manager [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.735 232432 WARNING nova.compute.manager [req-52f3e223-fb85-422d-8f96-2f380d627db1 req-9523c177-24fd-48e9-8410-ab4dcb03cdde 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received unexpected event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with vm_state active and task_state rebuild_spawning.
Nov 29 08:13:08 compute-2 nova_compute[232428]: 2025-11-29 08:13:08.933 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.010 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.013 232432 DEBUG nova.virt.images [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] ed489666-5fa2-4ea4-8005-7a7505ac1b78 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.014 232432 DEBUG nova.privsep.utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.015 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.211 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.224 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.229 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.302 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.305 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.345 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.348 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 2ac9da94-cac6-4662-9bcf-9185ca957035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:09 compute-2 ceph-mon[77138]: pgmap v2194: 305 pgs: 305 active+clean; 472 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.5 MiB/s wr, 203 op/s
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.702 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 2ac9da94-cac6-4662-9bcf-9185ca957035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.809 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] resizing rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.957 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.957 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Ensure instance console log exists: /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.958 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.958 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.958 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.961 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start _get_guest_xml network_info=[{"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.965 232432 WARNING nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.974 232432 DEBUG nova.virt.libvirt.host [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:13:09 compute-2 nova_compute[232428]: 2025-11-29 08:13:09.974 232432 DEBUG nova.virt.libvirt.host [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.031 232432 DEBUG nova.virt.libvirt.host [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.032 232432 DEBUG nova.virt.libvirt.host [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.034 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.034 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.034 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.035 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.035 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.035 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.035 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.035 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.036 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.036 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.036 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.036 232432 DEBUG nova.virt.hardware [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:13:10 compute-2 nova_compute[232428]: 2025-11-29 08:13:10.036 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:10 compute-2 sudo[278462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:10 compute-2 sudo[278462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:10 compute-2 sudo[278462]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:10 compute-2 sudo[278487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:10 compute-2 sudo[278487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:10 compute-2 sudo[278487]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:11.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:11 compute-2 nova_compute[232428]: 2025-11-29 08:13:11.159 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:11.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:11 compute-2 ceph-mon[77138]: pgmap v2195: 305 pgs: 305 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.0 MiB/s wr, 252 op/s
Nov 29 08:13:11 compute-2 nova_compute[232428]: 2025-11-29 08:13:11.905 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3789732447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.324 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.369 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.376 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3789732447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3291386794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.866 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.868 232432 DEBUG nova.virt.libvirt.vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-ServerActionsTestJSON-server-2048715280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name
='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:04Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.869 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.870 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.874 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <uuid>2ac9da94-cac6-4662-9bcf-9185ca957035</uuid>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <name>instance-0000006d</name>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-2048715280</nova:name>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:13:09</nova:creationTime>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <nova:port uuid="d99aab02-2744-459b-978f-87807bdafb90">
Nov 29 08:13:12 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <system>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="serial">2ac9da94-cac6-4662-9bcf-9185ca957035</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="uuid">2ac9da94-cac6-4662-9bcf-9185ca957035</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </system>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <os>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </os>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <features>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </features>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ac9da94-cac6-4662-9bcf-9185ca957035_disk">
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config">
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:13:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:fa:b9:0c"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <target dev="tapd99aab02-27"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/console.log" append="off"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <video>
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </video>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:13:12 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:13:12 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:13:12 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:13:12 compute-2 nova_compute[232428]: </domain>
Nov 29 08:13:12 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.875 232432 DEBUG nova.compute.manager [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Preparing to wait for external event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.875 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.875 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.876 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.877 232432 DEBUG nova.virt.libvirt.vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-ServerActionsTestJSON-server-2048715280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name
='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:04Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.877 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.878 232432 DEBUG nova.network.os_vif_util [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.878 232432 DEBUG os_vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.879 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.880 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.880 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.885 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd99aab02-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.885 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd99aab02-27, col_values=(('external_ids', {'iface-id': 'd99aab02-2744-459b-978f-87807bdafb90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:b9:0c', 'vm-uuid': '2ac9da94-cac6-4662-9bcf-9185ca957035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.887 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:12 compute-2 NetworkManager[48993]: <info>  [1764403992.8891] manager: (tapd99aab02-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.892 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.893 232432 INFO os_vif [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27')
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.994 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.995 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.995 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:fa:b9:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:13:12 compute-2 nova_compute[232428]: 2025-11-29 08:13:12.995 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Using config drive
Nov 29 08:13:13 compute-2 nova_compute[232428]: 2025-11-29 08:13:13.022 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:13 compute-2 nova_compute[232428]: 2025-11-29 08:13:13.062 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:13 compute-2 nova_compute[232428]: 2025-11-29 08:13:13.104 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'keypairs' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:13.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:13.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:13 compute-2 ceph-mon[77138]: pgmap v2196: 305 pgs: 305 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.8 MiB/s wr, 163 op/s
Nov 29 08:13:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3291386794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:15.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:15 compute-2 ceph-mon[77138]: pgmap v2197: 305 pgs: 305 active+clean; 461 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.3 MiB/s wr, 181 op/s
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.169 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.187 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Creating config drive at /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.193 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzd_ow78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.341 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzd_ow78" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.380 232432 DEBUG nova.storage.rbd_utils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.385 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.600 232432 DEBUG oslo_concurrency.processutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config 2ac9da94-cac6-4662-9bcf-9185ca957035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.601 232432 INFO nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deleting local config drive /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035/disk.config because it was imported into RBD.
Nov 29 08:13:16 compute-2 kernel: tapd99aab02-27: entered promiscuous mode
Nov 29 08:13:16 compute-2 NetworkManager[48993]: <info>  [1764403996.6764] manager: (tapd99aab02-27): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.676 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:16 compute-2 ovn_controller[134375]: 2025-11-29T08:13:16Z|00513|binding|INFO|Claiming lport d99aab02-2744-459b-978f-87807bdafb90 for this chassis.
Nov 29 08:13:16 compute-2 ovn_controller[134375]: 2025-11-29T08:13:16Z|00514|binding|INFO|d99aab02-2744-459b-978f-87807bdafb90: Claiming fa:16:3e:fa:b9:0c 10.100.0.13
Nov 29 08:13:16 compute-2 ovn_controller[134375]: 2025-11-29T08:13:16Z|00515|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 ovn-installed in OVS
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.700 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:16 compute-2 nova_compute[232428]: 2025-11-29 08:13:16.705 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:16 compute-2 systemd-udevd[278650]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:13:16 compute-2 systemd-machined[194747]: New machine qemu-49-instance-0000006d.
Nov 29 08:13:16 compute-2 NetworkManager[48993]: <info>  [1764403996.7253] device (tapd99aab02-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:13:16 compute-2 NetworkManager[48993]: <info>  [1764403996.7269] device (tapd99aab02-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:13:16 compute-2 systemd[1]: Started Virtual Machine qemu-49-instance-0000006d.
Nov 29 08:13:16 compute-2 ovn_controller[134375]: 2025-11-29T08:13:16Z|00516|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 up in Southbound
Nov 29 08:13:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:16.961 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b9:0c 10.100.0.13'], port_security=['fa:16:3e:fa:b9:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ac9da94-cac6-4662-9bcf-9185ca957035', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '12a7db5e-3d29-492a-8e3f-e9e843cf9feb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d99aab02-2744-459b-978f-87807bdafb90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:16.964 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d99aab02-2744-459b-978f-87807bdafb90 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:13:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:16.969 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:16.995 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7599207-bd44-4fa8-bde7-9aaf7d8a8c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.025 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[eb337671-eb1c-4144-a720-56a2841baee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.029 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[92f576e1-d89b-47ae-ae0e-68d9146c66c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.057 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[26ff10f8-c7d6-4e5a-8543-5add6238d0c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.083 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab6fd7e-15bc-4262-8bcc-2e8f87603e16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 43230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278665, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.106 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c1476faf-7a86-4f1f-a71b-f4b201a6752e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687048, 'tstamp': 687048}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278666, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687052, 'tstamp': 687052}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278666, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.108 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.110 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.111 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.112 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.112 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:17.112 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.494 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 2ac9da94-cac6-4662-9bcf-9185ca957035 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.495 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403997.4929893, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.495 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Started (Lifecycle Event)
Nov 29 08:13:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:17.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.532 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.538 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403997.4931664, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.538 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Paused (Lifecycle Event)
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.651 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.654 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.696 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 08:13:17 compute-2 nova_compute[232428]: 2025-11-29 08:13:17.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:17 compute-2 ceph-mon[77138]: pgmap v2198: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 208 op/s
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.048 232432 DEBUG nova.compute.manager [req-e098a4c8-821a-49d1-aa26-ffd46f33dea3 req-81c70742-c0f5-47b0-a7d0-ce29835af219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.048 232432 DEBUG oslo_concurrency.lockutils [req-e098a4c8-821a-49d1-aa26-ffd46f33dea3 req-81c70742-c0f5-47b0-a7d0-ce29835af219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.049 232432 DEBUG oslo_concurrency.lockutils [req-e098a4c8-821a-49d1-aa26-ffd46f33dea3 req-81c70742-c0f5-47b0-a7d0-ce29835af219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.049 232432 DEBUG oslo_concurrency.lockutils [req-e098a4c8-821a-49d1-aa26-ffd46f33dea3 req-81c70742-c0f5-47b0-a7d0-ce29835af219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.049 232432 DEBUG nova.compute.manager [req-e098a4c8-821a-49d1-aa26-ffd46f33dea3 req-81c70742-c0f5-47b0-a7d0-ce29835af219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Processing event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.050 232432 DEBUG nova.compute.manager [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.055 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764403998.0551162, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.055 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Resumed (Lifecycle Event)
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.058 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.062 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance spawned successfully.
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.062 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.108 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.112 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.129 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.129 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.130 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.131 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.132 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.132 232432 DEBUG nova.virt.libvirt.driver [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.176 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.227 232432 DEBUG nova.compute.manager [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.326 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.326 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.326 232432 DEBUG nova.objects.instance [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 08:13:18 compute-2 nova_compute[232428]: 2025-11-29 08:13:18.502 232432 DEBUG oslo_concurrency.lockutils [None req-4e77c5e7-a3b6-4c44-b008-317a1cbc1da7 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2610712328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3422734767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1837911653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:19.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:19 compute-2 podman[278710]: 2025-11-29 08:13:19.697283505 +0000 UTC m=+0.094053572 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:13:19 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 29 08:13:19 compute-2 ceph-mon[77138]: pgmap v2199: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.4 MiB/s wr, 171 op/s
Nov 29 08:13:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.937 232432 DEBUG nova.compute.manager [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.938 232432 DEBUG oslo_concurrency.lockutils [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.938 232432 DEBUG oslo_concurrency.lockutils [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.938 232432 DEBUG oslo_concurrency.lockutils [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.938 232432 DEBUG nova.compute.manager [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:20 compute-2 nova_compute[232428]: 2025-11-29 08:13:20.939 232432 WARNING nova.compute.manager [req-ff7e81bc-8261-45ee-8ee1-0d21c7383e95 req-472eafa3-2b7b-4691-a2a6-2ee7f76c20b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received unexpected event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with vm_state active and task_state None.
Nov 29 08:13:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/827675621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:21 compute-2 nova_compute[232428]: 2025-11-29 08:13:21.172 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:21 compute-2 podman[278731]: 2025-11-29 08:13:21.71097551 +0000 UTC m=+0.108184864 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 08:13:22 compute-2 ceph-mon[77138]: pgmap v2200: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 236 op/s
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.223 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.224 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.225 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.225 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.226 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.228 232432 INFO nova.compute.manager [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Terminating instance
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.230 232432 DEBUG nova.compute.manager [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:13:22 compute-2 kernel: tapd99aab02-27 (unregistering): left promiscuous mode
Nov 29 08:13:22 compute-2 NetworkManager[48993]: <info>  [1764404002.2786] device (tapd99aab02-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:13:22 compute-2 ovn_controller[134375]: 2025-11-29T08:13:22Z|00517|binding|INFO|Releasing lport d99aab02-2744-459b-978f-87807bdafb90 from this chassis (sb_readonly=0)
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 ovn_controller[134375]: 2025-11-29T08:13:22Z|00518|binding|INFO|Setting lport d99aab02-2744-459b-978f-87807bdafb90 down in Southbound
Nov 29 08:13:22 compute-2 ovn_controller[134375]: 2025-11-29T08:13:22Z|00519|binding|INFO|Removing iface tapd99aab02-27 ovn-installed in OVS
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.299 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b9:0c 10.100.0.13'], port_security=['fa:16:3e:fa:b9:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ac9da94-cac6-4662-9bcf-9185ca957035', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '12a7db5e-3d29-492a-8e3f-e9e843cf9feb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d99aab02-2744-459b-978f-87807bdafb90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.301 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d99aab02-2744-459b-978f-87807bdafb90 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.302 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.310 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.324 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4d564fb4-0d84-4b40-b804-b1ebf0d16da3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 29 08:13:22 compute-2 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Consumed 5.022s CPU time.
Nov 29 08:13:22 compute-2 systemd-machined[194747]: Machine qemu-49-instance-0000006d terminated.
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.365 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[48b48a09-739c-4fc3-9871-aee9a9d76c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.369 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[81f2f8f8-5afd-494a-9190-a45bc49acbbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.407 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[19df496b-81b6-4739-99c6-a20c0dc02eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.439 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e345fb36-9029-4d9d-9601-cdf22515be55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687033, 'reachable_time': 43230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278764, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.470 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Instance destroyed successfully.
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.471 232432 DEBUG nova.objects.instance [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ac9da94-cac6-4662-9bcf-9185ca957035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.472 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[692ea1a5-cd23-4205-afa2-6924041d1ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687048, 'tstamp': 687048}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278766, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap988c10fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687052, 'tstamp': 687052}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278766, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.473 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.475 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.480 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.481 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.481 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.481 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:22.482 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.493 232432 DEBUG nova.virt.libvirt.vif [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-216254991',display_name='tempest-ServerActionsTestJSON-server-2048715280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-216254991',id=109,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-fpx0xpyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:18Z,user_data=None,user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ac9da94-cac6-4662-9bcf-9185ca957035,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.493 232432 DEBUG nova.network.os_vif_util [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "d99aab02-2744-459b-978f-87807bdafb90", "address": "fa:16:3e:fa:b9:0c", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd99aab02-27", "ovs_interfaceid": "d99aab02-2744-459b-978f-87807bdafb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.495 232432 DEBUG nova.network.os_vif_util [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.496 232432 DEBUG os_vif [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.500 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd99aab02-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.507 232432 INFO os_vif [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=d99aab02-2744-459b-978f-87807bdafb90,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd99aab02-27')
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.989 232432 INFO nova.virt.libvirt.driver [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deleting instance files /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035_del
Nov 29 08:13:22 compute-2 nova_compute[232428]: 2025-11-29 08:13:22.990 232432 INFO nova.virt.libvirt.driver [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deletion of /var/lib/nova/instances/2ac9da94-cac6-4662-9bcf-9185ca957035_del complete
Nov 29 08:13:23 compute-2 nova_compute[232428]: 2025-11-29 08:13:23.063 232432 INFO nova.compute.manager [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 29 08:13:23 compute-2 nova_compute[232428]: 2025-11-29 08:13:23.064 232432 DEBUG oslo.service.loopingcall [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:13:23 compute-2 nova_compute[232428]: 2025-11-29 08:13:23.064 232432 DEBUG nova.compute.manager [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:13:23 compute-2 nova_compute[232428]: 2025-11-29 08:13:23.065 232432 DEBUG nova.network.neutron [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:13:23 compute-2 sshd-session[278797]: Invalid user sol from 45.148.10.240 port 57198
Nov 29 08:13:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:23 compute-2 sshd-session[278797]: Connection closed by invalid user sol 45.148.10.240 port 57198 [preauth]
Nov 29 08:13:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:24 compute-2 ceph-mon[77138]: pgmap v2201: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 145 op/s
Nov 29 08:13:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.385 232432 DEBUG nova.compute.manager [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.386 232432 DEBUG oslo_concurrency.lockutils [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.386 232432 DEBUG oslo_concurrency.lockutils [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.386 232432 DEBUG oslo_concurrency.lockutils [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.386 232432 DEBUG nova.compute.manager [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:25 compute-2 nova_compute[232428]: 2025-11-29 08:13:25.387 232432 DEBUG nova.compute.manager [req-96ef10b8-a665-4718-829d-4977d6968cc8 req-62ef6bfc-ed9d-4835-aa24-3c30283aa963 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-unplugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:13:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:26 compute-2 ceph-mon[77138]: pgmap v2202: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 186 op/s
Nov 29 08:13:26 compute-2 nova_compute[232428]: 2025-11-29 08:13:26.176 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:26 compute-2 nova_compute[232428]: 2025-11-29 08:13:26.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:26.463 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:26.465 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:13:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:27.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:27 compute-2 nova_compute[232428]: 2025-11-29 08:13:27.276 232432 DEBUG nova.network.neutron [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:27 compute-2 nova_compute[232428]: 2025-11-29 08:13:27.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:27.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:27 compute-2 nova_compute[232428]: 2025-11-29 08:13:27.846 232432 INFO nova.compute.manager [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Took 4.78 seconds to deallocate network for instance.
Nov 29 08:13:28 compute-2 ceph-mon[77138]: pgmap v2203: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 286 op/s
Nov 29 08:13:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399546332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399546332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.617 232432 DEBUG nova.compute.manager [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.618 232432 DEBUG oslo_concurrency.lockutils [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.618 232432 DEBUG oslo_concurrency.lockutils [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.618 232432 DEBUG oslo_concurrency.lockutils [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.619 232432 DEBUG nova.compute.manager [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] No waiting events found dispatching network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.619 232432 WARNING nova.compute.manager [req-be57dd84-f761-4e80-b557-d9f6c347a5e6 req-872fa3d9-eb1e-4a3c-8e68-5934e2a8026d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received unexpected event network-vif-plugged-d99aab02-2744-459b-978f-87807bdafb90 for instance with vm_state active and task_state deleting.
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.620 232432 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Received event network-vif-deleted-d99aab02-2744-459b-978f-87807bdafb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.621 232432 INFO nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Neutron deleted interface d99aab02-2744-459b-978f-87807bdafb90; detaching it from the instance and deleting it from the info cache
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.621 232432 DEBUG nova.network.neutron [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.646 232432 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Detach interface failed, port_id=d99aab02-2744-459b-978f-87807bdafb90, reason: Instance 2ac9da94-cac6-4662-9bcf-9185ca957035 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.709 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.710 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.756 232432 DEBUG nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.783 232432 DEBUG nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.784 232432 DEBUG nova.compute.provider_tree [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.827 232432 DEBUG nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:13:28 compute-2 nova_compute[232428]: 2025-11-29 08:13:28.853 232432 DEBUG nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:13:29 compute-2 nova_compute[232428]: 2025-11-29 08:13:29.017 232432 DEBUG oslo_concurrency.processutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/399546332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/399546332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2011598559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:29 compute-2 nova_compute[232428]: 2025-11-29 08:13:29.510 232432 DEBUG oslo_concurrency.processutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:29 compute-2 nova_compute[232428]: 2025-11-29 08:13:29.518 232432 DEBUG nova.compute.provider_tree [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:29.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:30 compute-2 ceph-mon[77138]: pgmap v2204: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 309 op/s
Nov 29 08:13:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2011598559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:30 compute-2 nova_compute[232428]: 2025-11-29 08:13:30.419 232432 DEBUG nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:30.469 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:30 compute-2 nova_compute[232428]: 2025-11-29 08:13:30.681 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:30 compute-2 nova_compute[232428]: 2025-11-29 08:13:30.738 232432 INFO nova.scheduler.client.report [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Deleted allocations for instance 2ac9da94-cac6-4662-9bcf-9185ca957035
Nov 29 08:13:30 compute-2 nova_compute[232428]: 2025-11-29 08:13:30.801 232432 DEBUG oslo_concurrency.lockutils [None req-181ac6a1-8cdc-41cd-a19f-6ba439bf2e77 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ac9da94-cac6-4662-9bcf-9185ca957035" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:30 compute-2 sudo[278826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:30 compute-2 sudo[278826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-2 sudo[278826]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:30 compute-2 sudo[278851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:30 compute-2 sudo[278851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-2 sudo[278851]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:31 compute-2 ceph-mon[77138]: pgmap v2205: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 307 op/s
Nov 29 08:13:31 compute-2 nova_compute[232428]: 2025-11-29 08:13:31.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:32 compute-2 nova_compute[232428]: 2025-11-29 08:13:32.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:33 compute-2 ceph-mon[77138]: pgmap v2206: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 544 KiB/s wr, 216 op/s
Nov 29 08:13:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1555366208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:33.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:33 compute-2 podman[278877]: 2025-11-29 08:13:33.828593924 +0000 UTC m=+0.212610188 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:13:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:35.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:35 compute-2 ceph-mon[77138]: pgmap v2207: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 544 KiB/s wr, 216 op/s
Nov 29 08:13:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:35 compute-2 sudo[278905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:35 compute-2 sudo[278905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:35 compute-2 sudo[278905]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:35 compute-2 sudo[278930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:35 compute-2 sudo[278930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:35 compute-2 sudo[278930]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:35 compute-2 sudo[278955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:35 compute-2 sudo[278955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:35 compute-2 sudo[278955]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:36 compute-2 sudo[278980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:13:36 compute-2 sudo[278980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:36 compute-2 nova_compute[232428]: 2025-11-29 08:13:36.181 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:36 compute-2 podman[279078]: 2025-11-29 08:13:36.824240935 +0000 UTC m=+0.091334247 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:13:37 compute-2 podman[279078]: 2025-11-29 08:13:37.001730808 +0000 UTC m=+0.268824130 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:13:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:37.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:37 compute-2 ceph-mon[77138]: pgmap v2208: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 154 KiB/s wr, 178 op/s
Nov 29 08:13:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:37 compute-2 nova_compute[232428]: 2025-11-29 08:13:37.467 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404002.4652452, 2ac9da94-cac6-4662-9bcf-9185ca957035 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:37 compute-2 nova_compute[232428]: 2025-11-29 08:13:37.467 232432 INFO nova.compute.manager [-] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] VM Stopped (Lifecycle Event)
Nov 29 08:13:37 compute-2 nova_compute[232428]: 2025-11-29 08:13:37.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:37 compute-2 nova_compute[232428]: 2025-11-29 08:13:37.530 232432 DEBUG nova.compute.manager [None req-d7fb8dcc-723b-4241-84ab-ad2aa933232f - - - - - -] [instance: 2ac9da94-cac6-4662-9bcf-9185ca957035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:37.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:37 compute-2 podman[279232]: 2025-11-29 08:13:37.863162317 +0000 UTC m=+0.086310271 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:13:37 compute-2 podman[279232]: 2025-11-29 08:13:37.877751032 +0000 UTC m=+0.100898966 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:13:38 compute-2 podman[279299]: 2025-11-29 08:13:38.198133549 +0000 UTC m=+0.081235354 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Nov 29 08:13:38 compute-2 podman[279299]: 2025-11-29 08:13:38.218003757 +0000 UTC m=+0.101105532 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 08:13:38 compute-2 sudo[278980]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:38 compute-2 sudo[279333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:38 compute-2 sudo[279333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:38 compute-2 sudo[279333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:38 compute-2 sudo[279358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:38 compute-2 sudo[279358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:38 compute-2 sudo[279358]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:38 compute-2 sudo[279383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:38 compute-2 sudo[279383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:38 compute-2 sudo[279383]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:38 compute-2 sudo[279408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:13:38 compute-2 sudo[279408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:39 compute-2 sudo[279408]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:39 compute-2 ceph-mon[77138]: pgmap v2209: 305 pgs: 305 active+clean; 477 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 113 op/s
Nov 29 08:13:39 compute-2 sudo[279465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:39 compute-2 sudo[279465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:39 compute-2 sudo[279465]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:39 compute-2 sudo[279490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:39 compute-2 sudo[279490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:39 compute-2 sudo[279490]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:39 compute-2 sudo[279515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:39 compute-2 sudo[279515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:39 compute-2 sudo[279515]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:39.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:39 compute-2 sudo[279540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 08:13:39 compute-2 sudo[279540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:39 compute-2 nova_compute[232428]: 2025-11-29 08:13:39.839 232432 DEBUG nova.compute.manager [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 08:13:39 compute-2 podman[279607]: 2025-11-29 08:13:39.965296819 +0000 UTC m=+0.064860363 container create 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:39 compute-2 nova_compute[232428]: 2025-11-29 08:13:39.979 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:39 compute-2 nova_compute[232428]: 2025-11-29 08:13:39.979 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:40 compute-2 systemd[1]: Started libpod-conmon-1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970.scope.
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.028 232432 DEBUG nova.objects.instance [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:39.945860743 +0000 UTC m=+0.045424307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.059 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.060 232432 INFO nova.compute.claims [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.061 232432 DEBUG nova.objects.instance [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:40.062946412 +0000 UTC m=+0.162509996 container init 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:40.071458108 +0000 UTC m=+0.171021652 container start 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:40.076218977 +0000 UTC m=+0.175782541 container attach 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:40 compute-2 hopeful_lalande[279623]: 167 167
Nov 29 08:13:40 compute-2 systemd[1]: libpod-1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970.scope: Deactivated successfully.
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:40.083693519 +0000 UTC m=+0.183257063 container died 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.084 232432 DEBUG nova.objects.instance [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:40 compute-2 systemd[1]: var-lib-containers-storage-overlay-36cc540c2581c61cd61bd7588de2ddba7d25648312a66bcada124322514f5748-merged.mount: Deactivated successfully.
Nov 29 08:13:40 compute-2 podman[279607]: 2025-11-29 08:13:40.128672261 +0000 UTC m=+0.228235845 container remove 1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.178 232432 INFO nova.compute.resource_tracker [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating resource usage from migration 52467741-49f3-43d6-8911-ec5300b8359f
Nov 29 08:13:40 compute-2 systemd[1]: libpod-conmon-1b4c12ea289fb493760c69ae2bf10c4fe039d3d2fb76246302a3dfaf1c40a970.scope: Deactivated successfully.
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.291 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:40 compute-2 podman[279648]: 2025-11-29 08:13:40.408919126 +0000 UTC m=+0.049176414 container create 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:13:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:40 compute-2 systemd[1]: Started libpod-conmon-22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619.scope.
Nov 29 08:13:40 compute-2 podman[279648]: 2025-11-29 08:13:40.390908575 +0000 UTC m=+0.031165883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:13:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da17ce078ad86e979d768703ed6f724e4e7492e60d0373dc5248bb38d508d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da17ce078ad86e979d768703ed6f724e4e7492e60d0373dc5248bb38d508d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da17ce078ad86e979d768703ed6f724e4e7492e60d0373dc5248bb38d508d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da17ce078ad86e979d768703ed6f724e4e7492e60d0373dc5248bb38d508d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:40 compute-2 podman[279648]: 2025-11-29 08:13:40.523864879 +0000 UTC m=+0.164122177 container init 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:13:40 compute-2 podman[279648]: 2025-11-29 08:13:40.533145798 +0000 UTC m=+0.173403096 container start 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:40 compute-2 podman[279648]: 2025-11-29 08:13:40.537272977 +0000 UTC m=+0.177530265 container attach 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:13:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/69177356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.833 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.844 232432 DEBUG nova.compute.provider_tree [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.888 232432 DEBUG nova.scheduler.client.report [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.986 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:40 compute-2 nova_compute[232428]: 2025-11-29 08:13:40.986 232432 INFO nova.compute.manager [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Migrating
Nov 29 08:13:41 compute-2 nova_compute[232428]: 2025-11-29 08:13:41.132 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:41 compute-2 nova_compute[232428]: 2025-11-29 08:13:41.132 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:41 compute-2 nova_compute[232428]: 2025-11-29 08:13:41.132 232432 DEBUG nova.network.neutron [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:13:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:41 compute-2 nova_compute[232428]: 2025-11-29 08:13:41.182 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:41 compute-2 ceph-mon[77138]: pgmap v2210: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 691 KiB/s rd, 4.3 MiB/s wr, 136 op/s
Nov 29 08:13:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/69177356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:41.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:41 compute-2 epic_torvalds[279675]: [
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:     {
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "available": false,
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "ceph_device": false,
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "lsm_data": {},
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "lvs": [],
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "path": "/dev/sr0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "rejected_reasons": [
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "Insufficient space (<5GB)",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "Has a FileSystem"
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         ],
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         "sys_api": {
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "actuators": null,
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "device_nodes": "sr0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "devname": "sr0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "human_readable_size": "482.00 KB",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "id_bus": "ata",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "model": "QEMU DVD-ROM",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "nr_requests": "2",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "parent": "/dev/sr0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "partitions": {},
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "path": "/dev/sr0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "removable": "1",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "rev": "2.5+",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "ro": "0",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "rotational": "1",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "sas_address": "",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "sas_device_handle": "",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "scheduler_mode": "mq-deadline",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "sectors": 0,
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "sectorsize": "2048",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "size": 493568.0,
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "support_discard": "2048",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "type": "disk",
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:             "vendor": "QEMU"
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:         }
Nov 29 08:13:41 compute-2 epic_torvalds[279675]:     }
Nov 29 08:13:41 compute-2 epic_torvalds[279675]: ]
Nov 29 08:13:41 compute-2 systemd[1]: libpod-22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619.scope: Deactivated successfully.
Nov 29 08:13:41 compute-2 systemd[1]: libpod-22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619.scope: Consumed 1.285s CPU time.
Nov 29 08:13:41 compute-2 conmon[279675]: conmon 22f6948534e3d979d38a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619.scope/container/memory.events
Nov 29 08:13:41 compute-2 podman[279648]: 2025-11-29 08:13:41.884149417 +0000 UTC m=+1.524406705 container died 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:13:41 compute-2 systemd[1]: var-lib-containers-storage-overlay-7da17ce078ad86e979d768703ed6f724e4e7492e60d0373dc5248bb38d508d11-merged.mount: Deactivated successfully.
Nov 29 08:13:41 compute-2 podman[279648]: 2025-11-29 08:13:41.981942655 +0000 UTC m=+1.622199943 container remove 22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:41 compute-2 systemd[1]: libpod-conmon-22f6948534e3d979d38a71fdd5854562638b3d26607f8d2e3c5582e0c889d619.scope: Deactivated successfully.
Nov 29 08:13:42 compute-2 sudo[279540]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:42 compute-2 nova_compute[232428]: 2025-11-29 08:13:42.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:13:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:43 compute-2 nova_compute[232428]: 2025-11-29 08:13:43.323 232432 DEBUG nova.network.neutron [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:43 compute-2 nova_compute[232428]: 2025-11-29 08:13:43.358 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:43 compute-2 nova_compute[232428]: 2025-11-29 08:13:43.536 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 08:13:43 compute-2 nova_compute[232428]: 2025-11-29 08:13:43.542 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:13:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:44 compute-2 ceph-mon[77138]: pgmap v2211: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Nov 29 08:13:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3103633754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3103633754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:45.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:45 compute-2 kernel: tap25f618be-49 (unregistering): left promiscuous mode
Nov 29 08:13:45 compute-2 NetworkManager[48993]: <info>  [1764404025.8643] device (tap25f618be-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:13:45 compute-2 nova_compute[232428]: 2025-11-29 08:13:45.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:45 compute-2 ovn_controller[134375]: 2025-11-29T08:13:45Z|00520|binding|INFO|Releasing lport 25f618be-492d-4ac9-9c9c-6583e0402572 from this chassis (sb_readonly=0)
Nov 29 08:13:45 compute-2 ovn_controller[134375]: 2025-11-29T08:13:45Z|00521|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 down in Southbound
Nov 29 08:13:45 compute-2 nova_compute[232428]: 2025-11-29 08:13:45.881 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-2 ovn_controller[134375]: 2025-11-29T08:13:45Z|00522|binding|INFO|Removing iface tap25f618be-49 ovn-installed in OVS
Nov 29 08:13:45 compute-2 nova_compute[232428]: 2025-11-29 08:13:45.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:45.894 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '10', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:45.896 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:13:45 compute-2 nova_compute[232428]: 2025-11-29 08:13:45.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:45.899 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:13:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:45.901 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3bfc39-a43e-4c38-b056-97837e042d3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:45.901 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:13:45 compute-2 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 08:13:45 compute-2 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000064.scope: Consumed 21.296s CPU time.
Nov 29 08:13:45 compute-2 systemd-machined[194747]: Machine qemu-47-instance-00000064 terminated.
Nov 29 08:13:46 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [NOTICE]   (276987) : haproxy version is 2.8.14-c23fe91
Nov 29 08:13:46 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [NOTICE]   (276987) : path to executable is /usr/sbin/haproxy
Nov 29 08:13:46 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [WARNING]  (276987) : Exiting Master process...
Nov 29 08:13:46 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [ALERT]    (276987) : Current worker (276991) exited with code 143 (Terminated)
Nov 29 08:13:46 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[276969]: [WARNING]  (276987) : All workers exited. Exiting... (0)
Nov 29 08:13:46 compute-2 systemd[1]: libpod-e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6.scope: Deactivated successfully.
Nov 29 08:13:46 compute-2 podman[281008]: 2025-11-29 08:13:46.055929166 +0000 UTC m=+0.054862411 container died e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:13:46 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6-userdata-shm.mount: Deactivated successfully.
Nov 29 08:13:46 compute-2 ceph-mon[77138]: pgmap v2212: 305 pgs: 305 active+clean; 505 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 4.3 MiB/s wr, 135 op/s
Nov 29 08:13:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-b2c825ced726bc342f67b48a6a7f61a5e8d5e35434a1a009df35c349d1013fb7-merged.mount: Deactivated successfully.
Nov 29 08:13:46 compute-2 podman[281008]: 2025-11-29 08:13:46.101069834 +0000 UTC m=+0.100003109 container cleanup e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.109 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 systemd[1]: libpod-conmon-e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6.scope: Deactivated successfully.
Nov 29 08:13:46 compute-2 podman[281046]: 2025-11-29 08:13:46.182397919 +0000 UTC m=+0.051123105 container remove e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.185 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.191 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[948714c6-8dfe-4b4b-ac96-d8a4be791f62]: (4, ('Sat Nov 29 08:13:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6)\ne6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6\nSat Nov 29 08:13:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (e6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6)\ne6153a05e9bb35ba9b16b06340977629b0fe3bdc8ea01139743f124b878735c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.193 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5f07ea71-8c1c-45aa-b110-6138ad76e64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.195 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.197 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.215 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.218 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1e926031-e5e8-4f78-a2a3-95ea8761f5df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.238 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa158529-cbee-4a0c-b177-dc4744f4dff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.240 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[401c1c47-47f3-40be-a352-049d43533bd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.261 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[93c14d73-630d-41a4-84f2-4a1010019491]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687025, 'reachable_time': 20186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281066, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.265 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:13:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:46.265 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c22299e3-532e-468f-82df-6a5e5e3ef6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.511 232432 DEBUG nova.compute.manager [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.512 232432 DEBUG oslo_concurrency.lockutils [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.513 232432 DEBUG oslo_concurrency.lockutils [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.514 232432 DEBUG oslo_concurrency.lockutils [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.514 232432 DEBUG nova.compute.manager [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.515 232432 WARNING nova.compute.manager [req-ccc995f7-190e-426f-8dd1-12b2d858cea7 req-5d9bf0ad-9cc1-4134-8032-012acfb6af29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state resize_migrating.
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.563 232432 INFO nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance shutdown successfully after 3 seconds.
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.573 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.574 232432 DEBUG nova.virt.libvirt.vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.575 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.577 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.578 232432 DEBUG os_vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.581 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f618be-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.590 232432 INFO os_vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.597 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.597 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:46 compute-2 nova_compute[232428]: 2025-11-29 08:13:46.897 232432 DEBUG nova.network.neutron [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Port 25f618be-492d-4ac9-9c9c-6583e0402572 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.035 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.036 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.036 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1796266973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/243463392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:13:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:47.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.354 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.354 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:47 compute-2 nova_compute[232428]: 2025-11-29 08:13:47.354 232432 DEBUG nova.network.neutron [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:13:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:47.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:48 compute-2 ceph-mon[77138]: pgmap v2213: 305 pgs: 305 active+clean; 440 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 4.3 MiB/s wr, 164 op/s
Nov 29 08:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:48 compute-2 sudo[281068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:48 compute-2 sudo[281068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-2 sudo[281068]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-2 sudo[281093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:13:48 compute-2 sudo[281093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-2 sudo[281093]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.677 232432 DEBUG nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.678 232432 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.678 232432 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.679 232432 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.679 232432 DEBUG nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:48 compute-2 nova_compute[232428]: 2025-11-29 08:13:48.679 232432 WARNING nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state resize_migrated.
Nov 29 08:13:49 compute-2 ceph-mon[77138]: pgmap v2214: 305 pgs: 305 active+clean; 456 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 676 KiB/s rd, 5.0 MiB/s wr, 163 op/s
Nov 29 08:13:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:49.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1131043082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:50 compute-2 podman[281119]: 2025-11-29 08:13:50.695691661 +0000 UTC m=+0.088346964 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:13:51 compute-2 sudo[281139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:51 compute-2 sudo[281139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:51 compute-2 sudo[281139]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:51 compute-2 sudo[281164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:51 compute-2 sudo[281164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:51 compute-2 sudo[281164]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:51.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.187 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:51 compute-2 ceph-mon[77138]: pgmap v2215: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 888 KiB/s rd, 4.4 MiB/s wr, 164 op/s
Nov 29 08:13:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/587617492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:51.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.830 232432 DEBUG nova.network.neutron [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.866 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.874 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.874 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.875 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.989 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.991 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:13:51 compute-2 nova_compute[232428]: 2025-11-29 08:13:51.992 232432 INFO nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Creating image(s)
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.033 232432 DEBUG nova.storage.rbd_utils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] creating snapshot(nova-resize) on rbd image(2ed45397-ad95-4437-a0df-a49849d1d9bf_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:13:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.362 232432 DEBUG nova.objects.instance [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.533 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.534 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Ensure instance console log exists: /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.534 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.535 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.535 232432 DEBUG oslo_concurrency.lockutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.538 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start _get_guest_xml network_info=[{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.544 232432 WARNING nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.552 232432 DEBUG nova.virt.libvirt.host [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.552 232432 DEBUG nova.virt.libvirt.host [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.557 232432 DEBUG nova.virt.libvirt.host [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.557 232432 DEBUG nova.virt.libvirt.host [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.559 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.559 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.560 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.560 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.560 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.560 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.561 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.561 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.561 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.562 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.562 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.562 232432 DEBUG nova.virt.hardware [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.562 232432 DEBUG nova.objects.instance [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:52 compute-2 nova_compute[232428]: 2025-11-29 08:13:52.596 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:52 compute-2 podman[281262]: 2025-11-29 08:13:52.675038475 +0000 UTC m=+0.075890135 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:13:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2051601590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.052 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.093 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:53.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:53 compute-2 ceph-mon[77138]: pgmap v2216: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 29 08:13:53 compute-2 ceph-mon[77138]: osdmap e285: 3 total, 3 up, 3 in
Nov 29 08:13:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2051601590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1658938656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1055378530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.589 232432 DEBUG oslo_concurrency.processutils [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.591 232432 DEBUG nova.virt.libvirt.vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.591 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.592 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.595 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <uuid>2ed45397-ad95-4437-a0df-a49849d1d9bf</uuid>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <name>instance-00000064</name>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1161621840</nova:name>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:13:52</nova:creationTime>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <nova:port uuid="25f618be-492d-4ac9-9c9c-6583e0402572">
Nov 29 08:13:53 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <system>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="serial">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="uuid">2ed45397-ad95-4437-a0df-a49849d1d9bf</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </system>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <os>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </os>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <features>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </features>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk">
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2ed45397-ad95-4437-a0df-a49849d1d9bf_disk.config">
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:13:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e8:62:3f"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <target dev="tap25f618be-49"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf/console.log" append="off"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <video>
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </video>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:13:53 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:13:53 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:13:53 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:13:53 compute-2 nova_compute[232428]: </domain>
Nov 29 08:13:53 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.596 232432 DEBUG nova.virt.libvirt.vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.596 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e8:62:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.596 232432 DEBUG nova.network.os_vif_util [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.597 232432 DEBUG os_vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.598 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.598 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.602 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.602 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f618be-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.602 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f618be-49, col_values=(('external_ids', {'iface-id': '25f618be-492d-4ac9-9c9c-6583e0402572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:62:3f', 'vm-uuid': '2ed45397-ad95-4437-a0df-a49849d1d9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.643 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 NetworkManager[48993]: <info>  [1764404033.6453] manager: (tap25f618be-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.648 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.655 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.656 232432 INFO os_vif [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.744 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.745 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.746 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:e8:62:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.747 232432 INFO nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Using config drive
Nov 29 08:13:53 compute-2 kernel: tap25f618be-49: entered promiscuous mode
Nov 29 08:13:53 compute-2 NetworkManager[48993]: <info>  [1764404033.8802] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.882 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 ovn_controller[134375]: 2025-11-29T08:13:53Z|00523|binding|INFO|Claiming lport 25f618be-492d-4ac9-9c9c-6583e0402572 for this chassis.
Nov 29 08:13:53 compute-2 ovn_controller[134375]: 2025-11-29T08:13:53Z|00524|binding|INFO|25f618be-492d-4ac9-9c9c-6583e0402572: Claiming fa:16:3e:e8:62:3f 10.100.0.10
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.890 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.891 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.893 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:53 compute-2 ovn_controller[134375]: 2025-11-29T08:13:53Z|00525|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 ovn-installed in OVS
Nov 29 08:13:53 compute-2 ovn_controller[134375]: 2025-11-29T08:13:53Z|00526|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 up in Southbound
Nov 29 08:13:53 compute-2 nova_compute[232428]: 2025-11-29 08:13:53.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.912 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[42713778-e6b3-4704-85b0-251697c4e931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.914 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.917 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.917 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cac22e-d995-49ef-aab7-fa99d81bf6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.919 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b0cc36-e644-4573-ac3f-776406f00847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:53 compute-2 systemd-machined[194747]: New machine qemu-50-instance-00000064.
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.937 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[afb901d7-f2f0-40a2-91e7-adca5694262b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:53 compute-2 systemd[1]: Started Virtual Machine qemu-50-instance-00000064.
Nov 29 08:13:53 compute-2 systemd-udevd[281381]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:13:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:53.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7266c222-17b7-4f48-90a5-38d5f4499449]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:53 compute-2 NetworkManager[48993]: <info>  [1764404033.9744] device (tap25f618be-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:13:53 compute-2 NetworkManager[48993]: <info>  [1764404033.9755] device (tap25f618be-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.011 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8076088e-a498-4993-84bb-1daf23c67f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.016 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbb843a-563f-40da-8ef1-f048964b272d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 NetworkManager[48993]: <info>  [1764404034.0181] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.055 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a9544293-8092-4441-9dbe-111994be5ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.058 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[95d14a65-8b44-4451-9011-aedf8f8d7899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 NetworkManager[48993]: <info>  [1764404034.0864] device (tap988c10fa-90): carrier: link connected
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.093 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7d46cdd0-9c2f-49ec-94f9-0302136d6e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.115 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f468fd21-5af7-41a1-96fa-edfac3bda6f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700035, 'reachable_time': 44470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281411, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.133 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a9c1e9-8fa4-41e0-99c4-3d8f382b999a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700035, 'tstamp': 700035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281412, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.164 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bac24377-f318-4002-8b99-6269c2194bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700035, 'reachable_time': 44470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281413, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c3cd78-f27a-433c-99e4-fd24dea72e0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1658938656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.328 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b50672-c248-4aa6-b6cb-26d2b9ca5ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1055378530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.331 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.331 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.332 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:54 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:13:54 compute-2 NetworkManager[48993]: <info>  [1764404034.3370] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.339 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.340 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.341 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:54 compute-2 ovn_controller[134375]: 2025-11-29T08:13:54Z|00527|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.370 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.371 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.372 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.373 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[78d72e0d-e1b7-45f9-8242-5daad5a7a677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.374 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:13:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:13:54.375 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.430 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 2ed45397-ad95-4437-a0df-a49849d1d9bf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.431 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404034.4301233, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.431 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Resumed (Lifecycle Event)
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.433 232432 DEBUG nova.compute.manager [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.440 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance running successfully.
Nov 29 08:13:54 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.442 232432 DEBUG nova.virt.libvirt.guest [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.442 232432 DEBUG nova.virt.libvirt.driver [None req-b15bd457-7fc0-4f13-9d0d-1751f032375a 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.487 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.489 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.519 232432 DEBUG nova.compute.manager [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.520 232432 DEBUG oslo_concurrency.lockutils [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.520 232432 DEBUG oslo_concurrency.lockutils [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.520 232432 DEBUG oslo_concurrency.lockutils [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.521 232432 DEBUG nova.compute.manager [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.521 232432 WARNING nova.compute.manager [req-f6718c08-b8a8-481d-8620-ffa7e8079a83 req-fa19debb-a55f-48ef-b5c5-d3b57e169bf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state resize_finish.
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.539 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.540 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404034.43281, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.540 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Started (Lifecycle Event)
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.562 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:54 compute-2 nova_compute[232428]: 2025-11-29 08:13:54.564 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:54 compute-2 podman[281487]: 2025-11-29 08:13:54.786505208 +0000 UTC m=+0.050838026 container create 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:13:54 compute-2 systemd[1]: Started libpod-conmon-0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6.scope.
Nov 29 08:13:54 compute-2 podman[281487]: 2025-11-29 08:13:54.757495364 +0000 UTC m=+0.021828202 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:13:54 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:13:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09e38a6e647684b38d1cc00f862a96961877f837be443ff72bc07eb964b11c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:54 compute-2 podman[281487]: 2025-11-29 08:13:54.887299079 +0000 UTC m=+0.151631927 container init 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:13:54 compute-2 podman[281487]: 2025-11-29 08:13:54.893157132 +0000 UTC m=+0.157489960 container start 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:13:54 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [NOTICE]   (281506) : New worker (281508) forked
Nov 29 08:13:54 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [NOTICE]   (281506) : Loading success.
Nov 29 08:13:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:55.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:55 compute-2 nova_compute[232428]: 2025-11-29 08:13:55.193 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:55 compute-2 nova_compute[232428]: 2025-11-29 08:13:55.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:55 compute-2 nova_compute[232428]: 2025-11-29 08:13:55.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:13:55 compute-2 nova_compute[232428]: 2025-11-29 08:13:55.226 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:55 compute-2 ceph-mon[77138]: pgmap v2218: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 29 08:13:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:55.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.190 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2413399167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.658 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.659 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.659 232432 DEBUG nova.compute.manager [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.718 232432 DEBUG nova.compute.manager [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.719 232432 DEBUG oslo_concurrency.lockutils [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.719 232432 DEBUG oslo_concurrency.lockutils [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.719 232432 DEBUG oslo_concurrency.lockutils [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.720 232432 DEBUG nova.compute.manager [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:56 compute-2 nova_compute[232428]: 2025-11-29 08:13:56.720 232432 WARNING nova.compute.manager [req-381f128b-0e54-4388-ac72-2f1d19f97e52 req-af8ca1a2-e64f-4cf7-8c93-23ca6dc84e8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state resized and task_state None.
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.026 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.027 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.027 232432 DEBUG nova.network.neutron [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.027 232432 DEBUG nova.objects.instance [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'info_cache' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:57 compute-2 nova_compute[232428]: 2025-11-29 08:13:57.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:13:57 compute-2 ceph-mon[77138]: pgmap v2219: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Nov 29 08:13:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4279235389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:13:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:13:58 compute-2 nova_compute[232428]: 2025-11-29 08:13:58.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:58 compute-2 nova_compute[232428]: 2025-11-29 08:13:58.690 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:13:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.223 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.224 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.225 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.544 232432 DEBUG nova.network.neutron [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [{"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:59 compute-2 ceph-mon[77138]: pgmap v2220: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 162 op/s
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.569 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-2ed45397-ad95-4437-a0df-a49849d1d9bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.570 232432 DEBUG nova.objects.instance [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:13:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:13:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:59.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:13:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4064118393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.706 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.708 232432 DEBUG nova.storage.rbd_utils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] removing snapshot(nova-resize) on rbd image(2ed45397-ad95-4437-a0df-a49849d1d9bf_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.813 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.813 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.819 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:59 compute-2 nova_compute[232428]: 2025-11-29 08:13:59.819 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.026 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.027 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3978MB free_disk=20.784805297851562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.027 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.028 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.123 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Applying migration context for instance 2ed45397-ad95-4437-a0df-a49849d1d9bf as it has an incoming, in-progress migration 52467741-49f3-43d6-8911-ec5300b8359f. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.124 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating resource usage from migration 52467741-49f3-43d6-8911-ec5300b8359f
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.165 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 35f7492d-e1a0-4369-bf32-ba8fa094036a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.165 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration 52467741-49f3-43d6-8911-ec5300b8359f is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.165 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2ed45397-ad95-4437-a0df-a49849d1d9bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.166 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.166 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.271 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4064118393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/395582412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 08:14:00 compute-2 ovn_controller[134375]: 2025-11-29T08:14:00Z|00528|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.655 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2199506478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.835 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.844 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.864 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.911 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.912 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:00 compute-2 nova_compute[232428]: 2025-11-29 08:14:00.913 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.163 232432 DEBUG oslo_concurrency.processutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:01 compute-2 ceph-mon[77138]: pgmap v2221: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 KiB/s wr, 166 op/s
Nov 29 08:14:01 compute-2 ceph-mon[77138]: osdmap e286: 3 total, 3 up, 3 in
Nov 29 08:14:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2199506478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912798639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.690 232432 DEBUG oslo_concurrency.processutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.697 232432 DEBUG nova.compute.provider_tree [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.720 232432 DEBUG nova.scheduler.client.report [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:01 compute-2 nova_compute[232428]: 2025-11-29 08:14:01.854 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:02 compute-2 nova_compute[232428]: 2025-11-29 08:14:02.132 232432 INFO nova.scheduler.client.report [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Deleted allocation for migration 52467741-49f3-43d6-8911-ec5300b8359f
Nov 29 08:14:02 compute-2 nova_compute[232428]: 2025-11-29 08:14:02.240 232432 DEBUG oslo_concurrency.lockutils [None req-97397f78-c996-480d-af00-0c53524f9c26 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3912798639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:03.320 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:03.322 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:03 compute-2 nova_compute[232428]: 2025-11-29 08:14:03.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:03 compute-2 ceph-mon[77138]: pgmap v2223: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 KiB/s wr, 168 op/s
Nov 29 08:14:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4082016165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:04 compute-2 podman[281626]: 2025-11-29 08:14:04.785864807 +0000 UTC m=+0.179191747 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 29 08:14:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435786847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435786847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.217 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:06 compute-2 ceph-mon[77138]: pgmap v2224: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 KiB/s wr, 134 op/s
Nov 29 08:14:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/435786847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/435786847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.993 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.994 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.994 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.995 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.995 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.997 232432 INFO nova.compute.manager [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Terminating instance
Nov 29 08:14:06 compute-2 nova_compute[232428]: 2025-11-29 08:14:06.998 232432 DEBUG nova.compute.manager [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:14:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:07.575 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:07 compute-2 nova_compute[232428]: 2025-11-29 08:14:07.576 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:07.578 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:14:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:07 compute-2 ceph-mon[77138]: pgmap v2225: 305 pgs: 305 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 153 op/s
Nov 29 08:14:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2825660439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:08 compute-2 kernel: tap25f618be-49 (unregistering): left promiscuous mode
Nov 29 08:14:08 compute-2 NetworkManager[48993]: <info>  [1764404048.0273] device (tap25f618be-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 ovn_controller[134375]: 2025-11-29T08:14:08Z|00529|binding|INFO|Releasing lport 25f618be-492d-4ac9-9c9c-6583e0402572 from this chassis (sb_readonly=0)
Nov 29 08:14:08 compute-2 ovn_controller[134375]: 2025-11-29T08:14:08Z|00530|binding|INFO|Setting lport 25f618be-492d-4ac9-9c9c-6583e0402572 down in Southbound
Nov 29 08:14:08 compute-2 ovn_controller[134375]: 2025-11-29T08:14:08Z|00531|binding|INFO|Removing iface tap25f618be-49 ovn-installed in OVS
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.065 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:62:3f 10.100.0.10'], port_security=['fa:16:3e:e8:62:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2ed45397-ad95-4437-a0df-a49849d1d9bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '12', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=25f618be-492d-4ac9-9c9c-6583e0402572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.067 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 25f618be-492d-4ac9-9c9c-6583e0402572 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.069 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.071 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ad872298-699e-4d71-acf2-8e35222099bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.071 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.082 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 08:14:08 compute-2 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000064.scope: Consumed 12.410s CPU time.
Nov 29 08:14:08 compute-2 systemd-machined[194747]: Machine qemu-50-instance-00000064 terminated.
Nov 29 08:14:08 compute-2 NetworkManager[48993]: <info>  [1764404048.2235] manager: (tap25f618be-49): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.244 232432 INFO nova.virt.libvirt.driver [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Instance destroyed successfully.
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.245 232432 DEBUG nova.objects.instance [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 2ed45397-ad95-4437-a0df-a49849d1d9bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:08 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [NOTICE]   (281506) : haproxy version is 2.8.14-c23fe91
Nov 29 08:14:08 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [NOTICE]   (281506) : path to executable is /usr/sbin/haproxy
Nov 29 08:14:08 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [WARNING]  (281506) : Exiting Master process...
Nov 29 08:14:08 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [ALERT]    (281506) : Current worker (281508) exited with code 143 (Terminated)
Nov 29 08:14:08 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[281502]: [WARNING]  (281506) : All workers exited. Exiting... (0)
Nov 29 08:14:08 compute-2 systemd[1]: libpod-0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6.scope: Deactivated successfully.
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.293 232432 DEBUG nova.virt.libvirt.vif [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1161621840',display_name='tempest-ServerActionsTestJSON-server-1161621840',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1161621840',id=100,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-ur9w53wo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=2ed45397-ad95-4437-a0df-a49849d1d9bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.294 232432 DEBUG nova.network.os_vif_util [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "25f618be-492d-4ac9-9c9c-6583e0402572", "address": "fa:16:3e:e8:62:3f", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f618be-49", "ovs_interfaceid": "25f618be-492d-4ac9-9c9c-6583e0402572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.295 232432 DEBUG nova.network.os_vif_util [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.296 232432 DEBUG os_vif [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:14:08 compute-2 podman[281677]: 2025-11-29 08:14:08.299617907 +0000 UTC m=+0.085939120 container died 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.300 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.301 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f618be-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.310 232432 INFO os_vif [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:62:3f,bridge_name='br-int',has_traffic_filtering=True,id=25f618be-492d-4ac9-9c9c-6583e0402572,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f618be-49')
Nov 29 08:14:08 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6-userdata-shm.mount: Deactivated successfully.
Nov 29 08:14:08 compute-2 systemd[1]: var-lib-containers-storage-overlay-e09e38a6e647684b38d1cc00f862a96961877f837be443ff72bc07eb964b11c4-merged.mount: Deactivated successfully.
Nov 29 08:14:08 compute-2 podman[281677]: 2025-11-29 08:14:08.54544472 +0000 UTC m=+0.331765933 container cleanup 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:14:08 compute-2 systemd[1]: libpod-conmon-0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6.scope: Deactivated successfully.
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.617 232432 DEBUG nova.compute.manager [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.618 232432 DEBUG oslo_concurrency.lockutils [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.618 232432 DEBUG oslo_concurrency.lockutils [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.619 232432 DEBUG oslo_concurrency.lockutils [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.619 232432 DEBUG nova.compute.manager [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.620 232432 DEBUG nova.compute.manager [req-52005f20-a118-485e-a19e-f0667506fef9 req-b2434b3d-9f94-43c9-abd2-41667c4744ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-unplugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:14:08 compute-2 podman[281731]: 2025-11-29 08:14:08.836894503 +0000 UTC m=+0.228362358 container remove 0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.847 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f3a514-caf7-4dc2-ad89-4370f71f65de]: (4, ('Sat Nov 29 08:14:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6)\n0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6\nSat Nov 29 08:14:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6)\n0045d1d1bce4c16391770c27c3e56eb87172d1915e5900a88ea0e049bd4d44f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.849 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[66775ad2-16cd-4c26-aa53-0949d1c0a546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.851 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:14:08 compute-2 nova_compute[232428]: 2025-11-29 08:14:08.885 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.892 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[78d44de7-6ca5-44b1-b8d1-3268d1a69f7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.911 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf575b9-f25a-4613-816b-e582be6f3364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.913 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df85ba5a-187c-4e83-81e4-f2ad343765bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.942 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[be117733-ceaa-488e-8334-03d6f82c9318]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700027, 'reachable_time': 18357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281747, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.948 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:14:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:08.949 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[9a459351-0312-47f4-b2d3-259975ccb604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:08 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:14:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:09.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:10 compute-2 ceph-mon[77138]: pgmap v2226: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 26 KiB/s wr, 156 op/s
Nov 29 08:14:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.747 232432 DEBUG nova.compute.manager [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.748 232432 DEBUG oslo_concurrency.lockutils [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.748 232432 DEBUG oslo_concurrency.lockutils [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.748 232432 DEBUG oslo_concurrency.lockutils [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.749 232432 DEBUG nova.compute.manager [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] No waiting events found dispatching network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.749 232432 WARNING nova.compute.manager [req-ee9d8f8b-6ae2-4d4d-a89f-05501165962c req-9deb4ccb-5733-4325-9c84-47feac7183f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received unexpected event network-vif-plugged-25f618be-492d-4ac9-9c9c-6583e0402572 for instance with vm_state active and task_state deleting.
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.894 232432 INFO nova.virt.libvirt.driver [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Deleting instance files /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf_del
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.896 232432 INFO nova.virt.libvirt.driver [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Deletion of /var/lib/nova/instances/2ed45397-ad95-4437-a0df-a49849d1d9bf_del complete
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.974 232432 INFO nova.compute.manager [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Took 3.98 seconds to destroy the instance on the hypervisor.
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.975 232432 DEBUG oslo.service.loopingcall [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.976 232432 DEBUG nova.compute.manager [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:14:10 compute-2 nova_compute[232428]: 2025-11-29 08:14:10.976 232432 DEBUG nova.network.neutron [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:14:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:11 compute-2 ceph-mon[77138]: pgmap v2227: 305 pgs: 305 active+clean; 299 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 171 KiB/s rd, 35 KiB/s wr, 120 op/s
Nov 29 08:14:11 compute-2 nova_compute[232428]: 2025-11-29 08:14:11.218 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:11 compute-2 sudo[281750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:11 compute-2 sudo[281750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:11 compute-2 sudo[281750]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:11 compute-2 sudo[281775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:11 compute-2 sudo[281775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:11 compute-2 sudo[281775]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:11 compute-2 ovn_controller[134375]: 2025-11-29T08:14:11Z|00532|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 08:14:11 compute-2 nova_compute[232428]: 2025-11-29 08:14:11.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.510 232432 DEBUG nova.network.neutron [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.572 232432 INFO nova.compute.manager [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Took 1.60 seconds to deallocate network for instance.
Nov 29 08:14:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:12.581 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.699 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.700 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.708 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.748 232432 INFO nova.scheduler.client.report [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Deleted allocations for instance 2ed45397-ad95-4437-a0df-a49849d1d9bf
Nov 29 08:14:12 compute-2 nova_compute[232428]: 2025-11-29 08:14:12.852 232432 DEBUG oslo_concurrency.lockutils [None req-045533c7-3f4f-4f5a-adb4-8aad6593069d 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "2ed45397-ad95-4437-a0df-a49849d1d9bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:13 compute-2 nova_compute[232428]: 2025-11-29 08:14:13.016 232432 DEBUG nova.compute.manager [req-6763a4c2-7ad7-4e46-b2f5-c0b37314ace4 req-428f2105-7db1-451e-9a6a-e10cf289f264 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Received event network-vif-deleted-25f618be-492d-4ac9-9c9c-6583e0402572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:13 compute-2 nova_compute[232428]: 2025-11-29 08:14:13.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:13.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:13 compute-2 ceph-mon[77138]: pgmap v2228: 305 pgs: 305 active+clean; 299 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 147 KiB/s rd, 30 KiB/s wr, 103 op/s
Nov 29 08:14:13 compute-2 ceph-mon[77138]: osdmap e287: 3 total, 3 up, 3 in
Nov 29 08:14:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457809496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1457809496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:15.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 08:14:15 compute-2 ceph-mon[77138]: pgmap v2230: 305 pgs: 305 active+clean; 279 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 168 KiB/s rd, 33 KiB/s wr, 116 op/s
Nov 29 08:14:16 compute-2 nova_compute[232428]: 2025-11-29 08:14:16.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:17 compute-2 ceph-mon[77138]: osdmap e288: 3 total, 3 up, 3 in
Nov 29 08:14:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2226767345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:17.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.308 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:18 compute-2 ceph-mon[77138]: pgmap v2232: 305 pgs: 305 active+clean; 279 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 13 KiB/s wr, 54 op/s
Nov 29 08:14:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2518392122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3264506246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.841 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.842 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.864 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.987 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.987 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.995 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:14:18 compute-2 nova_compute[232428]: 2025-11-29 08:14:18.995 232432 INFO nova.compute.claims [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.175 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/548914338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.605 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.612 232432 DEBUG nova.compute.provider_tree [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:19.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.650 232432 DEBUG nova.scheduler.client.report [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.692 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.693 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.757 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.757 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.785 232432 INFO nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:14:19 compute-2 nova_compute[232428]: 2025-11-29 08:14:19.805 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.007 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.009 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.010 232432 INFO nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Creating image(s)
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.045 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:20 compute-2 ceph-mon[77138]: pgmap v2233: 305 pgs: 305 active+clean; 279 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 KiB/s wr, 13 op/s
Nov 29 08:14:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/548914338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.084 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.117 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.122 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.161 232432 DEBUG nova.policy [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '661b6600a32b40d8a48db16cb71c7e75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd72b5448be0e463f80dca118feb42d3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.215 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.216 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.217 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.217 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.248 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:20 compute-2 nova_compute[232428]: 2025-11-29 08:14:20.252 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:21.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:21 compute-2 podman[281921]: 2025-11-29 08:14:21.275345364 +0000 UTC m=+0.052714564 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:14:21 compute-2 nova_compute[232428]: 2025-11-29 08:14:21.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:21 compute-2 nova_compute[232428]: 2025-11-29 08:14:21.532 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Successfully created port: 7abc93d6-b92a-4d55-849e-3a607a8de2e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:14:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:22 compute-2 ceph-mon[77138]: pgmap v2234: 305 pgs: 305 active+clean; 304 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 1.7 MiB/s wr, 72 op/s
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.741 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Successfully updated port: 7abc93d6-b92a-4d55-849e-3a607a8de2e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.760 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.761 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.761 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.816 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:22 compute-2 nova_compute[232428]: 2025-11-29 08:14:22.897 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] resizing rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.028 232432 DEBUG nova.compute.manager [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-changed-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.028 232432 DEBUG nova.compute.manager [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Refreshing instance network info cache due to event network-changed-7abc93d6-b92a-4d55-849e-3a607a8de2e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.029 232432 DEBUG oslo_concurrency.lockutils [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.034 232432 DEBUG nova.objects.instance [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.062 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.063 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Ensure instance console log exists: /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.063 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.064 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.064 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.241 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404048.2393992, 2ed45397-ad95-4437-a0df-a49849d1d9bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.241 232432 INFO nova.compute.manager [-] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] VM Stopped (Lifecycle Event)
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.266 232432 DEBUG nova.compute.manager [None req-d81431cb-0832-4aea-9dff-fef2eac213ea - - - - - -] [instance: 2ed45397-ad95-4437-a0df-a49849d1d9bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:23 compute-2 nova_compute[232428]: 2025-11-29 08:14:23.411 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:14:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:23 compute-2 podman[282012]: 2025-11-29 08:14:23.662172209 +0000 UTC m=+0.071532741 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:14:23 compute-2 ceph-mon[77138]: pgmap v2235: 305 pgs: 305 active+clean; 304 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.188 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.658 232432 DEBUG nova.network.neutron [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.686 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.686 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance network_info: |[{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.687 232432 DEBUG oslo_concurrency.lockutils [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.687 232432 DEBUG nova.network.neutron [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Refreshing network info cache for port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.690 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start _get_guest_xml network_info=[{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.695 232432 WARNING nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.701 232432 DEBUG nova.virt.libvirt.host [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.703 232432 DEBUG nova.virt.libvirt.host [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.709 232432 DEBUG nova.virt.libvirt.host [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.709 232432 DEBUG nova.virt.libvirt.host [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.710 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.711 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.711 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.711 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.712 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.712 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.712 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.713 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.713 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.713 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.713 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.714 232432 DEBUG nova.virt.hardware [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:14:24 compute-2 nova_compute[232428]: 2025-11-29 08:14:24.716 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3153978743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/439354160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.203 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.234 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.238 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2589819633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.687 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.689 232432 DEBUG nova.virt.libvirt.vif [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.689 232432 DEBUG nova.network.os_vif_util [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.690 232432 DEBUG nova.network.os_vif_util [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.691 232432 DEBUG nova.objects.instance [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.704 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <uuid>4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</uuid>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <name>instance-00000071</name>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1739027816</nova:name>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:14:24</nova:creationTime>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <nova:port uuid="7abc93d6-b92a-4d55-849e-3a607a8de2e4">
Nov 29 08:14:25 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <system>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="serial">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="uuid">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </system>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <os>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </os>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <features>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </features>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk">
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config">
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:14:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e2:6a:60"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <target dev="tap7abc93d6-b9"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/console.log" append="off"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <video>
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </video>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:14:25 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:14:25 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:14:25 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:14:25 compute-2 nova_compute[232428]: </domain>
Nov 29 08:14:25 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.707 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Preparing to wait for external event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.708 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.708 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.709 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.710 232432 DEBUG nova.virt.libvirt.vif [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.710 232432 DEBUG nova.network.os_vif_util [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.711 232432 DEBUG nova.network.os_vif_util [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.711 232432 DEBUG os_vif [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.715 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.717 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.717 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.721 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.721 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7abc93d6-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.722 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7abc93d6-b9, col_values=(('external_ids', {'iface-id': '7abc93d6-b92a-4d55-849e-3a607a8de2e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:6a:60', 'vm-uuid': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.723 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:25 compute-2 NetworkManager[48993]: <info>  [1764404065.7245] manager: (tap7abc93d6-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.726 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.730 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.731 232432 INFO os_vif [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:14:25 compute-2 ceph-mon[77138]: pgmap v2236: 305 pgs: 305 active+clean; 349 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 3.5 MiB/s wr, 88 op/s
Nov 29 08:14:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3662821542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/439354160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2589819633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.787 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.787 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.788 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:e2:6a:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.788 232432 INFO nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Using config drive
Nov 29 08:14:25 compute-2 nova_compute[232428]: 2025-11-29 08:14:25.815 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.263 232432 DEBUG nova.network.neutron [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updated VIF entry in instance network info cache for port 7abc93d6-b92a-4d55-849e-3a607a8de2e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.263 232432 DEBUG nova.network.neutron [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.284 232432 DEBUG oslo_concurrency.lockutils [req-004d431c-c33d-4843-8146-254e4939f9e2 req-2db0ef05-7099-4797-afc0-fde71fff1f4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.292 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.303 232432 INFO nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Creating config drive at /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.310 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl20ui2yv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.454 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl20ui2yv" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.487 232432 DEBUG nova.storage.rbd_utils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.491 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.656 232432 DEBUG oslo_concurrency.processutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.658 232432 INFO nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Deleting local config drive /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/disk.config because it was imported into RBD.
Nov 29 08:14:26 compute-2 kernel: tap7abc93d6-b9: entered promiscuous mode
Nov 29 08:14:26 compute-2 NetworkManager[48993]: <info>  [1764404066.7219] manager: (tap7abc93d6-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 08:14:26 compute-2 ovn_controller[134375]: 2025-11-29T08:14:26Z|00533|binding|INFO|Claiming lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 for this chassis.
Nov 29 08:14:26 compute-2 ovn_controller[134375]: 2025-11-29T08:14:26Z|00534|binding|INFO|7abc93d6-b92a-4d55-849e-3a607a8de2e4: Claiming fa:16:3e:e2:6a:60 10.100.0.9
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.729 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.730 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.732 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:14:26 compute-2 ovn_controller[134375]: 2025-11-29T08:14:26Z|00535|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 ovn-installed in OVS
Nov 29 08:14:26 compute-2 ovn_controller[134375]: 2025-11-29T08:14:26Z|00536|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 up in Southbound
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.749 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f50f90ac-e79b-4607-8a2f-e4f9b43abc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.750 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:14:26 compute-2 nova_compute[232428]: 2025-11-29 08:14:26.753 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.753 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.753 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10f26006-d883-4e2b-a864-e25a3ce82394]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.756 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc3de54-262b-4d75-84e6-77d7db49a95f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 systemd-machined[194747]: New machine qemu-51-instance-00000071.
Nov 29 08:14:26 compute-2 systemd[1]: Started Virtual Machine qemu-51-instance-00000071.
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.772 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[55ab15fd-9a78-4797-ae14-5d9d5d44a671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 systemd-udevd[282171]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:14:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/950357901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[468eecf8-23e6-498d-8561-938e5b8248e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 NetworkManager[48993]: <info>  [1764404066.7997] device (tap7abc93d6-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:14:26 compute-2 NetworkManager[48993]: <info>  [1764404066.8009] device (tap7abc93d6-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.836 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1781fd-c102-4a73-bb50-b98ef0a197b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 systemd-udevd[282175]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:14:26 compute-2 NetworkManager[48993]: <info>  [1764404066.8807] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.880 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8baee92f-ea51-4a5f-a049-7881d6275cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.921 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[831f43f4-ac32-427a-89e3-05c5fa64092f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.926 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3b4569-b250-4a3c-bb51-4b33455a55e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 NetworkManager[48993]: <info>  [1764404066.9535] device (tap988c10fa-90): carrier: link connected
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.961 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9dc9ca-0614-413a-b712-00f1354b12f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.980 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[496142dd-b87b-473b-867c-4847f972252b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703322, 'reachable_time': 28310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282202, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:26.999 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43d33662-5cc8-46b5-ada9-88ac8d1c4eda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703322, 'tstamp': 703322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282203, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.029 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5d4b6e-a4bc-4fde-8afb-43330b207050]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703322, 'reachable_time': 28310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282204, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.043 232432 DEBUG nova.compute.manager [req-d5b9f460-eb64-4202-a3bb-f8e5212044f4 req-7e8d7b3d-eb9d-4c77-8b5f-7ce6286fe13a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.044 232432 DEBUG oslo_concurrency.lockutils [req-d5b9f460-eb64-4202-a3bb-f8e5212044f4 req-7e8d7b3d-eb9d-4c77-8b5f-7ce6286fe13a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.044 232432 DEBUG oslo_concurrency.lockutils [req-d5b9f460-eb64-4202-a3bb-f8e5212044f4 req-7e8d7b3d-eb9d-4c77-8b5f-7ce6286fe13a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.044 232432 DEBUG oslo_concurrency.lockutils [req-d5b9f460-eb64-4202-a3bb-f8e5212044f4 req-7e8d7b3d-eb9d-4c77-8b5f-7ce6286fe13a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.044 232432 DEBUG nova.compute.manager [req-d5b9f460-eb64-4202-a3bb-f8e5212044f4 req-7e8d7b3d-eb9d-4c77-8b5f-7ce6286fe13a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Processing event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.071 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fbac504c-5e64-4bf0-a3a1-69c4a3a1c315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.140 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cab9c97-b1be-460a-bcc5-688aa39384ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:27 compute-2 NetworkManager[48993]: <info>  [1764404067.1449] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 08:14:27 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.148 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:27 compute-2 ovn_controller[134375]: 2025-11-29T08:14:27Z|00537|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.165 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.166 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.167 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f6020563-64bd-4f3d-b78f-bf2415896319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.168 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:14:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:27.169 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:14:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:27.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:27 compute-2 podman[282249]: 2025-11-29 08:14:27.600796992 +0000 UTC m=+0.111179876 container create a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:14:27 compute-2 podman[282249]: 2025-11-29 08:14:27.526215718 +0000 UTC m=+0.036598602 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:14:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:27 compute-2 systemd[1]: Started libpod-conmon-a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619.scope.
Nov 29 08:14:27 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.678 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404067.6776378, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.680 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Started (Lifecycle Event)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.684 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:14:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85155341ab52d2e11a0b4965f1529fbc65db71d9fb747d0204d50330ec0a4804/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.690 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.695 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance spawned successfully.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.696 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:14:27 compute-2 podman[282249]: 2025-11-29 08:14:27.700484949 +0000 UTC m=+0.210867823 container init a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.705 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:27 compute-2 podman[282249]: 2025-11-29 08:14:27.706834917 +0000 UTC m=+0.217217771 container start a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.708 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.721 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.722 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.723 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.723 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.724 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.725 232432 DEBUG nova.virt.libvirt.driver [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:14:27 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [NOTICE]   (282297) : New worker (282299) forked
Nov 29 08:14:27 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [NOTICE]   (282297) : Loading success.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.757 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.758 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404067.6794772, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.759 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Paused (Lifecycle Event)
Nov 29 08:14:27 compute-2 ceph-mon[77138]: pgmap v2237: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 166 op/s
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.806 232432 INFO nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Took 7.80 seconds to spawn the instance on the hypervisor.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.807 232432 DEBUG nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.811 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.820 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404067.6895926, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.821 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Resumed (Lifecycle Event)
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.853 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.862 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.897 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.901 232432 INFO nova.compute.manager [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Took 8.96 seconds to build instance.
Nov 29 08:14:27 compute-2 nova_compute[232428]: 2025-11-29 08:14:27.938 232432 DEBUG oslo_concurrency.lockutils [None req-bfc61e64-df87-4bf0-b287-f4f0d8c9692b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:28 compute-2 nova_compute[232428]: 2025-11-29 08:14:28.020 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3635728905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3635728905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.509 232432 DEBUG nova.compute.manager [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.511 232432 DEBUG oslo_concurrency.lockutils [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.514 232432 DEBUG oslo_concurrency.lockutils [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.515 232432 DEBUG oslo_concurrency.lockutils [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.516 232432 DEBUG nova.compute.manager [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:29 compute-2 nova_compute[232428]: 2025-11-29 08:14:29.516 232432 WARNING nova.compute.manager [req-9c585eaf-7cde-4c8d-8ade-ed9b63ebe1f3 req-b5865ac6-fbf0-459d-b751-611da6109e3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state active and task_state None.
Nov 29 08:14:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:29.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:29 compute-2 ceph-mon[77138]: pgmap v2238: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 146 op/s
Nov 29 08:14:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.723 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.776 232432 DEBUG nova.compute.manager [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-changed-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.777 232432 DEBUG nova.compute.manager [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Refreshing instance network info cache due to event network-changed-7abc93d6-b92a-4d55-849e-3a607a8de2e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.780 232432 DEBUG oslo_concurrency.lockutils [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.780 232432 DEBUG oslo_concurrency.lockutils [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:30 compute-2 nova_compute[232428]: 2025-11-29 08:14:30.781 232432 DEBUG nova.network.neutron [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Refreshing network info cache for port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:14:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1557549871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:31.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:31 compute-2 nova_compute[232428]: 2025-11-29 08:14:31.295 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:31 compute-2 sudo[282309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:31 compute-2 sudo[282309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:31 compute-2 sudo[282309]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:31 compute-2 sudo[282334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:31 compute-2 sudo[282334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:31 compute-2 sudo[282334]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:31.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 08:14:31 compute-2 ceph-mon[77138]: pgmap v2239: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Nov 29 08:14:32 compute-2 nova_compute[232428]: 2025-11-29 08:14:32.668 232432 DEBUG nova.network.neutron [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updated VIF entry in instance network info cache for port 7abc93d6-b92a-4d55-849e-3a607a8de2e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:14:32 compute-2 nova_compute[232428]: 2025-11-29 08:14:32.669 232432 DEBUG nova.network.neutron [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:32 compute-2 nova_compute[232428]: 2025-11-29 08:14:32.690 232432 DEBUG oslo_concurrency.lockutils [req-580976de-2e91-48f6-99a9-b54e1033c24c req-c7d8bc53-51db-4d69-b907-5154b7f65981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:32 compute-2 ceph-mon[77138]: osdmap e289: 3 total, 3 up, 3 in
Nov 29 08:14:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1211497857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.005000158s ======
Nov 29 08:14:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:33.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000158s
Nov 29 08:14:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:33 compute-2 ceph-mon[77138]: pgmap v2241: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Nov 29 08:14:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/587050811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1971257752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:35.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:35 compute-2 nova_compute[232428]: 2025-11-29 08:14:35.726 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:35 compute-2 podman[282361]: 2025-11-29 08:14:35.744289995 +0000 UTC m=+0.139684534 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:35 compute-2 ceph-mon[77138]: pgmap v2242: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 820 KiB/s wr, 267 op/s
Nov 29 08:14:36 compute-2 nova_compute[232428]: 2025-11-29 08:14:36.298 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:37.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 08:14:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:37 compute-2 ceph-mon[77138]: pgmap v2243: 305 pgs: 305 active+clean; 413 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.7 MiB/s wr, 220 op/s
Nov 29 08:14:37 compute-2 ceph-mon[77138]: osdmap e290: 3 total, 3 up, 3 in
Nov 29 08:14:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:39.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:39.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:40 compute-2 ceph-mon[77138]: pgmap v2245: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Nov 29 08:14:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3784317977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:40 compute-2 nova_compute[232428]: 2025-11-29 08:14:40.728 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:41.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1395427114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:41 compute-2 nova_compute[232428]: 2025-11-29 08:14:41.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:42 compute-2 ceph-mon[77138]: pgmap v2246: 305 pgs: 305 active+clean; 366 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Nov 29 08:14:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2928880347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:43.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:43 compute-2 ceph-mon[77138]: pgmap v2247: 305 pgs: 305 active+clean; 366 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.8 MiB/s wr, 261 op/s
Nov 29 08:14:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:43 compute-2 ovn_controller[134375]: 2025-11-29T08:14:43Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:6a:60 10.100.0.9
Nov 29 08:14:43 compute-2 ovn_controller[134375]: 2025-11-29T08:14:43Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:6a:60 10.100.0.9
Nov 29 08:14:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:45 compute-2 ceph-mon[77138]: pgmap v2248: 305 pgs: 305 active+clean; 392 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.1 MiB/s wr, 269 op/s
Nov 29 08:14:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:45 compute-2 nova_compute[232428]: 2025-11-29 08:14:45.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:45 compute-2 nova_compute[232428]: 2025-11-29 08:14:45.916 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:46 compute-2 nova_compute[232428]: 2025-11-29 08:14:46.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:46 compute-2 nova_compute[232428]: 2025-11-29 08:14:46.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:47 compute-2 nova_compute[232428]: 2025-11-29 08:14:47.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:47 compute-2 ceph-mon[77138]: pgmap v2249: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.7 MiB/s wr, 380 op/s
Nov 29 08:14:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2785337149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:48 compute-2 sudo[282393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:48 compute-2 sudo[282393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:48 compute-2 sudo[282393]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-2 sudo[282418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:48 compute-2 sudo[282418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:48 compute-2 sudo[282418]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-2 sudo[282443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:48 compute-2 sudo[282443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:48 compute-2 sudo[282443]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-2 sudo[282468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:14:48 compute-2 sudo[282468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:49 compute-2 nova_compute[232428]: 2025-11-29 08:14:49.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:49 compute-2 sudo[282468]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:49 compute-2 ceph-mon[77138]: pgmap v2250: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.0 MiB/s wr, 373 op/s
Nov 29 08:14:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1897348982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/525671678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:50 compute-2 nova_compute[232428]: 2025-11-29 08:14:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:50 compute-2 nova_compute[232428]: 2025-11-29 08:14:50.735 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/544943348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:51.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:51 compute-2 nova_compute[232428]: 2025-11-29 08:14:51.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:51 compute-2 sudo[282525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:51 compute-2 sudo[282525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:51 compute-2 sudo[282525]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:51.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:51 compute-2 podman[282524]: 2025-11-29 08:14:51.686493828 +0000 UTC m=+0.084806785 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 08:14:51 compute-2 sudo[282570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:51 compute-2 sudo[282570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:51 compute-2 sudo[282570]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:51 compute-2 ceph-mon[77138]: pgmap v2251: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.0 MiB/s wr, 347 op/s
Nov 29 08:14:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3882917752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:51 compute-2 nova_compute[232428]: 2025-11-29 08:14:51.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:51.972 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:51.974 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:14:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1868820674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2406103316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:53 compute-2 nova_compute[232428]: 2025-11-29 08:14:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:53 compute-2 nova_compute[232428]: 2025-11-29 08:14:53.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:14:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:53 compute-2 nova_compute[232428]: 2025-11-29 08:14:53.573 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:53 compute-2 nova_compute[232428]: 2025-11-29 08:14:53.574 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:53 compute-2 nova_compute[232428]: 2025-11-29 08:14:53.574 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:14:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:53 compute-2 ceph-mon[77138]: pgmap v2252: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 269 op/s
Nov 29 08:14:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/185540646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2143360591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:54 compute-2 podman[282596]: 2025-11-29 08:14:54.691872602 +0000 UTC m=+0.092780693 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:14:54 compute-2 ovn_controller[134375]: 2025-11-29T08:14:54Z|00538|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 08:14:54 compute-2 ovn_controller[134375]: 2025-11-29T08:14:54Z|00539|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:14:54 compute-2 nova_compute[232428]: 2025-11-29 08:14:54.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:14:54.977 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:55 compute-2 nova_compute[232428]: 2025-11-29 08:14:55.230 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:55 compute-2 nova_compute[232428]: 2025-11-29 08:14:55.252 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:55 compute-2 nova_compute[232428]: 2025-11-29 08:14:55.253 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:14:55 compute-2 nova_compute[232428]: 2025-11-29 08:14:55.254 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:14:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:14:55 compute-2 nova_compute[232428]: 2025-11-29 08:14:55.738 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:56 compute-2 ceph-mon[77138]: pgmap v2253: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 281 op/s
Nov 29 08:14:56 compute-2 nova_compute[232428]: 2025-11-29 08:14:56.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:57 compute-2 sudo[282617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:57 compute-2 sudo[282617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:57 compute-2 sudo[282617]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2679506722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:57 compute-2 ceph-mon[77138]: pgmap v2254: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 228 op/s
Nov 29 08:14:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:14:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:14:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3638998199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:57 compute-2 sudo[282642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:14:57 compute-2 sudo[282642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:57 compute-2 sudo[282642]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:14:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:14:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.265 232432 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.265 232432 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing instance network info cache due to event network-changed-bd853c6d-a3b6-4414-8e4e-24d926fd6692. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.266 232432 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.266 232432 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:59 compute-2 nova_compute[232428]: 2025-11-29 08:14:59.267 232432 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Refreshing network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:14:59 compute-2 ceph-mon[77138]: pgmap v2255: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Nov 29 08:14:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:14:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:14:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:00 compute-2 nova_compute[232428]: 2025-11-29 08:15:00.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:00 compute-2 nova_compute[232428]: 2025-11-29 08:15:00.740 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.230 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/179056350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:01 compute-2 ceph-mon[77138]: pgmap v2256: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Nov 29 08:15:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:01.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.704 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.822 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.824 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.829 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:01 compute-2 nova_compute[232428]: 2025-11-29 08:15:01.829 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:01 compute-2 anacron[34256]: Job `cron.monthly' started
Nov 29 08:15:01 compute-2 anacron[34256]: Job `cron.monthly' terminated
Nov 29 08:15:01 compute-2 anacron[34256]: Normal exit (3 jobs run)
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.153 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.156 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3983MB free_disk=20.78500747680664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.156 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.157 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.291 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 35f7492d-e1a0-4369-bf32-ba8fa094036a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.291 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.292 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.292 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.350 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.394 232432 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated VIF entry in instance network info cache for port bd853c6d-a3b6-4414-8e4e-24d926fd6692. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.396 232432 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.422 232432 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3163182385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.791 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.800 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.833 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.877 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:15:02 compute-2 nova_compute[232428]: 2025-11-29 08:15:02.878 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/179056350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:03.321 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:03.322 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:04 compute-2 ceph-mon[77138]: pgmap v2257: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 849 KiB/s wr, 164 op/s
Nov 29 08:15:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/404797161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3163182385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.603 232432 DEBUG nova.compute.manager [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 08:15:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.723 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.723 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.742 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.747 232432 DEBUG nova.objects.instance [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_requests' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.761 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.762 232432 INFO nova.compute.claims [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.762 232432 DEBUG nova.objects.instance [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.773 232432 DEBUG nova.objects.instance [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.809 232432 INFO nova.compute.resource_tracker [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating resource usage from migration d5e9521b-b26c-49ce-91ea-e2426f6e4989
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.907 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:05 compute-2 nova_compute[232428]: 2025-11-29 08:15:05.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:06 compute-2 ceph-mon[77138]: pgmap v2258: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 849 KiB/s wr, 165 op/s
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/242281138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.387 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.393 232432 DEBUG nova.compute.provider_tree [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.421 232432 DEBUG nova.scheduler.client.report [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.453 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.454 232432 INFO nova.compute.manager [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Migrating
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.507 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.507 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:06 compute-2 nova_compute[232428]: 2025-11-29 08:15:06.509 232432 DEBUG nova.network.neutron [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:06 compute-2 podman[282742]: 2025-11-29 08:15:06.713977557 +0000 UTC m=+0.122807539 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:15:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:07.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/242281138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:07.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:08 compute-2 ceph-mon[77138]: pgmap v2259: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 153 op/s
Nov 29 08:15:08 compute-2 nova_compute[232428]: 2025-11-29 08:15:08.953 232432 DEBUG nova.network.neutron [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:08 compute-2 nova_compute[232428]: 2025-11-29 08:15:08.976 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:09 compute-2 nova_compute[232428]: 2025-11-29 08:15:09.079 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 08:15:09 compute-2 nova_compute[232428]: 2025-11-29 08:15:09.085 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:15:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:09 compute-2 ceph-mon[77138]: pgmap v2260: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 962 KiB/s wr, 148 op/s
Nov 29 08:15:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:10 compute-2 nova_compute[232428]: 2025-11-29 08:15:10.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:11.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.317 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 kernel: tap7abc93d6-b9 (unregistering): left promiscuous mode
Nov 29 08:15:11 compute-2 NetworkManager[48993]: <info>  [1764404111.5850] device (tap7abc93d6-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:15:11 compute-2 ceph-mon[77138]: pgmap v2261: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 174 op/s
Nov 29 08:15:11 compute-2 ovn_controller[134375]: 2025-11-29T08:15:11Z|00540|binding|INFO|Releasing lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 from this chassis (sb_readonly=0)
Nov 29 08:15:11 compute-2 ovn_controller[134375]: 2025-11-29T08:15:11Z|00541|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 down in Southbound
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 ovn_controller[134375]: 2025-11-29T08:15:11Z|00542|binding|INFO|Removing iface tap7abc93d6-b9 ovn-installed in OVS
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.605 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.606 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.608 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.609 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[53f94642-c4a3-4bab-833c-cc48f9d68614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.609 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 29 08:15:11 compute-2 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000071.scope: Consumed 16.849s CPU time.
Nov 29 08:15:11 compute-2 systemd-machined[194747]: Machine qemu-51-instance-00000071 terminated.
Nov 29 08:15:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [NOTICE]   (282297) : haproxy version is 2.8.14-c23fe91
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [NOTICE]   (282297) : path to executable is /usr/sbin/haproxy
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [WARNING]  (282297) : Exiting Master process...
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [WARNING]  (282297) : Exiting Master process...
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [ALERT]    (282297) : Current worker (282299) exited with code 143 (Terminated)
Nov 29 08:15:11 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[282293]: [WARNING]  (282297) : All workers exited. Exiting... (0)
Nov 29 08:15:11 compute-2 systemd[1]: libpod-a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619.scope: Deactivated successfully.
Nov 29 08:15:11 compute-2 podman[282798]: 2025-11-29 08:15:11.745953227 +0000 UTC m=+0.046834421 container died a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:15:11 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619-userdata-shm.mount: Deactivated successfully.
Nov 29 08:15:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-85155341ab52d2e11a0b4965f1529fbc65db71d9fb747d0204d50330ec0a4804-merged.mount: Deactivated successfully.
Nov 29 08:15:11 compute-2 podman[282798]: 2025-11-29 08:15:11.805192023 +0000 UTC m=+0.106073267 container cleanup a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:15:11 compute-2 systemd[1]: libpod-conmon-a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619.scope: Deactivated successfully.
Nov 29 08:15:11 compute-2 sudo[282830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:11 compute-2 sudo[282830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:11 compute-2 sudo[282830]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:11 compute-2 podman[282840]: 2025-11-29 08:15:11.911505347 +0000 UTC m=+0.063930563 container remove a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.920 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d43081-4bb5-4853-8fcf-1276dea393dc]: (4, ('Sat Nov 29 08:15:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619)\na58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619\nSat Nov 29 08:15:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (a58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619)\na58cdb02500e60dcd31bf6ddc495f2699c0bcce1f57f64a98342e3e9a41f7619\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.923 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a386fdf2-eb6a-4ecd-93da-b58dfe4812e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.925 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:11 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 nova_compute[232428]: 2025-11-29 08:15:11.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:11 compute-2 sudo[282877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.964 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3391b107-a793-49c1-86c7-34f158c38e5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:11 compute-2 sudo[282877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:11 compute-2 sudo[282877]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.982 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[380a12e2-e147-451e-a5f5-4c47866dfb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:11.984 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a38e3d-2c32-4629-9773-a4258677a7f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:12.007 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7328444d-d69e-43f7-8bff-b17e4654a8d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703309, 'reachable_time': 15527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282907, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:12.010 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:15:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:12.010 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd901d6-ae84-4ec5-acb4-2c8420ceffd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:12 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.109 232432 INFO nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance shutdown successfully after 3 seconds.
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.118 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance destroyed successfully.
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.120 232432 DEBUG nova.virt.libvirt.vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.121 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.123 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.124 232432 DEBUG os_vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.129 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7abc93d6-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.135 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.138 232432 INFO os_vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.145 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.146 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.535 232432 DEBUG nova.network.neutron [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.725 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.725 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.726 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.746 232432 DEBUG nova.compute.manager [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.747 232432 DEBUG oslo_concurrency.lockutils [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.747 232432 DEBUG oslo_concurrency.lockutils [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.747 232432 DEBUG oslo_concurrency.lockutils [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.747 232432 DEBUG nova.compute.manager [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.747 232432 WARNING nova.compute.manager [req-988a77ce-43cd-4b62-90da-063499d9aad8 req-e8d5a5a7-2a27-4ef7-9254-9ab0c326e208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state active and task_state resize_migrated.
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.921 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.921 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:12 compute-2 nova_compute[232428]: 2025-11-29 08:15:12.922 232432 DEBUG nova.network.neutron [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:13.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:13 compute-2 nova_compute[232428]: 2025-11-29 08:15:13.842 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:14 compute-2 ceph-mon[77138]: pgmap v2262: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.759 232432 DEBUG nova.network.neutron [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.779 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.870 232432 DEBUG nova.compute.manager [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.871 232432 DEBUG oslo_concurrency.lockutils [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.871 232432 DEBUG oslo_concurrency.lockutils [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.872 232432 DEBUG oslo_concurrency.lockutils [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.872 232432 DEBUG nova.compute.manager [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.873 232432 WARNING nova.compute.manager [req-be516278-3013-49d0-aa16-be83feea1739 req-bde23d05-3e1d-465c-a796-b8d9394b4037 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state active and task_state resize_migrated.
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.922 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.925 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.925 232432 INFO nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Creating image(s)
Nov 29 08:15:14 compute-2 nova_compute[232428]: 2025-11-29 08:15:14.974 232432 DEBUG nova.storage.rbd_utils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] creating snapshot(nova-resize) on rbd image(4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:15:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 08:15:15 compute-2 ceph-mon[77138]: pgmap v2263: 305 pgs: 305 active+clean; 513 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.234 232432 DEBUG nova.objects.instance [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.364 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.365 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Ensure instance console log exists: /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.365 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.366 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.366 232432 DEBUG oslo_concurrency.lockutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.369 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start _get_guest_xml network_info=[{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.374 232432 WARNING nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.379 232432 DEBUG nova.virt.libvirt.host [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.380 232432 DEBUG nova.virt.libvirt.host [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.383 232432 DEBUG nova.virt.libvirt.host [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.383 232432 DEBUG nova.virt.libvirt.host [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.384 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.385 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.385 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.386 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.386 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.386 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.386 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.387 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.387 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.387 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.388 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.388 232432 DEBUG nova.virt.hardware [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.388 232432 DEBUG nova.objects.instance [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.423 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1230608790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.931 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:15 compute-2 nova_compute[232428]: 2025-11-29 08:15:15.991 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.319 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 ceph-mon[77138]: osdmap e291: 3 total, 3 up, 3 in
Nov 29 08:15:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1230608790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2538684056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.507 232432 DEBUG oslo_concurrency.processutils [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.510 232432 DEBUG nova.virt.libvirt.vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.510 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.512 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.517 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <uuid>4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</uuid>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <name>instance-00000071</name>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <memory>196608</memory>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1739027816</nova:name>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:15:15</nova:creationTime>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:flavor name="m1.micro">
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:memory>192</nova:memory>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <nova:port uuid="7abc93d6-b92a-4d55-849e-3a607a8de2e4">
Nov 29 08:15:16 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <system>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="serial">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="uuid">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </system>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <os>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </os>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <features>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </features>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk">
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config">
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:16 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e2:6a:60"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <target dev="tap7abc93d6-b9"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/console.log" append="off"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <video>
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </video>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:15:16 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:15:16 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:15:16 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:15:16 compute-2 nova_compute[232428]: </domain>
Nov 29 08:15:16 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.520 232432 DEBUG nova.virt.libvirt.vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.522 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-2021825713-network", "vif_mac": "fa:16:3e:e2:6a:60"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.524 232432 DEBUG nova.network.os_vif_util [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.525 232432 DEBUG os_vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.528 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.529 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.533 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7abc93d6-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.534 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7abc93d6-b9, col_values=(('external_ids', {'iface-id': '7abc93d6-b92a-4d55-849e-3a607a8de2e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:6a:60', 'vm-uuid': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.5381] manager: (tap7abc93d6-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.539 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.545 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.548 232432 INFO os_vif [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.645 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.646 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.646 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:e2:6a:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.646 232432 INFO nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Using config drive
Nov 29 08:15:16 compute-2 kernel: tap7abc93d6-b9: entered promiscuous mode
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.7670] manager: (tap7abc93d6-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 ovn_controller[134375]: 2025-11-29T08:15:16Z|00543|binding|INFO|Claiming lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 for this chassis.
Nov 29 08:15:16 compute-2 ovn_controller[134375]: 2025-11-29T08:15:16Z|00544|binding|INFO|7abc93d6-b92a-4d55-849e-3a607a8de2e4: Claiming fa:16:3e:e2:6a:60 10.100.0.9
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.774 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.775 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.776 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ee6e75-77bd-42e1-84df-1fdd5d869615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.794 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.796 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.796 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[47df7d65-f8de-41ae-9045-ab665b7eda7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.797 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[30541a5e-1b01-4764-8acf-a90d3c15b4af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_controller[134375]: 2025-11-29T08:15:16Z|00545|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 ovn-installed in OVS
Nov 29 08:15:16 compute-2 ovn_controller[134375]: 2025-11-29T08:15:16Z|00546|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 up in Southbound
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.805 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 systemd-machined[194747]: New machine qemu-52-instance-00000071.
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.810 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ca1046-c1a9-4ccc-9ad9-0b1aa33a502f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 nova_compute[232428]: 2025-11-29 08:15:16.811 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:16 compute-2 systemd[1]: Started Virtual Machine qemu-52-instance-00000071.
Nov 29 08:15:16 compute-2 systemd-udevd[283080]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.839 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b873e091-7c2c-499f-b748-ba573f071c06]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.8438] device (tap7abc93d6-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.8458] device (tap7abc93d6-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.876 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4eb67-4fef-4580-b56d-dff0a99b09f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.882 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[84d18f89-1242-470a-a91a-af4b41c4a2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 systemd-udevd[283084]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.8837] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.925 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2821b933-1874-4313-9edf-437dd3d7cca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.928 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf5acc5-b747-44fb-bd77-f38005a681d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 NetworkManager[48993]: <info>  [1764404116.9623] device (tap988c10fa-90): carrier: link connected
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.973 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[51d00191-961c-404c-9e54-8f3634dd3ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:16.998 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5e183a9f-89a0-4b7c-b946-6dcb464fbb4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708323, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283110, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.014 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e8d008-bc0b-47a6-82f3-50dbab778a2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708323, 'tstamp': 708323}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283111, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.025 232432 DEBUG nova.compute.manager [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.027 232432 DEBUG oslo_concurrency.lockutils [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.027 232432 DEBUG oslo_concurrency.lockutils [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.027 232432 DEBUG oslo_concurrency.lockutils [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.028 232432 DEBUG nova.compute.manager [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.028 232432 WARNING nova.compute.manager [req-5aabdbab-d827-4971-9f3f-39102123d557 req-182ee53c-fd22-4822-b4cc-a9c97e0eff6b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state active and task_state resize_finish.
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.035 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[03211e33-593f-4e23-b477-dbee58f502b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708323, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283112, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.072 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dbd085-674d-41bd-8a25-fc3e978c8e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.149 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74269dd0-f133-4cd6-8acc-24b3037d3aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.150 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.150 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.151 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:17 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:15:17 compute-2 NetworkManager[48993]: <info>  [1764404117.1539] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.154 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.156 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.157 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:17 compute-2 ovn_controller[134375]: 2025-11-29T08:15:17Z|00547|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.180 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.181 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.182 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0d2d3d-5b12-4985-9723-cfb69c73806e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.183 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:15:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:17.184 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:15:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.292 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.293 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404117.2915635, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.294 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Resumed (Lifecycle Event)
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.298 232432 DEBUG nova.compute.manager [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.303 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance running successfully.
Nov 29 08:15:17 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.307 232432 DEBUG nova.virt.libvirt.guest [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.307 232432 DEBUG nova.virt.libvirt.driver [None req-67e366a3-0762-44a9-b4f2-92037346a762 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.315 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.320 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.363 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.364 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404117.2946482, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.364 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Started (Lifecycle Event)
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.510 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:17 compute-2 nova_compute[232428]: 2025-11-29 08:15:17.525 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:17 compute-2 ceph-mon[77138]: pgmap v2265: 305 pgs: 305 active+clean; 517 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Nov 29 08:15:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2538684056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:17 compute-2 podman[283184]: 2025-11-29 08:15:17.658134434 +0000 UTC m=+0.085445745 container create 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:15:17 compute-2 systemd[1]: Started libpod-conmon-4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201.scope.
Nov 29 08:15:17 compute-2 podman[283184]: 2025-11-29 08:15:17.618809118 +0000 UTC m=+0.046120509 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:15:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:17 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:15:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95a993fb20cadd2578b972ec54b40069b518027bf28b461e81353a16e061e86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:17 compute-2 podman[283184]: 2025-11-29 08:15:17.763706164 +0000 UTC m=+0.191017495 container init 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:15:17 compute-2 podman[283184]: 2025-11-29 08:15:17.771212388 +0000 UTC m=+0.198523699 container start 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:15:17 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [NOTICE]   (283204) : New worker (283206) forked
Nov 29 08:15:17 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [NOTICE]   (283204) : Loading success.
Nov 29 08:15:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2565780543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.170 232432 DEBUG nova.compute.manager [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.170 232432 DEBUG oslo_concurrency.lockutils [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.171 232432 DEBUG oslo_concurrency.lockutils [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.171 232432 DEBUG oslo_concurrency.lockutils [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.171 232432 DEBUG nova.compute.manager [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.172 232432 WARNING nova.compute.manager [req-f594b2ae-eb83-497e-847d-41a1006e1ed9 req-9c2a5d5f-8cbd-4572-8d00-4625bed7128e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state resized and task_state resize_reverting.
Nov 29 08:15:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.906 232432 DEBUG nova.network.neutron [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.906 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.906 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:19 compute-2 nova_compute[232428]: 2025-11-29 08:15:19.906 232432 DEBUG nova.network.neutron [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:20 compute-2 ceph-mon[77138]: pgmap v2266: 305 pgs: 305 active+clean; 517 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.0 MiB/s wr, 145 op/s
Nov 29 08:15:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/377378725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.460 232432 DEBUG nova.network.neutron [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.481 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 kernel: tap7abc93d6-b9 (unregistering): left promiscuous mode
Nov 29 08:15:21 compute-2 NetworkManager[48993]: <info>  [1764404121.6535] device (tap7abc93d6-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:15:21 compute-2 ovn_controller[134375]: 2025-11-29T08:15:21Z|00548|binding|INFO|Releasing lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 from this chassis (sb_readonly=0)
Nov 29 08:15:21 compute-2 ovn_controller[134375]: 2025-11-29T08:15:21Z|00549|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 down in Southbound
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.662 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 ovn_controller[134375]: 2025-11-29T08:15:21Z|00550|binding|INFO|Removing iface tap7abc93d6-b9 ovn-installed in OVS
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.666 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.671 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.672 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.674 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.675 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d451408c-e930-4f38-b9ab-b23989c36e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.676 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 29 08:15:21 compute-2 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000071.scope: Consumed 4.802s CPU time.
Nov 29 08:15:21 compute-2 systemd-machined[194747]: Machine qemu-52-instance-00000071 terminated.
Nov 29 08:15:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:21.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:21 compute-2 podman[283228]: 2025-11-29 08:15:21.792168137 +0000 UTC m=+0.061138507 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [NOTICE]   (283204) : haproxy version is 2.8.14-c23fe91
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [NOTICE]   (283204) : path to executable is /usr/sbin/haproxy
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [WARNING]  (283204) : Exiting Master process...
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [WARNING]  (283204) : Exiting Master process...
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [ALERT]    (283204) : Current worker (283206) exited with code 143 (Terminated)
Nov 29 08:15:21 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283200]: [WARNING]  (283204) : All workers exited. Exiting... (0)
Nov 29 08:15:21 compute-2 systemd[1]: libpod-4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201.scope: Deactivated successfully.
Nov 29 08:15:21 compute-2 podman[283258]: 2025-11-29 08:15:21.834852818 +0000 UTC m=+0.047996948 container died 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201-userdata-shm.mount: Deactivated successfully.
Nov 29 08:15:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-f95a993fb20cadd2578b972ec54b40069b518027bf28b461e81353a16e061e86-merged.mount: Deactivated successfully.
Nov 29 08:15:21 compute-2 podman[283258]: 2025-11-29 08:15:21.869506237 +0000 UTC m=+0.082650377 container cleanup 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:15:21 compute-2 systemd[1]: libpod-conmon-4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201.scope: Deactivated successfully.
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.904 232432 DEBUG nova.compute.manager [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.906 232432 DEBUG oslo_concurrency.lockutils [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.906 232432 DEBUG oslo_concurrency.lockutils [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.907 232432 DEBUG oslo_concurrency.lockutils [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.907 232432 DEBUG nova.compute.manager [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.907 232432 WARNING nova.compute.manager [req-c865066c-c0d0-4ac7-aac7-0f12d51adb71 req-ca7f2376-94b4-4daf-b3cb-0d7f3fcebefe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state resized and task_state resize_reverting.
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.944 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance destroyed successfully.
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.944 232432 DEBUG nova.objects.instance [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:21 compute-2 podman[283290]: 2025-11-29 08:15:21.956292172 +0000 UTC m=+0.063846301 container remove 4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.960 232432 DEBUG nova.virt.libvirt.vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.961 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.961 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.961 232432 DEBUG os_vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.963 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7abc93d6-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.964 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0731e9-1ece-49fb-a1c2-8543aee59af7]: (4, ('Sat Nov 29 08:15:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201)\n4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201\nSat Nov 29 08:15:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201)\n4c321c7573a359d5e758b3eef37c491778b93321856048c39d0a1ea733a65201\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.967 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fda4e24c-14bb-4121-b025-51e382c6fe2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.968 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.969 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.972 232432 INFO os_vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.976 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.976 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:21 compute-2 nova_compute[232428]: 2025-11-29 08:15:21.988 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:21.991 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7b896f4d-8888-4919-90fa-d05be5df8b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.000 232432 DEBUG nova.objects.instance [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:22.004 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c90aa3d1-864c-408d-ab54-28b602ebca2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:22.005 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[98ae3be3-9853-4927-acf8-1583b44b8044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:22.030 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[33be83ec-aa99-41cd-bbad-494762209280]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708313, 'reachable_time': 19426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283318, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:22 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:15:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:22.034 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:15:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:22.034 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[73b67ecf-4a63-42aa-baeb-ec17dcbd60b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.097 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/497315551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.561 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.571 232432 DEBUG nova.compute.provider_tree [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.590 232432 DEBUG nova.scheduler.client.report [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:22 compute-2 ceph-mon[77138]: pgmap v2267: 305 pgs: 305 active+clean; 549 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 08:15:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3990925621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.680 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.828 232432 INFO nova.compute.manager [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Swapping old allocation on dict_keys(['77f31ad1-818f-4610-8dd1-3fbcd25133f2']) held by migration d5e9521b-b26c-49ce-91ea-e2426f6e4989 for instance
Nov 29 08:15:22 compute-2 nova_compute[232428]: 2025-11-29 08:15:22.880 232432 DEBUG nova.scheduler.client.report [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Overwriting current allocation {'allocations': {'77f31ad1-818f-4610-8dd1-3fbcd25133f2': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 68}}, 'project_id': 'd72b5448be0e463f80dca118feb42d3b', 'user_id': '661b6600a32b40d8a48db16cb71c7e75', 'consumer_generation': 1} on consumer 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Nov 29 08:15:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:23 compute-2 nova_compute[232428]: 2025-11-29 08:15:23.297 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:23 compute-2 nova_compute[232428]: 2025-11-29 08:15:23.298 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:23 compute-2 nova_compute[232428]: 2025-11-29 08:15:23.299 232432 DEBUG nova.network.neutron [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:23.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:24 compute-2 ceph-mon[77138]: pgmap v2268: 305 pgs: 305 active+clean; 549 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 08:15:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/497315551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1172514175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.087 232432 DEBUG nova.compute.manager [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.087 232432 DEBUG oslo_concurrency.lockutils [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.088 232432 DEBUG oslo_concurrency.lockutils [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.088 232432 DEBUG oslo_concurrency.lockutils [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.088 232432 DEBUG nova.compute.manager [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:24 compute-2 nova_compute[232428]: 2025-11-29 08:15:24.089 232432 WARNING nova.compute.manager [req-802d157f-e1dd-4db5-8e0f-125d22ea68b4 req-115771b6-3fec-400f-8f38-6a0bb6d21486 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state resized and task_state resize_reverting.
Nov 29 08:15:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3343366361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1107604286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:25 compute-2 podman[283342]: 2025-11-29 08:15:25.730020694 +0000 UTC m=+0.116979803 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:15:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:26 compute-2 nova_compute[232428]: 2025-11-29 08:15:26.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:26 compute-2 ceph-mon[77138]: pgmap v2269: 305 pgs: 305 active+clean; 568 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Nov 29 08:15:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2166702727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/477985426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:26 compute-2 nova_compute[232428]: 2025-11-29 08:15:26.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:27.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:27 compute-2 nova_compute[232428]: 2025-11-29 08:15:27.325 232432 DEBUG nova.network.neutron [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:27 compute-2 nova_compute[232428]: 2025-11-29 08:15:27.355 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:27 compute-2 nova_compute[232428]: 2025-11-29 08:15:27.356 232432 DEBUG nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Nov 29 08:15:27 compute-2 nova_compute[232428]: 2025-11-29 08:15:27.460 232432 DEBUG nova.storage.rbd_utils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rolling back rbd image(4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Nov 29 08:15:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:27 compute-2 ceph-mon[77138]: pgmap v2270: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 29 08:15:27 compute-2 nova_compute[232428]: 2025-11-29 08:15:27.841 232432 DEBUG nova.storage.rbd_utils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] removing snapshot(nova-resize) on rbd image(4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:15:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 08:15:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1951469474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1951469474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.852 232432 DEBUG nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start _get_guest_xml network_info=[{"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.859 232432 WARNING nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.866 232432 DEBUG nova.virt.libvirt.host [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.867 232432 DEBUG nova.virt.libvirt.host [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.871 232432 DEBUG nova.virt.libvirt.host [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.872 232432 DEBUG nova.virt.libvirt.host [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.874 232432 DEBUG nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.874 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.875 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.876 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.876 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.877 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.877 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.878 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.878 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.879 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.879 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.880 232432 DEBUG nova.virt.hardware [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.880 232432 DEBUG nova.objects.instance [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:28 compute-2 nova_compute[232428]: 2025-11-29 08:15:28.905 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2673359964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.361 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.433 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:29.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:29 compute-2 ceph-mon[77138]: pgmap v2271: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Nov 29 08:15:29 compute-2 ceph-mon[77138]: osdmap e292: 3 total, 3 up, 3 in
Nov 29 08:15:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2673359964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/374438767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.941 232432 DEBUG oslo_concurrency.processutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.945 232432 DEBUG nova.virt.libvirt.vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.946 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.948 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.953 232432 DEBUG nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <uuid>4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</uuid>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <name>instance-00000071</name>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1739027816</nova:name>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:15:28</nova:creationTime>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <nova:port uuid="7abc93d6-b92a-4d55-849e-3a607a8de2e4">
Nov 29 08:15:29 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <system>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="serial">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="uuid">4f58fbe8-7445-4bbe-a6b5-65008b6c43f3</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </system>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <os>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </os>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <features>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </features>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk">
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_disk.config">
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:29 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e2:6a:60"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <target dev="tap7abc93d6-b9"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3/console.log" append="off"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <video>
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </video>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:15:29 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:15:29 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:15:29 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:15:29 compute-2 nova_compute[232428]: </domain>
Nov 29 08:15:29 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.956 232432 DEBUG nova.compute.manager [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Preparing to wait for external event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.957 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.957 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.957 232432 DEBUG oslo_concurrency.lockutils [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.958 232432 DEBUG nova.virt.libvirt.vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.958 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.959 232432 DEBUG nova.network.os_vif_util [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.960 232432 DEBUG os_vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.961 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.961 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.962 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.966 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7abc93d6-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.966 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7abc93d6-b9, col_values=(('external_ids', {'iface-id': '7abc93d6-b92a-4d55-849e-3a607a8de2e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:6a:60', 'vm-uuid': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:29 compute-2 NetworkManager[48993]: <info>  [1764404129.9707] manager: (tap7abc93d6-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.977 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:29 compute-2 nova_compute[232428]: 2025-11-29 08:15:29.978 232432 INFO os_vif [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:15:30 compute-2 kernel: tap7abc93d6-b9: entered promiscuous mode
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.0942] manager: (tap7abc93d6-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.094 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 ovn_controller[134375]: 2025-11-29T08:15:30Z|00551|binding|INFO|Claiming lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 for this chassis.
Nov 29 08:15:30 compute-2 ovn_controller[134375]: 2025-11-29T08:15:30Z|00552|binding|INFO|7abc93d6-b92a-4d55-849e-3a607a8de2e4: Claiming fa:16:3e:e2:6a:60 10.100.0.9
Nov 29 08:15:30 compute-2 ovn_controller[134375]: 2025-11-29T08:15:30Z|00553|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 ovn-installed in OVS
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.119 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 systemd-udevd[283494]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.1457] device (tap7abc93d6-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.1464] device (tap7abc93d6-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:15:30 compute-2 ovn_controller[134375]: 2025-11-29T08:15:30Z|00554|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 up in Southbound
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.189 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.191 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.194 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.208 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[20e06bea-4282-4f9c-b59b-13051a5011e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.209 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.212 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba2cc0f-ab46-4cf0-bd1c-56823b1331d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eba85c5a-c525-4918-84cf-8aef8cc8cce1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 systemd-machined[194747]: New machine qemu-53-instance-00000071.
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.228 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[96328c64-307a-42c7-8070-087d3d0187ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 systemd[1]: Started Virtual Machine qemu-53-instance-00000071.
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.244 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e08ea49-0388-4330-98ef-62dcb0920a76]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.296 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c644f4c-dd4e-45a1-af90-8a217d0cab37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 systemd-udevd[283496]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.3094] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.308 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7b6ef9-da53-4fd4-b208-fbd9dd2ba9bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.361 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d52d4d0c-26d6-40dc-9570-f317f4917187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.366 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d40dbd64-8ea4-46e0-a575-39bd2bbe301a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.3916] device (tap988c10fa-90): carrier: link connected
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.396 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[72db7b57-fb3a-48c9-adf3-a71bcfadbfec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.430 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8068e04c-09db-4349-a20c-9c9dd886a6ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709665, 'reachable_time': 37120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283530, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.445 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eef7bc8a-9303-448b-a0fa-50eef6f83a52]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709665, 'tstamp': 709665}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283531, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.471 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[55e21923-5d9d-4266-8eeb-71d9a7491416]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709665, 'reachable_time': 37120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283532, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.482 232432 DEBUG nova.compute.manager [req-761d18c8-3d94-422a-8fe0-32b5fc0f2c96 req-6ee94584-b529-4c70-9672-ac212d85699c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.483 232432 DEBUG oslo_concurrency.lockutils [req-761d18c8-3d94-422a-8fe0-32b5fc0f2c96 req-6ee94584-b529-4c70-9672-ac212d85699c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.483 232432 DEBUG oslo_concurrency.lockutils [req-761d18c8-3d94-422a-8fe0-32b5fc0f2c96 req-6ee94584-b529-4c70-9672-ac212d85699c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.483 232432 DEBUG oslo_concurrency.lockutils [req-761d18c8-3d94-422a-8fe0-32b5fc0f2c96 req-6ee94584-b529-4c70-9672-ac212d85699c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.483 232432 DEBUG nova.compute.manager [req-761d18c8-3d94-422a-8fe0-32b5fc0f2c96 req-6ee94584-b529-4c70-9672-ac212d85699c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Processing event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.533 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc078316-6725-4d5e-bcf8-e9db4cf20012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.645 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbffdfe-84f8-4a12-95a7-4c20c10694a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.647 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.647 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.648 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.651 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 NetworkManager[48993]: <info>  [1764404130.6534] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 08:15:30 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.656 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 ovn_controller[134375]: 2025-11-29T08:15:30Z|00555|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.672 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.672 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.673 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9c39addc-f5e1-46da-ac1a-d483238f9ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.675 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:15:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:30.676 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:15:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/374438767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.848 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.849 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404130.8482606, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.850 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Started (Lifecycle Event)
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.853 232432 DEBUG nova.compute.manager [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.867 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance running successfully.
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.868 232432 DEBUG nova.virt.libvirt.driver [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.890 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.894 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.942 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.944 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404130.8485937, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.944 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Paused (Lifecycle Event)
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.976 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:30 compute-2 sshd-session[283561]: Invalid user sol from 45.148.10.240 port 41164
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.985 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404130.8584492, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:30 compute-2 nova_compute[232428]: 2025-11-29 08:15:30.986 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Resumed (Lifecycle Event)
Nov 29 08:15:31 compute-2 nova_compute[232428]: 2025-11-29 08:15:31.014 232432 INFO nova.compute.manager [None req-988ac414-5a27-4638-a894-711271ca7499 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance to original state: 'active'
Nov 29 08:15:31 compute-2 nova_compute[232428]: 2025-11-29 08:15:31.021 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:31 compute-2 nova_compute[232428]: 2025-11-29 08:15:31.027 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:31 compute-2 nova_compute[232428]: 2025-11-29 08:15:31.062 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 08:15:31 compute-2 sshd-session[283561]: Connection closed by invalid user sol 45.148.10.240 port 41164 [preauth]
Nov 29 08:15:31 compute-2 podman[283608]: 2025-11-29 08:15:31.170914732 +0000 UTC m=+0.078032943 container create cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:31 compute-2 podman[283608]: 2025-11-29 08:15:31.131255331 +0000 UTC m=+0.038373602 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:15:31 compute-2 systemd[1]: Started libpod-conmon-cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3.scope.
Nov 29 08:15:31 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:15:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535afea716f7d57467aab6f98905c76202a442c4882c9f1602f9e1ef9c25d13e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:31.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:31 compute-2 podman[283608]: 2025-11-29 08:15:31.308072896 +0000 UTC m=+0.215191087 container init cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:15:31 compute-2 podman[283608]: 2025-11-29 08:15:31.316009494 +0000 UTC m=+0.223127655 container start cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:15:31 compute-2 nova_compute[232428]: 2025-11-29 08:15:31.332 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:31 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [NOTICE]   (283628) : New worker (283630) forked
Nov 29 08:15:31 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [NOTICE]   (283628) : Loading success.
Nov 29 08:15:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:31 compute-2 ceph-mon[77138]: pgmap v2273: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Nov 29 08:15:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/760031319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:32 compute-2 sudo[283640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:32 compute-2 sudo[283640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:32 compute-2 sudo[283640]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:32 compute-2 sudo[283665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:32 compute-2 sudo[283665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:32 compute-2 sudo[283665]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.582 232432 DEBUG nova.compute.manager [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.584 232432 DEBUG oslo_concurrency.lockutils [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.586 232432 DEBUG oslo_concurrency.lockutils [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.586 232432 DEBUG oslo_concurrency.lockutils [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.587 232432 DEBUG nova.compute.manager [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:32 compute-2 nova_compute[232428]: 2025-11-29 08:15:32.588 232432 WARNING nova.compute.manager [req-cab295ad-41d7-4161-ae67-d8bc3f60885b req-c6e0a5ef-ae4d-41c3-9d05-393b8b03fb0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state active and task_state None.
Nov 29 08:15:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:33 compute-2 ceph-mon[77138]: pgmap v2274: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Nov 29 08:15:34 compute-2 nova_compute[232428]: 2025-11-29 08:15:34.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.820 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.821 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.822 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.823 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.823 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.826 232432 INFO nova.compute.manager [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Terminating instance
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.829 232432 DEBUG nova.compute.manager [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:15:35 compute-2 kernel: tap7abc93d6-b9 (unregistering): left promiscuous mode
Nov 29 08:15:35 compute-2 NetworkManager[48993]: <info>  [1764404135.8812] device (tap7abc93d6-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:15:35 compute-2 ovn_controller[134375]: 2025-11-29T08:15:35Z|00556|binding|INFO|Releasing lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 from this chassis (sb_readonly=0)
Nov 29 08:15:35 compute-2 ovn_controller[134375]: 2025-11-29T08:15:35Z|00557|binding|INFO|Setting lport 7abc93d6-b92a-4d55-849e-3a607a8de2e4 down in Southbound
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-2 ovn_controller[134375]: 2025-11-29T08:15:35Z|00558|binding|INFO|Removing iface tap7abc93d6-b9 ovn-installed in OVS
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:35.909 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:6a:60 10.100.0.9'], port_security=['fa:16:3e:e2:6a:60 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f58fbe8-7445-4bbe-a6b5-65008b6c43f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7abc93d6-b92a-4d55-849e-3a607a8de2e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:35.912 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7abc93d6-b92a-4d55-849e-3a607a8de2e4 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:15:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:35.915 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:15:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:35.916 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cecfaf35-f260-42a5-a397-5cbffa144594]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:35.917 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:15:35 compute-2 nova_compute[232428]: 2025-11-29 08:15:35.917 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-2 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 29 08:15:35 compute-2 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000071.scope: Consumed 5.663s CPU time.
Nov 29 08:15:35 compute-2 systemd-machined[194747]: Machine qemu-53-instance-00000071 terminated.
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [NOTICE]   (283628) : haproxy version is 2.8.14-c23fe91
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.080 232432 INFO nova.virt.libvirt.driver [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Instance destroyed successfully.
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [NOTICE]   (283628) : path to executable is /usr/sbin/haproxy
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.081 232432 DEBUG nova.objects.instance [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [WARNING]  (283628) : Exiting Master process...
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [WARNING]  (283628) : Exiting Master process...
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [ALERT]    (283628) : Current worker (283630) exited with code 143 (Terminated)
Nov 29 08:15:36 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[283624]: [WARNING]  (283628) : All workers exited. Exiting... (0)
Nov 29 08:15:36 compute-2 systemd[1]: libpod-cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3.scope: Deactivated successfully.
Nov 29 08:15:36 compute-2 podman[283716]: 2025-11-29 08:15:36.092627838 +0000 UTC m=+0.060831485 container died cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.107 232432 DEBUG nova.virt.libvirt.vif [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1739027816',display_name='tempest-ServerActionsTestJSON-server-1739027816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1739027816',id=113,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-9yotpa81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=4f58fbe8-7445-4bbe-a6b5-65008b6c43f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.109 232432 DEBUG nova.network.os_vif_util [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "address": "fa:16:3e:e2:6a:60", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7abc93d6-b9", "ovs_interfaceid": "7abc93d6-b92a-4d55-849e-3a607a8de2e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.111 232432 DEBUG nova.network.os_vif_util [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.112 232432 DEBUG os_vif [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.115 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.116 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7abc93d6-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.126 232432 INFO os_vif [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:6a:60,bridge_name='br-int',has_traffic_filtering=True,id=7abc93d6-b92a-4d55-849e-3a607a8de2e4,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7abc93d6-b9')
Nov 29 08:15:36 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:15:36 compute-2 systemd[1]: var-lib-containers-storage-overlay-535afea716f7d57467aab6f98905c76202a442c4882c9f1602f9e1ef9c25d13e-merged.mount: Deactivated successfully.
Nov 29 08:15:36 compute-2 podman[283716]: 2025-11-29 08:15:36.155473085 +0000 UTC m=+0.123676692 container cleanup cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:15:36 compute-2 systemd[1]: libpod-conmon-cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3.scope: Deactivated successfully.
Nov 29 08:15:36 compute-2 ceph-mon[77138]: pgmap v2275: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 241 op/s
Nov 29 08:15:36 compute-2 podman[283772]: 2025-11-29 08:15:36.254466524 +0000 UTC m=+0.064029235 container remove cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.265 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09a84999-ca35-4895-b0a0-d681b1034306]: (4, ('Sat Nov 29 08:15:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3)\ncf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3\nSat Nov 29 08:15:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (cf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3)\ncf4d527920012133411705a7d66aa57568cad260401e197f2c98f2ad4f3afff3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.268 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ff760257-46e5-4f69-95e6-de41ee0a9c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.270 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:36 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.274 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.288 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.291 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5d07819e-0886-485a-993b-cdbb202aad9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.310 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0d9e9e-9646-4964-a9ad-267ec715bb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.312 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a411bd56-080b-4e0c-9087-4a9ea94a8dbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.331 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.342 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[56552404-f0bf-46a5-aacb-7cb9d3f6b3a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709655, 'reachable_time': 29199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283790, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.347 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:15:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:36.348 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4c81281d-0388-445e-822b-379f856b4589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.811 232432 DEBUG nova.compute.manager [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.813 232432 DEBUG oslo_concurrency.lockutils [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.814 232432 DEBUG oslo_concurrency.lockutils [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.814 232432 DEBUG oslo_concurrency.lockutils [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.815 232432 DEBUG nova.compute.manager [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.815 232432 DEBUG nova.compute.manager [req-9f82498c-f84b-4cc2-a07e-40a2609795bf req-7de8c5b5-48d3-416a-bdeb-5703a2d7fe5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-unplugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.996 232432 INFO nova.virt.libvirt.driver [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Deleting instance files /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_del
Nov 29 08:15:36 compute-2 nova_compute[232428]: 2025-11-29 08:15:36.997 232432 INFO nova.virt.libvirt.driver [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Deletion of /var/lib/nova/instances/4f58fbe8-7445-4bbe-a6b5-65008b6c43f3_del complete
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.058 232432 INFO nova.compute.manager [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Took 1.23 seconds to destroy the instance on the hypervisor.
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.059 232432 DEBUG oslo.service.loopingcall [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.059 232432 DEBUG nova.compute.manager [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.059 232432 DEBUG nova.network.neutron [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:15:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 08:15:37 compute-2 ceph-mon[77138]: pgmap v2276: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 32 KiB/s wr, 352 op/s
Nov 29 08:15:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 08:15:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:37.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:37.694 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:37.694 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.698 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:37 compute-2 podman[283792]: 2025-11-29 08:15:37.728050163 +0000 UTC m=+0.115861099 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:15:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.874 232432 DEBUG nova.network.neutron [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.894 232432 INFO nova.compute.manager [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Took 0.83 seconds to deallocate network for instance.
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.959 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.960 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.964 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:37 compute-2 nova_compute[232428]: 2025-11-29 08:15:37.977 232432 DEBUG nova.compute.manager [req-f6173ac9-aca3-4df4-ab68-7844d3d72409 req-8a47305b-73de-4029-904d-8f735f283d6e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-deleted-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.005 232432 INFO nova.scheduler.client.report [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Deleted allocations for instance 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.069 232432 DEBUG oslo_concurrency.lockutils [None req-6b2fd2ec-d3a9-4255-85be-776f978f6b38 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:38 compute-2 ceph-mon[77138]: osdmap e293: 3 total, 3 up, 3 in
Nov 29 08:15:38 compute-2 ceph-mon[77138]: osdmap e294: 3 total, 3 up, 3 in
Nov 29 08:15:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.911 232432 DEBUG nova.compute.manager [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.912 232432 DEBUG oslo_concurrency.lockutils [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.912 232432 DEBUG oslo_concurrency.lockutils [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.913 232432 DEBUG oslo_concurrency.lockutils [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4f58fbe8-7445-4bbe-a6b5-65008b6c43f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.913 232432 DEBUG nova.compute.manager [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] No waiting events found dispatching network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:38 compute-2 nova_compute[232428]: 2025-11-29 08:15:38.914 232432 WARNING nova.compute.manager [req-8cbf1859-815d-44d3-98c8-d22402ac3742 req-3aa3e031-7b51-47d4-a836-44471c113e5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Received unexpected event network-vif-plugged-7abc93d6-b92a-4d55-849e-3a607a8de2e4 for instance with vm_state deleted and task_state None.
Nov 29 08:15:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:39 compute-2 ceph-mon[77138]: pgmap v2279: 305 pgs: 305 active+clean; 589 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 1020 KiB/s wr, 352 op/s
Nov 29 08:15:39 compute-2 ceph-mon[77138]: osdmap e295: 3 total, 3 up, 3 in
Nov 29 08:15:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/621339171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 08:15:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:39.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:40 compute-2 ceph-mon[77138]: osdmap e296: 3 total, 3 up, 3 in
Nov 29 08:15:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:41 compute-2 nova_compute[232428]: 2025-11-29 08:15:41.119 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:41.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:41 compute-2 nova_compute[232428]: 2025-11-29 08:15:41.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:41 compute-2 ceph-mon[77138]: pgmap v2282: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.4 MiB/s wr, 468 op/s
Nov 29 08:15:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.064 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.066 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.082 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.176 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.177 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.182 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.183 232432 INFO nova.compute.claims [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:15:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:43.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.345 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:43 compute-2 ceph-mon[77138]: pgmap v2283: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 373 op/s
Nov 29 08:15:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2307149494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.807 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.815 232432 DEBUG nova.compute.provider_tree [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.834 232432 DEBUG nova.scheduler.client.report [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.855 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.856 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.898 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.899 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.921 232432 INFO nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:15:43 compute-2 nova_compute[232428]: 2025-11-29 08:15:43.935 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.024 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.026 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.026 232432 INFO nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Creating image(s)
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.055 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.082 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.117 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.121 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.199 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.201 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.202 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.202 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.233 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.237 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2307149494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.641 232432 DEBUG nova.policy [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '661b6600a32b40d8a48db16cb71c7e75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd72b5448be0e463f80dca118feb42d3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:15:44 compute-2 ovn_controller[134375]: 2025-11-29T08:15:44Z|00559|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 08:15:44 compute-2 nova_compute[232428]: 2025-11-29 08:15:44.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.030 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.122 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] resizing rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.230 232432 DEBUG nova.objects.instance [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'migration_context' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.247 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.248 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Ensure instance console log exists: /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.248 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.249 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.249 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.575 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Successfully created port: ac65f355-2912-480c-acab-c38c1ec48dc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:15:45 compute-2 ceph-mon[77138]: pgmap v2284: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 272 op/s
Nov 29 08:15:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:45.696 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:45 compute-2 nova_compute[232428]: 2025-11-29 08:15:45.880 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2761208253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.616 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Successfully updated port: ac65f355-2912-480c-acab-c38c1ec48dc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.641 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.642 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.642 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.762 232432 DEBUG nova.compute.manager [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-changed-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.762 232432 DEBUG nova.compute.manager [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Refreshing instance network info cache due to event network-changed-ac65f355-2912-480c-acab-c38c1ec48dc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.763 232432 DEBUG oslo_concurrency.lockutils [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:46 compute-2 nova_compute[232428]: 2025-11-29 08:15:46.868 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 08:15:47 compute-2 ceph-mon[77138]: pgmap v2285: 305 pgs: 305 active+clean; 539 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 218 op/s
Nov 29 08:15:47 compute-2 ceph-mon[77138]: osdmap e297: 3 total, 3 up, 3 in
Nov 29 08:15:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.875 232432 DEBUG nova.network.neutron [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.913 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.913 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance network_info: |[{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.914 232432 DEBUG oslo_concurrency.lockutils [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.914 232432 DEBUG nova.network.neutron [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Refreshing network info cache for port ac65f355-2912-480c-acab-c38c1ec48dc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.917 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start _get_guest_xml network_info=[{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.923 232432 WARNING nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.930 232432 DEBUG nova.virt.libvirt.host [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.931 232432 DEBUG nova.virt.libvirt.host [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.936 232432 DEBUG nova.virt.libvirt.host [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.938 232432 DEBUG nova.virt.libvirt.host [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.940 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.941 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.941 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.942 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.942 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.943 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.943 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.944 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.944 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.945 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.945 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.946 232432 DEBUG nova.virt.hardware [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:15:47 compute-2 nova_compute[232428]: 2025-11-29 08:15:47.951 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1575453415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.410 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.465 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.471 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1575453415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2756251975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.918 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.921 232432 DEBUG nova.virt.libvirt.vif [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.922 232432 DEBUG nova.network.os_vif_util [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.924 232432 DEBUG nova.network.os_vif_util [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.926 232432 DEBUG nova.objects.instance [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.949 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <uuid>1d2f015e-9584-47c8-a0c6-76e84d368cb6</uuid>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <name>instance-00000077</name>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1127187577</nova:name>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:15:47</nova:creationTime>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <nova:port uuid="ac65f355-2912-480c-acab-c38c1ec48dc9">
Nov 29 08:15:48 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <system>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="serial">1d2f015e-9584-47c8-a0c6-76e84d368cb6</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="uuid">1d2f015e-9584-47c8-a0c6-76e84d368cb6</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </system>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <os>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </os>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <features>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </features>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk">
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config">
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </source>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:15:48 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:91:c5:e5"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <target dev="tapac65f355-29"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/console.log" append="off"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <video>
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </video>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:15:48 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:15:48 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:15:48 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:15:48 compute-2 nova_compute[232428]: </domain>
Nov 29 08:15:48 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.951 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Preparing to wait for external event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.952 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.953 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.953 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.954 232432 DEBUG nova.virt.libvirt.vif [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.954 232432 DEBUG nova.network.os_vif_util [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.956 232432 DEBUG nova.network.os_vif_util [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.956 232432 DEBUG os_vif [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.958 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.958 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.962 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac65f355-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.963 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac65f355-29, col_values=(('external_ids', {'iface-id': 'ac65f355-2912-480c-acab-c38c1ec48dc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:c5:e5', 'vm-uuid': '1d2f015e-9584-47c8-a0c6-76e84d368cb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:48 compute-2 NetworkManager[48993]: <info>  [1764404148.9670] manager: (tapac65f355-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.975 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:48 compute-2 nova_compute[232428]: 2025-11-29 08:15:48.977 232432 INFO os_vif [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29')
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.057 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.058 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.058 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] No VIF found with MAC fa:16:3e:91:c5:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.059 232432 INFO nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Using config drive
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.099 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:49.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:49 compute-2 ceph-mon[77138]: pgmap v2287: 305 pgs: 305 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Nov 29 08:15:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2756251975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:49.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.833 232432 INFO nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Creating config drive at /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.842 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqx1kwp71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:49 compute-2 nova_compute[232428]: 2025-11-29 08:15:49.993 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqx1kwp71" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.028 232432 DEBUG nova.storage.rbd_utils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] rbd image 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.032 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.088 232432 DEBUG nova.network.neutron [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updated VIF entry in instance network info cache for port ac65f355-2912-480c-acab-c38c1ec48dc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.089 232432 DEBUG nova.network.neutron [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.104 232432 DEBUG oslo_concurrency.lockutils [req-c9a39923-9ff7-413e-9bcd-949820b98343 req-575b29b4-342f-4c1e-ac91-1e899cf8e881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.222 232432 DEBUG oslo_concurrency.processutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config 1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.223 232432 INFO nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Deleting local config drive /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/disk.config because it was imported into RBD.
Nov 29 08:15:50 compute-2 kernel: tapac65f355-29: entered promiscuous mode
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.2941] manager: (tapac65f355-29): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.294 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 ovn_controller[134375]: 2025-11-29T08:15:50Z|00560|binding|INFO|Claiming lport ac65f355-2912-480c-acab-c38c1ec48dc9 for this chassis.
Nov 29 08:15:50 compute-2 ovn_controller[134375]: 2025-11-29T08:15:50Z|00561|binding|INFO|ac65f355-2912-480c-acab-c38c1ec48dc9: Claiming fa:16:3e:91:c5:e5 10.100.0.4
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.305 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.308 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.312 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:50 compute-2 ovn_controller[134375]: 2025-11-29T08:15:50Z|00562|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 ovn-installed in OVS
Nov 29 08:15:50 compute-2 ovn_controller[134375]: 2025-11-29T08:15:50Z|00563|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 up in Southbound
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.316 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.333 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c669fe86-1b65-4cf8-a055-dbbb670dd378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.334 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.338 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e219e551-c6f6-417b-805b-1f0042806582]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 systemd-udevd[284150]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.339 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb6e759-c1da-49d9-9c42-1127a041d4b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 systemd-machined[194747]: New machine qemu-54-instance-00000077.
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.352 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bf2ef9-8393-4676-a5d1-2330088d48f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.3562] device (tapac65f355-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.3569] device (tapac65f355-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:15:50 compute-2 systemd[1]: Started Virtual Machine qemu-54-instance-00000077.
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.380 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[48dfbb6b-2d27-4b23-921a-a3779f6df608]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.411 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ece9c3ee-5048-44d0-bbaf-4ab63d3bd917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.418 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3956d9-2a3d-457c-8d66-a8ee38e82df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.4191] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.447 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cb25a9ca-1526-4e62-8630-0415073b9ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.454 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d15589aa-1f11-42cd-8e87-ef1c85661840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.4835] device (tap988c10fa-90): carrier: link connected
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.490 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a4923c69-7c8b-49c1-85a3-55b7813487fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.510 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[be103bec-991b-47c0-bc27-8f070aff62be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711675, 'reachable_time': 33234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284183, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.527 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1902c416-c923-41fa-8e8f-38a0442870cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711675, 'tstamp': 711675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284184, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.553 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cce6b513-4b4e-4e7a-bbb7-44b57c62c96c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711675, 'reachable_time': 33234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284185, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.586 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a6416e8f-7997-4eda-98ad-45a2c98901b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.649 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd4d724-d80b-4307-8415-f88509533e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.651 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.651 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.651 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.653 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 NetworkManager[48993]: <info>  [1764404150.6540] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 08:15:50 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.662 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.663 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.664 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 ovn_controller[134375]: 2025-11-29T08:15:50Z|00564|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.667 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.667 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6140d83d-1a2b-47bd-b301-b64731e8841f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.668 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:15:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:15:50.668 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.947 232432 DEBUG nova.compute.manager [req-fcfef4cc-09c9-4aff-a329-d55d7e83d6c2 req-8a349659-15bf-4b01-9b54-a1c40e8afaac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.948 232432 DEBUG oslo_concurrency.lockutils [req-fcfef4cc-09c9-4aff-a329-d55d7e83d6c2 req-8a349659-15bf-4b01-9b54-a1c40e8afaac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.948 232432 DEBUG oslo_concurrency.lockutils [req-fcfef4cc-09c9-4aff-a329-d55d7e83d6c2 req-8a349659-15bf-4b01-9b54-a1c40e8afaac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.948 232432 DEBUG oslo_concurrency.lockutils [req-fcfef4cc-09c9-4aff-a329-d55d7e83d6c2 req-8a349659-15bf-4b01-9b54-a1c40e8afaac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:50 compute-2 nova_compute[232428]: 2025-11-29 08:15:50.948 232432 DEBUG nova.compute.manager [req-fcfef4cc-09c9-4aff-a329-d55d7e83d6c2 req-8a349659-15bf-4b01-9b54-a1c40e8afaac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Processing event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:15:51 compute-2 podman[284233]: 2025-11-29 08:15:51.03743712 +0000 UTC m=+0.057549193 container create 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:15:51 compute-2 systemd[1]: Started libpod-conmon-4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254.scope.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.075 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404136.0746737, 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.075 232432 INFO nova.compute.manager [-] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] VM Stopped (Lifecycle Event)
Nov 29 08:15:51 compute-2 podman[284233]: 2025-11-29 08:15:51.008847904 +0000 UTC m=+0.028959987 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:15:51 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:15:51 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618d37df1aa15735ca1e9a69c9e4db5541dfc12b4cbfa289eaa42490be2d95f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.120 232432 DEBUG nova.compute.manager [None req-f7108e4a-5a30-4881-8e36-f4d5b298cb5d - - - - - -] [instance: 4f58fbe8-7445-4bbe-a6b5-65008b6c43f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:51 compute-2 podman[284233]: 2025-11-29 08:15:51.130859523 +0000 UTC m=+0.150971676 container init 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:15:51 compute-2 podman[284233]: 2025-11-29 08:15:51.142094575 +0000 UTC m=+0.162206678 container start 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:15:51 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [NOTICE]   (284252) : New worker (284254) forked
Nov 29 08:15:51 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [NOTICE]   (284252) : Loading success.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:51 compute-2 ceph-mon[77138]: pgmap v2288: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 29 08:15:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:51.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.338 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.367 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404151.3662384, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.367 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Started (Lifecycle Event)
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.370 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.376 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.380 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance spawned successfully.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.381 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.411 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.419 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.425 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.425 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.426 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.427 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.428 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.429 232432 DEBUG nova.virt.libvirt.driver [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.475 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.476 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404151.3666396, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.476 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Paused (Lifecycle Event)
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.500 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.505 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404151.3745608, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.506 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Resumed (Lifecycle Event)
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.513 232432 INFO nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Took 7.49 seconds to spawn the instance on the hypervisor.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.513 232432 DEBUG nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.522 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.529 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.560 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.601 232432 INFO nova.compute.manager [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Took 8.46 seconds to build instance.
Nov 29 08:15:51 compute-2 nova_compute[232428]: 2025-11-29 08:15:51.629 232432 DEBUG oslo_concurrency.lockutils [None req-103bb346-794d-40d4-9ccb-da46db8422d8 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:51.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3458064120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:52 compute-2 sudo[284290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:52 compute-2 sudo[284290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-2 sudo[284290]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:52 compute-2 podman[284314]: 2025-11-29 08:15:52.418552542 +0000 UTC m=+0.071997904 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:15:52 compute-2 sudo[284321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:52 compute-2 sudo[284321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-2 sudo[284321]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.064 232432 DEBUG nova.compute.manager [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.065 232432 DEBUG oslo_concurrency.lockutils [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.066 232432 DEBUG oslo_concurrency.lockutils [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.066 232432 DEBUG oslo_concurrency.lockutils [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.067 232432 DEBUG nova.compute.manager [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.067 232432 WARNING nova.compute.manager [req-3e29db97-65fc-45db-b58e-556b17522f11 req-46d0fa47-cf27-4584-a642-cbe08da84788 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state active and task_state None.
Nov 29 08:15:53 compute-2 ceph-mon[77138]: pgmap v2289: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 29 08:15:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2877820590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:53.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:53.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:53 compute-2 nova_compute[232428]: 2025-11-29 08:15:53.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.577 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.577 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.577 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:15:54 compute-2 nova_compute[232428]: 2025-11-29 08:15:54.578 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 35f7492d-e1a0-4369-bf32-ba8fa094036a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:55 compute-2 nova_compute[232428]: 2025-11-29 08:15:55.151 232432 DEBUG nova.compute.manager [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-changed-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:55 compute-2 nova_compute[232428]: 2025-11-29 08:15:55.152 232432 DEBUG nova.compute.manager [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Refreshing instance network info cache due to event network-changed-ac65f355-2912-480c-acab-c38c1ec48dc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:15:55 compute-2 nova_compute[232428]: 2025-11-29 08:15:55.152 232432 DEBUG oslo_concurrency.lockutils [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:55 compute-2 nova_compute[232428]: 2025-11-29 08:15:55.153 232432 DEBUG oslo_concurrency.lockutils [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:55 compute-2 nova_compute[232428]: 2025-11-29 08:15:55.153 232432 DEBUG nova.network.neutron [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Refreshing network info cache for port ac65f355-2912-480c-acab-c38c1ec48dc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:15:55 compute-2 ceph-mon[77138]: pgmap v2290: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Nov 29 08:15:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:15:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 08:15:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:56 compute-2 ceph-mon[77138]: osdmap e298: 3 total, 3 up, 3 in
Nov 29 08:15:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2076297525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2076297525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.341 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.346 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [{"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.375589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156375646, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2459, "num_deletes": 255, "total_data_size": 5489838, "memory_usage": 5558464, "flush_reason": "Manual Compaction"}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.390 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-35f7492d-e1a0-4369-bf32-ba8fa094036a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.390 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.390 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156399521, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3564009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45718, "largest_seqno": 48172, "table_properties": {"data_size": 3554189, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21375, "raw_average_key_size": 20, "raw_value_size": 3534194, "raw_average_value_size": 3441, "num_data_blocks": 264, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403978, "oldest_key_time": 1764403978, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 24176 microseconds, and 10320 cpu microseconds.
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.399747) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3564009 bytes OK
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.399846) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.401543) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.401567) EVENT_LOG_v1 {"time_micros": 1764404156401558, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.401593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5478982, prev total WAL file size 5478982, number of live WAL files 2.
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.404192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3480KB)], [87(10171KB)]
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156404263, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13979207, "oldest_snapshot_seqno": -1}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7810 keys, 12113437 bytes, temperature: kUnknown
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156487987, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12113437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12060467, "index_size": 32314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 202032, "raw_average_key_size": 25, "raw_value_size": 11920322, "raw_average_value_size": 1526, "num_data_blocks": 1272, "num_entries": 7810, "num_filter_entries": 7810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.488507) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12113437 bytes
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.489635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.6 rd, 144.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8338, records dropped: 528 output_compression: NoCompression
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.489666) EVENT_LOG_v1 {"time_micros": 1764404156489652, "job": 54, "event": "compaction_finished", "compaction_time_micros": 83908, "compaction_time_cpu_micros": 46077, "output_level": 6, "num_output_files": 1, "total_output_size": 12113437, "num_input_records": 8338, "num_output_records": 7810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156491253, "job": 54, "event": "table_file_deletion", "file_number": 89}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156494970, "job": 54, "event": "table_file_deletion", "file_number": 87}
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.404129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.495079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.495086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.495089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.495092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:15:56.495095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:15:56 compute-2 podman[284361]: 2025-11-29 08:15:56.678460743 +0000 UTC m=+0.068027051 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.884 232432 DEBUG nova.network.neutron [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updated VIF entry in instance network info cache for port ac65f355-2912-480c-acab-c38c1ec48dc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.884 232432 DEBUG nova.network.neutron [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:56 compute-2 nova_compute[232428]: 2025-11-29 08:15:56.916 232432 DEBUG oslo_concurrency.lockutils [req-78a94b5e-d372-44de-945b-dddfbe349e67 req-c4f7eec1-c470-495e-8117-2bc5062e3fdf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:57 compute-2 nova_compute[232428]: 2025-11-29 08:15:57.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:15:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:57.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:15:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 08:15:57 compute-2 ceph-mon[77138]: pgmap v2292: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 180 op/s
Nov 29 08:15:57 compute-2 ceph-mon[77138]: osdmap e299: 3 total, 3 up, 3 in
Nov 29 08:15:57 compute-2 sudo[284381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:57 compute-2 sudo[284381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:57 compute-2 sudo[284381]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:57 compute-2 sudo[284406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:57 compute-2 sudo[284406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:57 compute-2 sudo[284406]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:57 compute-2 sudo[284431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:57 compute-2 sudo[284431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:57 compute-2 sudo[284431]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:57 compute-2 sudo[284457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:15:57 compute-2 sudo[284457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:57 compute-2 sudo[284457]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:58 compute-2 sudo[284501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:58 compute-2 sudo[284501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:58 compute-2 sudo[284501]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:58 compute-2 sudo[284526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:58 compute-2 sudo[284526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:58 compute-2 sudo[284526]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:58 compute-2 sudo[284551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:58 compute-2 sudo[284551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:58 compute-2 sudo[284551]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3437211877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:15:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:15:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:15:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:15:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 08:15:58 compute-2 sudo[284576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:15:58 compute-2 sudo[284576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:58 compute-2 nova_compute[232428]: 2025-11-29 08:15:58.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:59 compute-2 sudo[284576]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:59 compute-2 nova_compute[232428]: 2025-11-29 08:15:59.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:59 compute-2 nova_compute[232428]: 2025-11-29 08:15:59.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:15:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:15:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:59.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:15:59 compute-2 ceph-mon[77138]: pgmap v2294: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Nov 29 08:15:59 compute-2 ceph-mon[77138]: osdmap e300: 3 total, 3 up, 3 in
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3880472690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2667284630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:15:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:15:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:15:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/315555466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/315555466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.923 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.924 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.924 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.925 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.925 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.927 232432 INFO nova.compute.manager [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Terminating instance
Nov 29 08:16:00 compute-2 nova_compute[232428]: 2025-11-29 08:16:00.929 232432 DEBUG nova.compute.manager [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:16:00 compute-2 kernel: tapbd853c6d-a3 (unregistering): left promiscuous mode
Nov 29 08:16:01 compute-2 NetworkManager[48993]: <info>  [1764404161.0065] device (tapbd853c6d-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:16:01 compute-2 ovn_controller[134375]: 2025-11-29T08:16:01Z|00565|binding|INFO|Releasing lport bd853c6d-a3b6-4414-8e4e-24d926fd6692 from this chassis (sb_readonly=0)
Nov 29 08:16:01 compute-2 ovn_controller[134375]: 2025-11-29T08:16:01Z|00566|binding|INFO|Setting lport bd853c6d-a3b6-4414-8e4e-24d926fd6692 down in Southbound
Nov 29 08:16:01 compute-2 ovn_controller[134375]: 2025-11-29T08:16:01Z|00567|binding|INFO|Removing iface tapbd853c6d-a3 ovn-installed in OVS
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.032 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.036 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:f6:4f 10.100.0.8'], port_security=['fa:16:3e:d6:f6:4f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '35f7492d-e1a0-4369-bf32-ba8fa094036a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=bd853c6d-a3b6-4414-8e4e-24d926fd6692) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.038 143801 INFO neutron.agent.ovn.metadata.agent [-] Port bd853c6d-a3b6-4414-8e4e-24d926fd6692 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.040 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.042 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf01583-c325-4fc1-b1bf-ac7e3cb7cf78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.043 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore
Nov 29 08:16:01 compute-2 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 29 08:16:01 compute-2 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000068.scope: Consumed 30.350s CPU time.
Nov 29 08:16:01 compute-2 systemd-machined[194747]: Machine qemu-45-instance-00000068 terminated.
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.172 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.183 232432 INFO nova.virt.libvirt.driver [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Instance destroyed successfully.
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.184 232432 DEBUG nova.objects.instance [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid 35f7492d-e1a0-4369-bf32-ba8fa094036a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.200 232432 DEBUG nova.virt.libvirt.vif [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-738978804',display_name='tempest-ServerActionsTestOtherA-server-738978804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-738978804',id=104,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-zmcr4jkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=35f7492d-e1a0-4369-bf32-ba8fa094036a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.201 232432 DEBUG nova.network.os_vif_util [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "address": "fa:16:3e:d6:f6:4f", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd853c6d-a3", "ovs_interfaceid": "bd853c6d-a3b6-4414-8e4e-24d926fd6692", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.204 232432 DEBUG nova.network.os_vif_util [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.205 232432 DEBUG os_vif [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.211 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd853c6d-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.214 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.221 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.230 232432 INFO os_vif [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f6:4f,bridge_name='br-int',has_traffic_filtering=True,id=bd853c6d-a3b6-4414-8e4e-24d926fd6692,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd853c6d-a3')
Nov 29 08:16:01 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [NOTICE]   (275974) : haproxy version is 2.8.14-c23fe91
Nov 29 08:16:01 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [NOTICE]   (275974) : path to executable is /usr/sbin/haproxy
Nov 29 08:16:01 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [WARNING]  (275974) : Exiting Master process...
Nov 29 08:16:01 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [ALERT]    (275974) : Current worker (275976) exited with code 143 (Terminated)
Nov 29 08:16:01 compute-2 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[275966]: [WARNING]  (275974) : All workers exited. Exiting... (0)
Nov 29 08:16:01 compute-2 systemd[1]: libpod-c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964.scope: Deactivated successfully.
Nov 29 08:16:01 compute-2 podman[284663]: 2025-11-29 08:16:01.279467429 +0000 UTC m=+0.090241776 container died c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:01 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964-userdata-shm.mount: Deactivated successfully.
Nov 29 08:16:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-bceed7345574fe0b4c417a1d4581b93ba5b1227588f4fd769b421c11eae02893-merged.mount: Deactivated successfully.
Nov 29 08:16:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:01.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:01 compute-2 podman[284663]: 2025-11-29 08:16:01.335218764 +0000 UTC m=+0.145993071 container cleanup c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 systemd[1]: libpod-conmon-c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964.scope: Deactivated successfully.
Nov 29 08:16:01 compute-2 podman[284715]: 2025-11-29 08:16:01.422521617 +0000 UTC m=+0.045405912 container remove c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.433 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5538ec-5638-4847-90a4-ee6d01fcd5a1]: (4, ('Sat Nov 29 08:16:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964)\nc8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964\nSat Nov 29 08:16:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (c8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964)\nc8562fd4e101d773d226db8aa735762e780e711f97719de34b49acc9f30cc964\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.435 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4211df88-7913-4d87-95ed-2770cbd7f59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ceph-mon[77138]: pgmap v2296: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 7.8 MiB/s wr, 357 op/s
Nov 29 08:16:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1044889036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.437 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.439 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.441 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.446 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3faf99-247f-4eb9-869e-32125a28f213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c79ea3-2264-4ff6-a893-b4172d894257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.468 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dccc8788-ac1f-47b8-be1a-1426159dc330]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7a396267-8c8a-485f-bf78-361dfe93aa0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681445, 'reachable_time': 24857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284733, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.495 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:16:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:01.495 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a93ccbe2-195b-41a8-a3a6-b5e245bd5113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:01 compute-2 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.588 232432 DEBUG nova.compute.manager [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-unplugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.589 232432 DEBUG oslo_concurrency.lockutils [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.590 232432 DEBUG oslo_concurrency.lockutils [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.590 232432 DEBUG oslo_concurrency.lockutils [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.591 232432 DEBUG nova.compute.manager [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] No waiting events found dispatching network-vif-unplugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.592 232432 DEBUG nova.compute.manager [req-782784c2-ad94-49f5-ad07-ed6a4f5aa517 req-ada432c2-c41c-4e61-bb0d-5d9266f2f638 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-unplugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.720 232432 INFO nova.virt.libvirt.driver [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Deleting instance files /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a_del
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.721 232432 INFO nova.virt.libvirt.driver [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Deletion of /var/lib/nova/instances/35f7492d-e1a0-4369-bf32-ba8fa094036a_del complete
Nov 29 08:16:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.802 232432 INFO nova.compute.manager [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.803 232432 DEBUG oslo.service.loopingcall [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.803 232432 DEBUG nova.compute.manager [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:16:01 compute-2 nova_compute[232428]: 2025-11-29 08:16:01.804 232432 DEBUG nova.network.neutron [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:16:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.674 232432 DEBUG nova.network.neutron [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.706 232432 INFO nova.compute.manager [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Took 0.90 seconds to deallocate network for instance.
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.765 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.766 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.782 232432 DEBUG nova.compute.manager [req-9cd74a3b-643a-4017-925d-afbc84e91b3f req-db17fe1c-49fd-4208-a4fc-1268d5bce272 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-deleted-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:02 compute-2 nova_compute[232428]: 2025-11-29 08:16:02.882 232432 DEBUG oslo_concurrency.processutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4072759590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:03.324 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:03.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:03 compute-2 ceph-mon[77138]: pgmap v2297: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.7 MiB/s wr, 224 op/s
Nov 29 08:16:03 compute-2 ceph-mon[77138]: osdmap e301: 3 total, 3 up, 3 in
Nov 29 08:16:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4072759590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.344 232432 DEBUG oslo_concurrency.processutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.349 232432 DEBUG nova.compute.provider_tree [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.365 232432 DEBUG nova.scheduler.client.report [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.388 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.391 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.391 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.392 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.392 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.573 232432 INFO nova.scheduler.client.report [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocations for instance 35f7492d-e1a0-4369-bf32-ba8fa094036a
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.674 232432 DEBUG oslo_concurrency.lockutils [None req-c5840df7-66bc-4d35-a6bb-8dc933ae4922 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.684 232432 DEBUG nova.compute.manager [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.685 232432 DEBUG oslo_concurrency.lockutils [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.685 232432 DEBUG oslo_concurrency.lockutils [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.686 232432 DEBUG oslo_concurrency.lockutils [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "35f7492d-e1a0-4369-bf32-ba8fa094036a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.686 232432 DEBUG nova.compute.manager [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] No waiting events found dispatching network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.687 232432 WARNING nova.compute.manager [req-89de253b-af61-445b-b114-f0409eaec666 req-627b426e-ecc3-4578-b28b-4114479af2e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Received unexpected event network-vif-plugged-bd853c6d-a3b6-4414-8e4e-24d926fd6692 for instance with vm_state deleted and task_state None.
Nov 29 08:16:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/695917735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.819 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.899 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:16:03 compute-2 nova_compute[232428]: 2025-11-29 08:16:03.900 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.129 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.131 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4222MB free_disk=20.83061981201172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.131 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.131 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.209 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 1d2f015e-9584-47c8-a0c6-76e84d368cb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.210 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.210 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.251 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/695917735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/54634430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2012905774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.705 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.712 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.729 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.756 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:16:04 compute-2 nova_compute[232428]: 2025-11-29 08:16:04.757 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:05.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:05 compute-2 ceph-mon[77138]: pgmap v2299: 305 pgs: 305 active+clean; 435 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.9 MiB/s wr, 267 op/s
Nov 29 08:16:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2012905774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:05.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:05 compute-2 ovn_controller[134375]: 2025-11-29T08:16:05Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:c5:e5 10.100.0.4
Nov 29 08:16:05 compute-2 ovn_controller[134375]: 2025-11-29T08:16:05Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:c5:e5 10.100.0.4
Nov 29 08:16:06 compute-2 nova_compute[232428]: 2025-11-29 08:16:06.215 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:06 compute-2 nova_compute[232428]: 2025-11-29 08:16:06.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2047163776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:16:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:16:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/827431039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:06 compute-2 sudo[284805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:06 compute-2 sudo[284805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:06 compute-2 sudo[284805]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:06 compute-2 sudo[284830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:16:06 compute-2 sudo[284830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:06 compute-2 sudo[284830]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:07 compute-2 ceph-mon[77138]: pgmap v2300: 305 pgs: 305 active+clean; 375 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 8.1 MiB/s wr, 363 op/s
Nov 29 08:16:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:08 compute-2 podman[284856]: 2025-11-29 08:16:08.712143795 +0000 UTC m=+0.113342008 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:16:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:09 compute-2 ceph-mon[77138]: pgmap v2301: 305 pgs: 305 active+clean; 362 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.2 MiB/s wr, 310 op/s
Nov 29 08:16:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:11 compute-2 nova_compute[232428]: 2025-11-29 08:16:11.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:11.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:11 compute-2 nova_compute[232428]: 2025-11-29 08:16:11.350 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:11 compute-2 ovn_controller[134375]: 2025-11-29T08:16:11Z|00568|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:16:11 compute-2 nova_compute[232428]: 2025-11-29 08:16:11.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:11 compute-2 ceph-mon[77138]: pgmap v2302: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 29 08:16:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3233351715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:12 compute-2 sudo[284883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:12 compute-2 sudo[284883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:12 compute-2 sudo[284883]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:12 compute-2 sudo[284908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:12 compute-2 sudo[284908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:12 compute-2 sudo[284908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:13 compute-2 ceph-mon[77138]: pgmap v2303: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 29 08:16:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:13.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.202 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.203 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.204 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.204 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.205 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.205 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.238 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.260 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.261 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Image id 1be11678-cfa4-4dee-b54c-6c7e547e5a6a yields fingerprint 9b6c4a62e987670abc3ce4c57f88bd403b2af8bf _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.261 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] image 1be11678-cfa4-4dee-b54c-6c7e547e5a6a at (/var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf): checking
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.261 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] image 1be11678-cfa4-4dee-b54c-6c7e547e5a6a at (/var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.265 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.266 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] 1d2f015e-9584-47c8-a0c6-76e84d368cb6 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.266 232432 WARNING nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.266 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Active base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.267 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Removable base files: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.268 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.269 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.269 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.270 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 29 08:16:14 compute-2 nova_compute[232428]: 2025-11-29 08:16:14.270 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 29 08:16:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:15 compute-2 ceph-mon[77138]: pgmap v2304: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Nov 29 08:16:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:15.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:16 compute-2 nova_compute[232428]: 2025-11-29 08:16:16.182 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404161.1811411, 35f7492d-e1a0-4369-bf32-ba8fa094036a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:16 compute-2 nova_compute[232428]: 2025-11-29 08:16:16.183 232432 INFO nova.compute.manager [-] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] VM Stopped (Lifecycle Event)
Nov 29 08:16:16 compute-2 nova_compute[232428]: 2025-11-29 08:16:16.210 232432 DEBUG nova.compute.manager [None req-534bf8e2-b16c-421d-93fa-a6f9be9963f3 - - - - - -] [instance: 35f7492d-e1a0-4369-bf32-ba8fa094036a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:16 compute-2 nova_compute[232428]: 2025-11-29 08:16:16.222 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:16 compute-2 nova_compute[232428]: 2025-11-29 08:16:16.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.557 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.557 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.583 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.649 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.649 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.656 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.656 232432 INFO nova.compute.claims [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:16:17 compute-2 nova_compute[232428]: 2025-11-29 08:16:17.779 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:17 compute-2 ceph-mon[77138]: pgmap v2305: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 210 op/s
Nov 29 08:16:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/982415657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2378779951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:17.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3970869047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.268 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.276 232432 DEBUG nova.compute.provider_tree [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.291 232432 DEBUG nova.scheduler.client.report [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.316 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.317 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.383 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.401 232432 INFO nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.421 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.514 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.515 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.515 232432 INFO nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Creating image(s)
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.555 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.592 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.630 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.635 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.741 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.742 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.744 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.745 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.793 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:18 compute-2 nova_compute[232428]: 2025-11-29 08:16:18.799 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3970869047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.170 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.282 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] resizing rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:16:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.448 232432 DEBUG nova.objects.instance [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lazy-loading 'migration_context' on Instance uuid a8df16cb-c311-456a-b3d1-ab964b4e8bf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.467 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.468 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Ensure instance console log exists: /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.469 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.470 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.471 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.474 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.481 232432 WARNING nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.490 232432 DEBUG nova.virt.libvirt.host [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.491 232432 DEBUG nova.virt.libvirt.host [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.495 232432 DEBUG nova.virt.libvirt.host [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.495 232432 DEBUG nova.virt.libvirt.host [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.497 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.498 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.498 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.499 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.499 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.499 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.500 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.500 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.501 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.501 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.501 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.502 232432 DEBUG nova.virt.hardware [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:16:19 compute-2 nova_compute[232428]: 2025-11-29 08:16:19.507 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:19 compute-2 ceph-mon[77138]: pgmap v2306: 305 pgs: 305 active+clean; 386 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 29 08:16:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:19.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:16:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4025987123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.030 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.065 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.070 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:16:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/973100625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.535 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.538 232432 DEBUG nova.objects.instance [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lazy-loading 'pci_devices' on Instance uuid a8df16cb-c311-456a-b3d1-ab964b4e8bf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.560 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <uuid>a8df16cb-c311-456a-b3d1-ab964b4e8bf4</uuid>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <name>instance-00000079</name>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersAaction247Test-server-739373701</nova:name>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:16:19</nova:creationTime>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:user uuid="75aa7d805acd4cf29b76ffd4333a104f">tempest-ServersAaction247Test-530446929-project-member</nova:user>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <nova:project uuid="450ea210ad0e4364901c0d605c869a2c">tempest-ServersAaction247Test-530446929</nova:project>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <system>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="serial">a8df16cb-c311-456a-b3d1-ab964b4e8bf4</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="uuid">a8df16cb-c311-456a-b3d1-ab964b4e8bf4</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </system>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <os>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </os>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <features>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </features>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk">
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config">
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:16:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/console.log" append="off"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <video>
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </video>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:16:20 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:16:20 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:16:20 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:16:20 compute-2 nova_compute[232428]: </domain>
Nov 29 08:16:20 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.611 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.611 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.612 232432 INFO nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Using config drive
Nov 29 08:16:20 compute-2 nova_compute[232428]: 2025-11-29 08:16:20.639 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4025987123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/973100625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.224 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.727 232432 INFO nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Creating config drive at /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.734 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpujve_0ql execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:21.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:21 compute-2 ceph-mon[77138]: pgmap v2307: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 186 op/s
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.896 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpujve_0ql" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.929 232432 DEBUG nova.storage.rbd_utils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] rbd image a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:16:21 compute-2 nova_compute[232428]: 2025-11-29 08:16:21.934 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:22 compute-2 nova_compute[232428]: 2025-11-29 08:16:22.145 232432 DEBUG oslo_concurrency.processutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config a8df16cb-c311-456a-b3d1-ab964b4e8bf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:22 compute-2 nova_compute[232428]: 2025-11-29 08:16:22.147 232432 INFO nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Deleting local config drive /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4/disk.config because it was imported into RBD.
Nov 29 08:16:22 compute-2 systemd-machined[194747]: New machine qemu-55-instance-00000079.
Nov 29 08:16:22 compute-2 systemd[1]: Started Virtual Machine qemu-55-instance-00000079.
Nov 29 08:16:22 compute-2 podman[285261]: 2025-11-29 08:16:22.721854074 +0000 UTC m=+0.111148130 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:16:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.054 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404183.0537546, a8df16cb-c311-456a-b3d1-ab964b4e8bf4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.055 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] VM Resumed (Lifecycle Event)
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.061 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.062 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.068 232432 INFO nova.virt.libvirt.driver [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance spawned successfully.
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.069 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.079 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.083 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.095 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.095 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.096 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.096 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.097 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.097 232432 DEBUG nova.virt.libvirt.driver [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.106 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.106 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404183.0578916, a8df16cb-c311-456a-b3d1-ab964b4e8bf4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.107 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] VM Started (Lifecycle Event)
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.165 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.170 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.185 232432 INFO nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Took 4.67 seconds to spawn the instance on the hypervisor.
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.185 232432 DEBUG nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.198 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.259 232432 INFO nova.compute.manager [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Took 5.64 seconds to build instance.
Nov 29 08:16:23 compute-2 nova_compute[232428]: 2025-11-29 08:16:23.278 232432 DEBUG oslo_concurrency.lockutils [None req-acd4f16f-a953-4cc0-ba3e-adea7ec56c3d 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:23.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:23.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:23 compute-2 ceph-mon[77138]: pgmap v2308: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 150 op/s
Nov 29 08:16:23 compute-2 ceph-mon[77138]: osdmap e302: 3 total, 3 up, 3 in
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.288 232432 DEBUG nova.compute.manager [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.337 232432 INFO nova.compute.manager [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] instance snapshotting
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.339 232432 DEBUG nova.objects.instance [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lazy-loading 'flavor' on Instance uuid a8df16cb-c311-456a-b3d1-ab964b4e8bf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.514 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.515 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.517 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.517 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.518 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.520 232432 INFO nova.compute.manager [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Terminating instance
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.523 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "refresh_cache-a8df16cb-c311-456a-b3d1-ab964b4e8bf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.523 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquired lock "refresh_cache-a8df16cb-c311-456a-b3d1-ab964b4e8bf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.524 232432 DEBUG nova.network.neutron [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.662 232432 INFO nova.virt.libvirt.driver [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Beginning live snapshot process
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.699 232432 DEBUG nova.network.neutron [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.718 232432 DEBUG nova.compute.manager [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Nov 29 08:16:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:25 compute-2 ceph-mon[77138]: pgmap v2310: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 189 op/s
Nov 29 08:16:25 compute-2 nova_compute[232428]: 2025-11-29 08:16:25.996 232432 DEBUG nova.network.neutron [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.021 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Releasing lock "refresh_cache-a8df16cb-c311-456a-b3d1-ab964b4e8bf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.023 232432 DEBUG nova.compute.manager [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:16:26 compute-2 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 29 08:16:26 compute-2 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000079.scope: Consumed 3.948s CPU time.
Nov 29 08:16:26 compute-2 systemd-machined[194747]: Machine qemu-55-instance-00000079 terminated.
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.227 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.263 232432 INFO nova.virt.libvirt.driver [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance destroyed successfully.
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.263 232432 DEBUG nova.objects.instance [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lazy-loading 'resources' on Instance uuid a8df16cb-c311-456a-b3d1-ab964b4e8bf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.317 232432 DEBUG nova.compute.manager [None req-f060e5d6-c0c9-402e-99ee-59c9635688d1 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.694 232432 DEBUG oslo_concurrency.lockutils [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.696 232432 DEBUG oslo_concurrency.lockutils [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.696 232432 DEBUG nova.compute.manager [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.704 232432 DEBUG nova.compute.manager [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.706 232432 DEBUG nova.objects.instance [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.742 232432 DEBUG nova.virt.libvirt.driver [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.835 232432 INFO nova.virt.libvirt.driver [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Deleting instance files /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4_del
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.837 232432 INFO nova.virt.libvirt.driver [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Deletion of /var/lib/nova/instances/a8df16cb-c311-456a-b3d1-ab964b4e8bf4_del complete
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.903 232432 INFO nova.compute.manager [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.904 232432 DEBUG oslo.service.loopingcall [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.904 232432 DEBUG nova.compute.manager [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:16:26 compute-2 nova_compute[232428]: 2025-11-29 08:16:26.904 232432 DEBUG nova.network.neutron [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:16:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.053 232432 DEBUG nova.network.neutron [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.082 232432 DEBUG nova.network.neutron [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.101 232432 INFO nova.compute.manager [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Took 0.20 seconds to deallocate network for instance.
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.155 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.155 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.229 232432 DEBUG oslo_concurrency.processutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3996041412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.689 232432 DEBUG oslo_concurrency.processutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.700 232432 DEBUG nova.compute.provider_tree [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.726 232432 DEBUG nova.scheduler.client.report [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:27 compute-2 podman[285365]: 2025-11-29 08:16:27.740579686 +0000 UTC m=+0.126577123 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.757 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.800 232432 INFO nova.scheduler.client.report [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Deleted allocations for instance a8df16cb-c311-456a-b3d1-ab964b4e8bf4
Nov 29 08:16:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:27 compute-2 nova_compute[232428]: 2025-11-29 08:16:27.870 232432 DEBUG oslo_concurrency.lockutils [None req-a314c4c5-90ac-476a-a6a6-8501dd82ba66 75aa7d805acd4cf29b76ffd4333a104f 450ea210ad0e4364901c0d605c869a2c - - default default] Lock "a8df16cb-c311-456a-b3d1-ab964b4e8bf4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:27 compute-2 ceph-mon[77138]: pgmap v2311: 305 pgs: 305 active+clean; 459 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 8.4 MiB/s wr, 335 op/s
Nov 29 08:16:27 compute-2 ceph-mon[77138]: osdmap e303: 3 total, 3 up, 3 in
Nov 29 08:16:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3996041412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647978736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647978736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3647978736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3647978736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:29 compute-2 kernel: tapac65f355-29 (unregistering): left promiscuous mode
Nov 29 08:16:29 compute-2 NetworkManager[48993]: <info>  [1764404189.1366] device (tapac65f355-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:16:29 compute-2 ovn_controller[134375]: 2025-11-29T08:16:29Z|00569|binding|INFO|Releasing lport ac65f355-2912-480c-acab-c38c1ec48dc9 from this chassis (sb_readonly=0)
Nov 29 08:16:29 compute-2 ovn_controller[134375]: 2025-11-29T08:16:29Z|00570|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 down in Southbound
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:29 compute-2 ovn_controller[134375]: 2025-11-29T08:16:29Z|00571|binding|INFO|Removing iface tapac65f355-29 ovn-installed in OVS
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.165 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.169 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.171 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.173 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[83647454-8343-4d0e-8456-5852f0a33186]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.174 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.181 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:29 compute-2 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 08:16:29 compute-2 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000077.scope: Consumed 16.098s CPU time.
Nov 29 08:16:29 compute-2 systemd-machined[194747]: Machine qemu-54-instance-00000077 terminated.
Nov 29 08:16:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:16:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [NOTICE]   (284252) : haproxy version is 2.8.14-c23fe91
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [NOTICE]   (284252) : path to executable is /usr/sbin/haproxy
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [WARNING]  (284252) : Exiting Master process...
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [WARNING]  (284252) : Exiting Master process...
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [ALERT]    (284252) : Current worker (284254) exited with code 143 (Terminated)
Nov 29 08:16:29 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[284248]: [WARNING]  (284252) : All workers exited. Exiting... (0)
Nov 29 08:16:29 compute-2 systemd[1]: libpod-4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254.scope: Deactivated successfully.
Nov 29 08:16:29 compute-2 podman[285414]: 2025-11-29 08:16:29.414911527 +0000 UTC m=+0.080682967 container died 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:16:29 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254-userdata-shm.mount: Deactivated successfully.
Nov 29 08:16:29 compute-2 systemd[1]: var-lib-containers-storage-overlay-618d37df1aa15735ca1e9a69c9e4db5541dfc12b4cbfa289eaa42490be2d95f1-merged.mount: Deactivated successfully.
Nov 29 08:16:29 compute-2 podman[285414]: 2025-11-29 08:16:29.47602309 +0000 UTC m=+0.141794520 container cleanup 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:16:29 compute-2 systemd[1]: libpod-conmon-4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254.scope: Deactivated successfully.
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.492 232432 DEBUG nova.compute.manager [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.494 232432 DEBUG oslo_concurrency.lockutils [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.495 232432 DEBUG oslo_concurrency.lockutils [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.496 232432 DEBUG oslo_concurrency.lockutils [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.496 232432 DEBUG nova.compute.manager [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.497 232432 WARNING nova.compute.manager [req-893ef48f-f8c4-4cc5-b33d-f8c30ce97ee6 req-b671d281-6e36-448d-b59c-aa5a5d2c2ad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state active and task_state powering-off.
Nov 29 08:16:29 compute-2 podman[285453]: 2025-11-29 08:16:29.578598051 +0000 UTC m=+0.067927827 container remove 4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.586 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[92105587-9327-4ad9-9a2d-32771bee5118]: (4, ('Sat Nov 29 08:16:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254)\n4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254\nSat Nov 29 08:16:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254)\n4fa43f4cf436997690a363515caaf530ef39fa7462c2e7a370e67d2c4e55e254\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.588 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d629ee49-91d9-4d13-9c5b-57021e13ccbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.590 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.593 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:29 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.619 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.623 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[05010a08-8188-45be-a061-4eadf228fc8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.641 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1427f843-6116-42a7-a8dc-217d048b5c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.642 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6cafc5-575c-4040-aec6-90e34b7ea0cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.669 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[03ae7d7c-9a88-4481-8b24-012b05b6470d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711667, 'reachable_time': 16817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285473, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.672 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:16:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:29.672 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[2de29817-2b5e-49dc-8b75-c1011095e736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:29 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.786 232432 INFO nova.virt.libvirt.driver [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance shutdown successfully after 3 seconds.
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.795 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance destroyed successfully.
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.796 232432 DEBUG nova.objects.instance [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.810 232432 DEBUG nova.compute.manager [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:29 compute-2 nova_compute[232428]: 2025-11-29 08:16:29.862 232432 DEBUG oslo_concurrency.lockutils [None req-4a1a03d6-bfe4-44cd-aa66-fec313dd4b70 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:29 compute-2 ceph-mon[77138]: pgmap v2313: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 375 op/s
Nov 29 08:16:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2360559406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:31.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.556 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.580 232432 DEBUG oslo_concurrency.lockutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.581 232432 DEBUG oslo_concurrency.lockutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.581 232432 DEBUG nova.network.neutron [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.582 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'info_cache' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.789 232432 DEBUG nova.compute.manager [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.790 232432 DEBUG oslo_concurrency.lockutils [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.790 232432 DEBUG oslo_concurrency.lockutils [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.791 232432 DEBUG oslo_concurrency.lockutils [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.791 232432 DEBUG nova.compute.manager [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:31 compute-2 nova_compute[232428]: 2025-11-29 08:16:31.792 232432 WARNING nova.compute.manager [req-46cd278f-1b93-47c0-aaff-90ce041dd26a req-eb9bc9e5-7b4a-410c-9ba7-4dec87cf6463 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state stopped and task_state powering-on.
Nov 29 08:16:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:31.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:32 compute-2 ceph-mon[77138]: pgmap v2314: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.7 MiB/s wr, 471 op/s
Nov 29 08:16:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/221074366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4252430010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 08:16:32 compute-2 sudo[285475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:32 compute-2 sudo[285475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:32 compute-2 sudo[285475]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:32 compute-2 sudo[285500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:32 compute-2 sudo[285500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:32 compute-2 sudo[285500]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:32 compute-2 nova_compute[232428]: 2025-11-29 08:16:32.990 232432 DEBUG nova.network.neutron [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.016 232432 DEBUG oslo_concurrency.lockutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.043 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance destroyed successfully.
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.043 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.056 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.087 232432 DEBUG nova.virt.libvirt.vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.088 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.089 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.089 232432 DEBUG os_vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.090 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.091 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac65f355-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.092 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.097 232432 INFO os_vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29')
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.104 232432 DEBUG nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start _get_guest_xml network_info=[{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.109 232432 WARNING nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.115 232432 DEBUG nova.virt.libvirt.host [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.116 232432 DEBUG nova.virt.libvirt.host [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.120 232432 DEBUG nova.virt.libvirt.host [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.120 232432 DEBUG nova.virt.libvirt.host [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.121 232432 DEBUG nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.121 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.122 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.122 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.122 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.123 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.123 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.123 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.123 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.124 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.124 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.124 232432 DEBUG nova.virt.hardware [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.125 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.139 232432 DEBUG oslo_concurrency.processutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:33 compute-2 ceph-mon[77138]: pgmap v2315: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 400 op/s
Nov 29 08:16:33 compute-2 ceph-mon[77138]: osdmap e304: 3 total, 3 up, 3 in
Nov 29 08:16:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3846744555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:33.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:16:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2678725267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.611 232432 DEBUG oslo_concurrency.processutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:33 compute-2 nova_compute[232428]: 2025-11-29 08:16:33.677 232432 DEBUG oslo_concurrency.processutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120942918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120942918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:16:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1998022600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.238 232432 DEBUG oslo_concurrency.processutils [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.240 232432 DEBUG nova.virt.libvirt.vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.240 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.241 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.243 232432 DEBUG nova.objects.instance [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.262 232432 DEBUG nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <uuid>1d2f015e-9584-47c8-a0c6-76e84d368cb6</uuid>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <name>instance-00000077</name>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerActionsTestJSON-server-1127187577</nova:name>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:16:33</nova:creationTime>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:user uuid="661b6600a32b40d8a48db16cb71c7e75">tempest-ServerActionsTestJSON-1048555325-project-member</nova:user>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:project uuid="d72b5448be0e463f80dca118feb42d3b">tempest-ServerActionsTestJSON-1048555325</nova:project>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <nova:port uuid="ac65f355-2912-480c-acab-c38c1ec48dc9">
Nov 29 08:16:34 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <system>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="serial">1d2f015e-9584-47c8-a0c6-76e84d368cb6</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="uuid">1d2f015e-9584-47c8-a0c6-76e84d368cb6</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </system>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <os>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </os>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <features>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </features>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk">
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/1d2f015e-9584-47c8-a0c6-76e84d368cb6_disk.config">
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:16:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:91:c5:e5"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <target dev="tapac65f355-29"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6/console.log" append="off"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <video>
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </video>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:16:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:16:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:16:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:16:34 compute-2 nova_compute[232428]: </domain>
Nov 29 08:16:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.264 232432 DEBUG nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.264 232432 DEBUG nova.virt.libvirt.driver [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.268 232432 DEBUG nova.virt.libvirt.vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.268 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.268 232432 DEBUG nova.network.os_vif_util [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.269 232432 DEBUG os_vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.270 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.271 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.274 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.274 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac65f355-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.274 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac65f355-29, col_values=(('external_ids', {'iface-id': 'ac65f355-2912-480c-acab-c38c1ec48dc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:c5:e5', 'vm-uuid': '1d2f015e-9584-47c8-a0c6-76e84d368cb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.276 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.2778] manager: (tapac65f355-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.279 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.287 232432 INFO os_vif [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29')
Nov 29 08:16:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2678725267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1120942918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1120942918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1998022600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:34 compute-2 kernel: tapac65f355-29: entered promiscuous mode
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.4158] manager: (tapac65f355-29): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 08:16:34 compute-2 ovn_controller[134375]: 2025-11-29T08:16:34Z|00572|binding|INFO|Claiming lport ac65f355-2912-480c-acab-c38c1ec48dc9 for this chassis.
Nov 29 08:16:34 compute-2 ovn_controller[134375]: 2025-11-29T08:16:34Z|00573|binding|INFO|ac65f355-2912-480c-acab-c38c1ec48dc9: Claiming fa:16:3e:91:c5:e5 10.100.0.4
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.418 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.425 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.426 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.427 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.445 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bef2b347-7d36-499f-8b5f-8656bf699a2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.446 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.449 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.449 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[51603a0a-9d8a-44aa-92ba-69914ee85435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.450 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2423fd-b839-4d88-970c-ae94d9c83e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_controller[134375]: 2025-11-29T08:16:34Z|00574|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 ovn-installed in OVS
Nov 29 08:16:34 compute-2 ovn_controller[134375]: 2025-11-29T08:16:34Z|00575|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 up in Southbound
Nov 29 08:16:34 compute-2 systemd-udevd[285605]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.471 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcdef66-341f-496c-9512-cd658eba0a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 systemd-machined[194747]: New machine qemu-56-instance-00000077.
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.4845] device (tapac65f355-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.4861] device (tapac65f355-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:16:34 compute-2 systemd[1]: Started Virtual Machine qemu-56-instance-00000077.
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.497 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[76172025-2161-4170-b17a-be7ad04aadb3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.542 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca24513-597b-42a7-8c06-169b15de753c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.549 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5e6e6d-ba2d-4021-aa32-caef532baa36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.5515] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.586 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[76513b16-74a9-41fd-9071-ad8b4f854737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.589 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef50b4c-7e31-41eb-bd38-693ea950a7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.6199] device (tap988c10fa-90): carrier: link connected
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.625 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[efd87ffb-2ae1-4c59-a9c1-262459812027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.651 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4520b441-26d9-4654-99d3-cdc434701a47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716088, 'reachable_time': 33406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285636, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.676 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3130314d-e56e-4710-9212-c62bb3bc80f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716088, 'tstamp': 716088}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285637, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.703 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a8451a6b-8161-40b0-a081-1fd8dcf8441f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716088, 'reachable_time': 33406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285638, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.752 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b2af0fd9-f60a-4eb4-9f99-673d558f1980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.839 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[70cde1b8-81c6-4150-8041-df14919db7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.842 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.842 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.843 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 NetworkManager[48993]: <info>  [1764404194.8477] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 08:16:34 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.851 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:34 compute-2 ovn_controller[134375]: 2025-11-29T08:16:34Z|00576|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.875 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.876 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.878 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[39516083-8534-43a8-af4e-3af9c068b0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.879 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:16:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:34.881 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.924 232432 DEBUG nova.compute.manager [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.924 232432 DEBUG oslo_concurrency.lockutils [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.925 232432 DEBUG oslo_concurrency.lockutils [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.925 232432 DEBUG oslo_concurrency.lockutils [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.925 232432 DEBUG nova.compute.manager [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:34 compute-2 nova_compute[232428]: 2025-11-29 08:16:34.925 232432 WARNING nova.compute.manager [req-c72f41df-0932-4a25-983b-ae4f18ed705b req-f85670ef-6de2-43c1-a9d2-9f6a22ad75fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state stopped and task_state powering-on.
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.284 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 1d2f015e-9584-47c8-a0c6-76e84d368cb6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.286 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404195.2840915, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.287 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Resumed (Lifecycle Event)
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.290 232432 DEBUG nova.compute.manager [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.294 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance rebooted successfully.
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.295 232432 DEBUG nova.compute.manager [None req-77250b7b-2746-4730-9535-e3c2bdbce37b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.322 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.326 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:35 compute-2 podman[285712]: 2025-11-29 08:16:35.327951775 +0000 UTC m=+0.070061814 container create b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:16:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:35.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:35 compute-2 systemd[1]: Started libpod-conmon-b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc.scope.
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.367 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.369 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404195.2857783, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.369 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Started (Lifecycle Event)
Nov 29 08:16:35 compute-2 ceph-mon[77138]: pgmap v2317: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Nov 29 08:16:35 compute-2 podman[285712]: 2025-11-29 08:16:35.299404141 +0000 UTC m=+0.041514220 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.398 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:35 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.403 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:35 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c783c851820532e4750d13f41430d4c92f84ce3face3c453f27e7a0b162a28da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:35 compute-2 podman[285712]: 2025-11-29 08:16:35.418401557 +0000 UTC m=+0.160511616 container init b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:16:35 compute-2 podman[285712]: 2025-11-29 08:16:35.426074397 +0000 UTC m=+0.168184436 container start b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:16:35 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [NOTICE]   (285731) : New worker (285733) forked
Nov 29 08:16:35 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [NOTICE]   (285731) : Loading success.
Nov 29 08:16:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:35 compute-2 nova_compute[232428]: 2025-11-29 08:16:35.772 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:36 compute-2 nova_compute[232428]: 2025-11-29 08:16:36.364 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.330 232432 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.331 232432 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.332 232432 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.332 232432 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.333 232432 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.333 232432 WARNING nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state active and task_state None.
Nov 29 08:16:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:37.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:37 compute-2 ceph-mon[77138]: pgmap v2318: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 262 op/s
Nov 29 08:16:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:37 compute-2 nova_compute[232428]: 2025-11-29 08:16:37.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:37.918 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:37.921 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:16:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4158594648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1182341685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.278 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:39.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:39 compute-2 ceph-mon[77138]: pgmap v2319: 305 pgs: 305 active+clean; 299 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 265 op/s
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.645 232432 DEBUG nova.objects.instance [None req-fca55b98-46b9-4f88-af71-63450400ad1b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.680 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404199.6797545, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.680 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Paused (Lifecycle Event)
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.705 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.708 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:39 compute-2 podman[285744]: 2025-11-29 08:16:39.717546834 +0000 UTC m=+0.119026646 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:16:39 compute-2 nova_compute[232428]: 2025-11-29 08:16:39.730 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 08:16:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:40 compute-2 kernel: tapac65f355-29 (unregistering): left promiscuous mode
Nov 29 08:16:40 compute-2 NetworkManager[48993]: <info>  [1764404200.1572] device (tapac65f355-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:16:40 compute-2 ovn_controller[134375]: 2025-11-29T08:16:40Z|00577|binding|INFO|Releasing lport ac65f355-2912-480c-acab-c38c1ec48dc9 from this chassis (sb_readonly=0)
Nov 29 08:16:40 compute-2 ovn_controller[134375]: 2025-11-29T08:16:40Z|00578|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 down in Southbound
Nov 29 08:16:40 compute-2 ovn_controller[134375]: 2025-11-29T08:16:40Z|00579|binding|INFO|Removing iface tapac65f355-29 ovn-installed in OVS
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.175 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.185 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.187 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.190 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.191 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de8afa7d-7162-434d-ac48-a661eab829e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.193 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.222 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 08:16:40 compute-2 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Consumed 5.616s CPU time.
Nov 29 08:16:40 compute-2 systemd-machined[194747]: Machine qemu-56-instance-00000077 terminated.
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.340 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.353 232432 DEBUG nova.compute.manager [None req-fca55b98-46b9-4f88-af71-63450400ad1b 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [NOTICE]   (285731) : haproxy version is 2.8.14-c23fe91
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [NOTICE]   (285731) : path to executable is /usr/sbin/haproxy
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [WARNING]  (285731) : Exiting Master process...
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [WARNING]  (285731) : Exiting Master process...
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [ALERT]    (285731) : Current worker (285733) exited with code 143 (Terminated)
Nov 29 08:16:40 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285727]: [WARNING]  (285731) : All workers exited. Exiting... (0)
Nov 29 08:16:40 compute-2 systemd[1]: libpod-b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc.scope: Deactivated successfully.
Nov 29 08:16:40 compute-2 conmon[285727]: conmon b4d97ecd30ce556f6353 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc.scope/container/memory.events
Nov 29 08:16:40 compute-2 podman[285799]: 2025-11-29 08:16:40.397169949 +0000 UTC m=+0.068764263 container died b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:16:40 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc-userdata-shm.mount: Deactivated successfully.
Nov 29 08:16:40 compute-2 systemd[1]: var-lib-containers-storage-overlay-c783c851820532e4750d13f41430d4c92f84ce3face3c453f27e7a0b162a28da-merged.mount: Deactivated successfully.
Nov 29 08:16:40 compute-2 podman[285799]: 2025-11-29 08:16:40.445659247 +0000 UTC m=+0.117253591 container cleanup b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.450 232432 DEBUG nova.compute.manager [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.451 232432 DEBUG oslo_concurrency.lockutils [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.452 232432 DEBUG oslo_concurrency.lockutils [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.452 232432 DEBUG oslo_concurrency.lockutils [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.453 232432 DEBUG nova.compute.manager [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.453 232432 WARNING nova.compute.manager [req-f126aa3f-faa4-4df0-9c87-c97090a75a7f req-cf2c051f-75d1-4376-8027-0f15b67a2065 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state suspended and task_state None.
Nov 29 08:16:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:40 compute-2 systemd[1]: libpod-conmon-b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc.scope: Deactivated successfully.
Nov 29 08:16:40 compute-2 podman[285838]: 2025-11-29 08:16:40.545562924 +0000 UTC m=+0.054912520 container remove b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.557 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[76bb2c7e-d8c2-4bc5-a7a4-77fd3a31b96a]: (4, ('Sat Nov 29 08:16:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc)\nb4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc\nSat Nov 29 08:16:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (b4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc)\nb4d97ecd30ce556f63530630883513f02e463b9ee1b765dd97e06033a64d53bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.559 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aea576-851d-40a9-b563-20524917fd8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.560 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:16:40 compute-2 nova_compute[232428]: 2025-11-29 08:16:40.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.593 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aa513ca3-172a-46de-81e0-7f8e025da00c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.612 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[449ef248-772b-4877-82f0-6ef163b53c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.614 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc05b8cf-1983-48b5-af1d-d1bbddb512fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.636 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2ac8a2-5b3a-40b5-ad81-695c3355753d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716080, 'reachable_time': 18895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285859, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.640 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:16:40 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:16:40 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:40.640 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ebcc9ab7-732d-4bbf-804d-c90ed6ef5ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:41 compute-2 nova_compute[232428]: 2025-11-29 08:16:41.047 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:41 compute-2 nova_compute[232428]: 2025-11-29 08:16:41.260 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404186.2579312, a8df16cb-c311-456a-b3d1-ab964b4e8bf4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:41 compute-2 nova_compute[232428]: 2025-11-29 08:16:41.261 232432 INFO nova.compute.manager [-] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] VM Stopped (Lifecycle Event)
Nov 29 08:16:41 compute-2 nova_compute[232428]: 2025-11-29 08:16:41.290 232432 DEBUG nova.compute.manager [None req-a454c015-c2ff-43b6-ba29-9a9b0fa96c9e - - - - - -] [instance: a8df16cb-c311-456a-b3d1-ab964b4e8bf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:41 compute-2 nova_compute[232428]: 2025-11-29 08:16:41.368 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:41.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:41 compute-2 ceph-mon[77138]: pgmap v2320: 305 pgs: 305 active+clean; 269 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 301 op/s
Nov 29 08:16:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.010 232432 INFO nova.compute.manager [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Resuming
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.012 232432 DEBUG nova.objects.instance [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'flavor' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.065 232432 DEBUG oslo_concurrency.lockutils [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.066 232432 DEBUG oslo_concurrency.lockutils [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquired lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.067 232432 DEBUG nova.network.neutron [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.530 232432 DEBUG nova.compute.manager [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.531 232432 DEBUG oslo_concurrency.lockutils [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.531 232432 DEBUG oslo_concurrency.lockutils [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.532 232432 DEBUG oslo_concurrency.lockutils [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.532 232432 DEBUG nova.compute.manager [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:42 compute-2 nova_compute[232428]: 2025-11-29 08:16:42.533 232432 WARNING nova.compute.manager [req-482cd2e9-fe4a-42ab-82d7-6cf55bc25db4 req-d60bfe65-de32-469a-93e4-8ea575c47191 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state suspended and task_state resuming.
Nov 29 08:16:43 compute-2 nova_compute[232428]: 2025-11-29 08:16:43.178 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:43 compute-2 nova_compute[232428]: 2025-11-29 08:16:43.430 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:43 compute-2 ceph-mon[77138]: pgmap v2321: 305 pgs: 305 active+clean; 269 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 301 op/s
Nov 29 08:16:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/725989067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1791800583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:43 compute-2 nova_compute[232428]: 2025-11-29 08:16:43.992 232432 DEBUG nova.network.neutron [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [{"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.014 232432 DEBUG oslo_concurrency.lockutils [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Releasing lock "refresh_cache-1d2f015e-9584-47c8-a0c6-76e84d368cb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.021 232432 DEBUG nova.virt.libvirt.vif [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.022 232432 DEBUG nova.network.os_vif_util [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.023 232432 DEBUG nova.network.os_vif_util [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.023 232432 DEBUG os_vif [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.024 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.025 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.025 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.029 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.029 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac65f355-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.030 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac65f355-29, col_values=(('external_ids', {'iface-id': 'ac65f355-2912-480c-acab-c38c1ec48dc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:c5:e5', 'vm-uuid': '1d2f015e-9584-47c8-a0c6-76e84d368cb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.030 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.031 232432 INFO os_vif [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29')
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.052 232432 DEBUG nova.objects.instance [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:44 compute-2 kernel: tapac65f355-29: entered promiscuous mode
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.1376] manager: (tapac65f355-29): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 08:16:44 compute-2 ovn_controller[134375]: 2025-11-29T08:16:44Z|00580|binding|INFO|Claiming lport ac65f355-2912-480c-acab-c38c1ec48dc9 for this chassis.
Nov 29 08:16:44 compute-2 ovn_controller[134375]: 2025-11-29T08:16:44Z|00581|binding|INFO|ac65f355-2912-480c-acab-c38c1ec48dc9: Claiming fa:16:3e:91:c5:e5 10.100.0.4
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.147 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.149 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.1561] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.1566] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.162 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.163 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af bound to our chassis
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.164 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:44 compute-2 systemd-udevd[285877]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.176 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f00c04-afca-4d9a-9c42-149a9571c4ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.177 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap988c10fa-91 in ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.179 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap988c10fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.179 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fb51ef-a909-4bd0-8443-d6c00ce89f35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.180 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb2736e-d729-4741-aa30-cadd0e1e8817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 systemd-machined[194747]: New machine qemu-57-instance-00000077.
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.1939] device (tapac65f355-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.1957] device (tapac65f355-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.196 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[31cca362-a2a3-446f-8636-418afc77747d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 systemd[1]: Started Virtual Machine qemu-57-instance-00000077.
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.222 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de55280a-49c5-41a8-bae3-a593cff141bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.256 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2f42d593-c3e2-4809-abdd-315a8654ea26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 systemd-udevd[285881]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.2671] manager: (tap988c10fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.265 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e7d106-4e8a-4727-b704-7be170a274dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.325 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bc39f08a-d8ae-4bdb-bc96-bb01b765c38d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.330 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3313472e-86a6-4d49-9025-59dc6beb9250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 ovn_controller[134375]: 2025-11-29T08:16:44Z|00582|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 ovn-installed in OVS
Nov 29 08:16:44 compute-2 ovn_controller[134375]: 2025-11-29T08:16:44Z|00583|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 up in Southbound
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.3765] device (tap988c10fa-90): carrier: link connected
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.388 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0d5442-40e1-4d64-9bf5-cffcf59f14c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.415 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[549ecf11-7b5e-473a-99c7-6a04fa4398e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717064, 'reachable_time': 39572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285910, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.436 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2687f7a8-3c00-4ea6-9571-8fd3d038f715]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:abcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717064, 'tstamp': 717064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285911, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.455 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2d717a88-0a01-473f-9e59-d594f3c0ad1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap988c10fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ab:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717064, 'reachable_time': 39572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285912, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.492 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d6162af3-f546-48ba-a583-dbd1da443ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.602 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4df7b29f-38f1-457c-a364-dbff1b8eaba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.604 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.604 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.604 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap988c10fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 NetworkManager[48993]: <info>  [1764404204.6072] manager: (tap988c10fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 08:16:44 compute-2 kernel: tap988c10fa-90: entered promiscuous mode
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.608 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.610 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap988c10fa-90, col_values=(('external_ids', {'iface-id': '616047a9-a4f0-46e2-96a4-3c60e050f64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 ovn_controller[134375]: 2025-11-29T08:16:44Z|00584|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.616 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.618 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf7387a-1344-47f5-afe3-7a9970cd5c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.620 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/988c10fa-9fc6-4223-9f44-61d8377a22af.pid.haproxy
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 988c10fa-9fc6-4223-9f44-61d8377a22af
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:16:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:44.621 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'env', 'PROCESS_TAG=haproxy-988c10fa-9fc6-4223-9f44-61d8377a22af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/988c10fa-9fc6-4223-9f44-61d8377a22af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.738 232432 DEBUG nova.compute.manager [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.739 232432 DEBUG oslo_concurrency.lockutils [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.739 232432 DEBUG oslo_concurrency.lockutils [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.740 232432 DEBUG oslo_concurrency.lockutils [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.740 232432 DEBUG nova.compute.manager [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.740 232432 WARNING nova.compute.manager [req-bace06f2-5716-47bf-b667-c218d8daa0e3 req-de92bdb5-00a1-43b6-81e1-14489df5256b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state suspended and task_state resuming.
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.848 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 1d2f015e-9584-47c8-a0c6-76e84d368cb6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.849 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404204.848379, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.849 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Started (Lifecycle Event)
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.867 232432 DEBUG nova.compute.manager [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.868 232432 DEBUG nova.objects.instance [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.871 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.875 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.902 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.902 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404204.8531137, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.903 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Resumed (Lifecycle Event)
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.905 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance running successfully.
Nov 29 08:16:44 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.910 232432 DEBUG nova.virt.libvirt.guest [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.910 232432 DEBUG nova.compute.manager [None req-49b73616-2511-4daa-8e2d-54719bc37d62 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.938 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.942 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:16:44 compute-2 nova_compute[232428]: 2025-11-29 08:16:44.970 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 08:16:45 compute-2 podman[285984]: 2025-11-29 08:16:45.065144593 +0000 UTC m=+0.074925206 container create 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:16:45 compute-2 systemd[1]: Started libpod-conmon-1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955.scope.
Nov 29 08:16:45 compute-2 podman[285984]: 2025-11-29 08:16:45.033719479 +0000 UTC m=+0.043500172 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:16:45 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:16:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a0152372495d226dc73e8f7095de67fbc966dcf7b9338de080077c3924a52f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:45 compute-2 podman[285984]: 2025-11-29 08:16:45.158176035 +0000 UTC m=+0.167956648 container init 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:16:45 compute-2 podman[285984]: 2025-11-29 08:16:45.167134066 +0000 UTC m=+0.176914679 container start 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:16:45 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [NOTICE]   (286003) : New worker (286005) forked
Nov 29 08:16:45 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [NOTICE]   (286003) : Loading success.
Nov 29 08:16:45 compute-2 nova_compute[232428]: 2025-11-29 08:16:45.217 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:45 compute-2 ovn_controller[134375]: 2025-11-29T08:16:45Z|00585|binding|INFO|Releasing lport 616047a9-a4f0-46e2-96a4-3c60e050f64e from this chassis (sb_readonly=0)
Nov 29 08:16:45 compute-2 nova_compute[232428]: 2025-11-29 08:16:45.372 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:45 compute-2 ceph-mon[77138]: pgmap v2322: 305 pgs: 305 active+clean; 293 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 252 op/s
Nov 29 08:16:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:45.923 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.372 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.897 232432 DEBUG nova.compute.manager [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.897 232432 DEBUG oslo_concurrency.lockutils [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.898 232432 DEBUG oslo_concurrency.lockutils [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.898 232432 DEBUG oslo_concurrency.lockutils [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.898 232432 DEBUG nova.compute.manager [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:46 compute-2 nova_compute[232428]: 2025-11-29 08:16:46.899 232432 WARNING nova.compute.manager [req-a8a462e6-bca7-454d-afe6-2e9f5daea4a4 req-ae5c74fb-f5bc-4adc-be00-6db18e423930 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state active and task_state None.
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.024 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.026 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.027 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.028 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.028 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.031 232432 INFO nova.compute.manager [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Terminating instance
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.033 232432 DEBUG nova.compute.manager [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:16:47 compute-2 kernel: tapac65f355-29 (unregistering): left promiscuous mode
Nov 29 08:16:47 compute-2 NetworkManager[48993]: <info>  [1764404207.0858] device (tapac65f355-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:16:47 compute-2 ovn_controller[134375]: 2025-11-29T08:16:47Z|00586|binding|INFO|Releasing lport ac65f355-2912-480c-acab-c38c1ec48dc9 from this chassis (sb_readonly=0)
Nov 29 08:16:47 compute-2 ovn_controller[134375]: 2025-11-29T08:16:47Z|00587|binding|INFO|Setting lport ac65f355-2912-480c-acab-c38c1ec48dc9 down in Southbound
Nov 29 08:16:47 compute-2 ovn_controller[134375]: 2025-11-29T08:16:47Z|00588|binding|INFO|Removing iface tapac65f355-29 ovn-installed in OVS
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.098 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.101 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.105 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:c5:e5 10.100.0.4'], port_security=['fa:16:3e:91:c5:e5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1d2f015e-9584-47c8-a0c6-76e84d368cb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-988c10fa-9fc6-4223-9f44-61d8377a22af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd72b5448be0e463f80dca118feb42d3b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '63b8ac19-9624-40a7-8a40-9afb5a9b2cf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5545b159-de19-418c-a240-0de8f58d033a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ac65f355-2912-480c-acab-c38c1ec48dc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.107 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ac65f355-2912-480c-acab-c38c1ec48dc9 in datapath 988c10fa-9fc6-4223-9f44-61d8377a22af unbound from our chassis
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.108 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 988c10fa-9fc6-4223-9f44-61d8377a22af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.109 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[777b0065-45c3-42db-8a86-e6fae28bf903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.110 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af namespace which is not needed anymore
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 08:16:47 compute-2 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000077.scope: Consumed 2.796s CPU time.
Nov 29 08:16:47 compute-2 systemd-machined[194747]: Machine qemu-57-instance-00000077 terminated.
Nov 29 08:16:47 compute-2 NetworkManager[48993]: <info>  [1764404207.2588] manager: (tapac65f355-29): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.282 232432 INFO nova.virt.libvirt.driver [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Instance destroyed successfully.
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.284 232432 DEBUG nova.objects.instance [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lazy-loading 'resources' on Instance uuid 1d2f015e-9584-47c8-a0c6-76e84d368cb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:47 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [NOTICE]   (286003) : haproxy version is 2.8.14-c23fe91
Nov 29 08:16:47 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [NOTICE]   (286003) : path to executable is /usr/sbin/haproxy
Nov 29 08:16:47 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [WARNING]  (286003) : Exiting Master process...
Nov 29 08:16:47 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [ALERT]    (286003) : Current worker (286005) exited with code 143 (Terminated)
Nov 29 08:16:47 compute-2 neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af[285999]: [WARNING]  (286003) : All workers exited. Exiting... (0)
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.299 232432 DEBUG nova.virt.libvirt.vif [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1127187577',display_name='tempest-ServerActionsTestJSON-server-1127187577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1127187577',id=119,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwqstfYrXYbun0uuSbgvpejtigGk9jpx4vQI9bu1o7uC+URtgiNcyuBinKD6mO5jycqURG1Epzx+LnZzHA2nUgD7RuO+yTkrgCzw26FU7QiXQZ4vWVG1Su7LY9RFrUQDQ==',key_name='tempest-keypair-1834616561',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d72b5448be0e463f80dca118feb42d3b',ramdisk_id='',reservation_id='r-amlj4lbl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1048555325',owner_user_name='tempest-ServerActionsTestJSON-1048555325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='661b6600a32b40d8a48db16cb71c7e75',uuid=1d2f015e-9584-47c8-a0c6-76e84d368cb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.300 232432 DEBUG nova.network.os_vif_util [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converting VIF {"id": "ac65f355-2912-480c-acab-c38c1ec48dc9", "address": "fa:16:3e:91:c5:e5", "network": {"id": "988c10fa-9fc6-4223-9f44-61d8377a22af", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2021825713-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d72b5448be0e463f80dca118feb42d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac65f355-29", "ovs_interfaceid": "ac65f355-2912-480c-acab-c38c1ec48dc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:47 compute-2 systemd[1]: libpod-1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955.scope: Deactivated successfully.
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.302 232432 DEBUG nova.network.os_vif_util [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.303 232432 DEBUG os_vif [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.308 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac65f355-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:47 compute-2 podman[286035]: 2025-11-29 08:16:47.308222009 +0000 UTC m=+0.069954401 container died 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.316 232432 INFO os_vif [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:c5:e5,bridge_name='br-int',has_traffic_filtering=True,id=ac65f355-2912-480c-acab-c38c1ec48dc9,network=Network(988c10fa-9fc6-4223-9f44-61d8377a22af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac65f355-29')
Nov 29 08:16:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955-userdata-shm.mount: Deactivated successfully.
Nov 29 08:16:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-6a0152372495d226dc73e8f7095de67fbc966dcf7b9338de080077c3924a52f7-merged.mount: Deactivated successfully.
Nov 29 08:16:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:47.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:47 compute-2 podman[286035]: 2025-11-29 08:16:47.378471408 +0000 UTC m=+0.140203810 container cleanup 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:16:47 compute-2 systemd[1]: libpod-conmon-1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955.scope: Deactivated successfully.
Nov 29 08:16:47 compute-2 podman[286096]: 2025-11-29 08:16:47.456248942 +0000 UTC m=+0.050302495 container remove 1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb4ac4a-f692-4e43-b4a0-84baa9972b58]: (4, ('Sat Nov 29 08:16:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955)\n1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955\nSat Nov 29 08:16:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af (1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955)\n1fc12872354a8286420838b08dafa1f65c81db1774efa57d676c87cc22585955\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09bfd602-a9d5-47af-aa6e-efa3f6b6591a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap988c10fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.474 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 kernel: tap988c10fa-90: left promiscuous mode
Nov 29 08:16:47 compute-2 ceph-mon[77138]: pgmap v2323: 305 pgs: 305 active+clean; 293 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 248 op/s
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.494 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[709cadd9-41ef-4151-8a9f-15d694ff5ee3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.507 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc3d8c8-9875-43f2-a2a1-892e4386f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.509 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2402c7c0-5e9a-453d-ade7-5e2340832026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.529 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6165de34-a920-441e-89d3-efea628595fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717051, 'reachable_time': 20585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286111, 'error': None, 'target': 'ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.532 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-988c10fa-9fc6-4223-9f44-61d8377a22af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:16:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:16:47.532 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[0552deee-e371-4b73-a61c-941f6c559416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:47 compute-2 systemd[1]: run-netns-ovnmeta\x2d988c10fa\x2d9fc6\x2d4223\x2d9f44\x2d61d8377a22af.mount: Deactivated successfully.
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.748 232432 INFO nova.virt.libvirt.driver [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Deleting instance files /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6_del
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.749 232432 INFO nova.virt.libvirt.driver [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Deletion of /var/lib/nova/instances/1d2f015e-9584-47c8-a0c6-76e84d368cb6_del complete
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.817 232432 INFO nova.compute.manager [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.818 232432 DEBUG oslo.service.loopingcall [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.819 232432 DEBUG nova.compute.manager [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:16:47 compute-2 nova_compute[232428]: 2025-11-29 08:16:47.819 232432 DEBUG nova.network.neutron [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:16:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:48 compute-2 nova_compute[232428]: 2025-11-29 08:16:48.889 232432 DEBUG nova.network.neutron [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:48 compute-2 nova_compute[232428]: 2025-11-29 08:16:48.922 232432 INFO nova.compute.manager [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Took 1.10 seconds to deallocate network for instance.
Nov 29 08:16:48 compute-2 nova_compute[232428]: 2025-11-29 08:16:48.983 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:48 compute-2 nova_compute[232428]: 2025-11-29 08:16:48.984 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:48 compute-2 nova_compute[232428]: 2025-11-29 08:16:48.991 232432 DEBUG nova.compute.manager [req-1eee970f-b1f2-4792-b5b6-5cbca8529048 req-2e67c7cb-3c56-4830-bc5c-05717f60bfed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-deleted-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.028 232432 DEBUG nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.029 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.030 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.030 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.031 232432 DEBUG nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.031 232432 WARNING nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-unplugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state deleted and task_state None.
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.032 232432 DEBUG nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.032 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.033 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.034 232432 DEBUG oslo_concurrency.lockutils [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.034 232432 DEBUG nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] No waiting events found dispatching network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.035 232432 WARNING nova.compute.manager [req-b26bc915-3eba-4314-bf33-19996392d088 req-de0485bc-0976-4be8-965d-86a46ba7fb58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Received unexpected event network-vif-plugged-ac65f355-2912-480c-acab-c38c1ec48dc9 for instance with vm_state deleted and task_state None.
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.067 232432 DEBUG oslo_concurrency.processutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:49 compute-2 ceph-mon[77138]: pgmap v2324: 305 pgs: 305 active+clean; 261 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Nov 29 08:16:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1680553038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.569 232432 DEBUG oslo_concurrency.processutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.576 232432 DEBUG nova.compute.provider_tree [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.596 232432 DEBUG nova.scheduler.client.report [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.624 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.650 232432 INFO nova.scheduler.client.report [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Deleted allocations for instance 1d2f015e-9584-47c8-a0c6-76e84d368cb6
Nov 29 08:16:49 compute-2 nova_compute[232428]: 2025-11-29 08:16:49.719 232432 DEBUG oslo_concurrency.lockutils [None req-0c28370a-0d88-4867-9ce3-8b5018914be0 661b6600a32b40d8a48db16cb71c7e75 d72b5448be0e463f80dca118feb42d3b - - default default] Lock "1d2f015e-9584-47c8-a0c6-76e84d368cb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:50 compute-2 nova_compute[232428]: 2025-11-29 08:16:50.071 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:50 compute-2 nova_compute[232428]: 2025-11-29 08:16:50.190 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:50 compute-2 nova_compute[232428]: 2025-11-29 08:16:50.206 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1680553038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:51 compute-2 nova_compute[232428]: 2025-11-29 08:16:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:51 compute-2 nova_compute[232428]: 2025-11-29 08:16:51.375 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:51.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:51 compute-2 ceph-mon[77138]: pgmap v2325: 305 pgs: 305 active+clean; 214 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Nov 29 08:16:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:52 compute-2 nova_compute[232428]: 2025-11-29 08:16:52.311 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:52 compute-2 sudo[286138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:52 compute-2 sudo[286138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:52 compute-2 sudo[286138]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:53 compute-2 sudo[286169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:53 compute-2 podman[286162]: 2025-11-29 08:16:53.058354548 +0000 UTC m=+0.087561542 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:16:53 compute-2 sudo[286169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:53 compute-2 sudo[286169]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:16:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:16:53 compute-2 ceph-mon[77138]: pgmap v2326: 305 pgs: 305 active+clean; 214 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Nov 29 08:16:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/230588409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:54 compute-2 nova_compute[232428]: 2025-11-29 08:16:54.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:54 compute-2 nova_compute[232428]: 2025-11-29 08:16:54.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:16:54 compute-2 nova_compute[232428]: 2025-11-29 08:16:54.217 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:16:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1579080136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:55 compute-2 nova_compute[232428]: 2025-11-29 08:16:55.159 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:55 compute-2 nova_compute[232428]: 2025-11-29 08:16:55.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:55 compute-2 nova_compute[232428]: 2025-11-29 08:16:55.203 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:55 compute-2 ceph-mon[77138]: pgmap v2327: 305 pgs: 305 active+clean; 214 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Nov 29 08:16:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:55.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:56 compute-2 nova_compute[232428]: 2025-11-29 08:16:56.378 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:56 compute-2 nova_compute[232428]: 2025-11-29 08:16:56.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:56 compute-2 nova_compute[232428]: 2025-11-29 08:16:56.701 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:57 compute-2 nova_compute[232428]: 2025-11-29 08:16:57.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:16:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:57.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:16:57 compute-2 ceph-mon[77138]: pgmap v2328: 305 pgs: 305 active+clean; 215 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 167 op/s
Nov 29 08:16:57 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 08:16:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:58 compute-2 podman[286212]: 2025-11-29 08:16:58.686134926 +0000 UTC m=+0.088743159 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.234 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:16:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:16:59 compute-2 ceph-mon[77138]: pgmap v2329: 305 pgs: 305 active+clean; 221 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 833 KiB/s wr, 169 op/s
Nov 29 08:16:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2087445897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.767 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.767 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.793 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.884 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.885 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.892 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:16:59 compute-2 nova_compute[232428]: 2025-11-29 08:16:59.892 232432 INFO nova.compute.claims [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:16:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:16:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:16:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.030 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/534156998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.510 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.520 232432 DEBUG nova.compute.provider_tree [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.540 232432 DEBUG nova.scheduler.client.report [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.565 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.566 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.613 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.614 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.639 232432 INFO nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.659 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:17:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2946922595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/534156998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.791 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.793 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.794 232432 INFO nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Creating image(s)
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.836 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.878 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.915 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.921 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-2 nova_compute[232428]: 2025-11-29 08:17:00.964 232432 DEBUG nova.policy [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f3d87a5e5ad344d1bcc70ba8075f3ca5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '363fe98c689041caaa9c21709efe6d5e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.001 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.002 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.002 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.003 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.031 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.035 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.233 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.234 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.366 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.459 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] resizing rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.596 232432 DEBUG nova.objects.instance [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lazy-loading 'migration_context' on Instance uuid 66d0dad5-8e47-49e0-903b-b8c0152c56b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.613 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.614 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Ensure instance console log exists: /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.615 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.616 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.616 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:01 compute-2 nova_compute[232428]: 2025-11-29 08:17:01.635 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Successfully created port: 605ca4c2-db24-40a9-99c7-d692af254898 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:17:01 compute-2 ceph-mon[77138]: pgmap v2330: 305 pgs: 305 active+clean; 245 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Nov 29 08:17:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:02 compute-2 nova_compute[232428]: 2025-11-29 08:17:02.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:02 compute-2 nova_compute[232428]: 2025-11-29 08:17:02.278 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404207.2767365, 1d2f015e-9584-47c8-a0c6-76e84d368cb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:02 compute-2 nova_compute[232428]: 2025-11-29 08:17:02.279 232432 INFO nova.compute.manager [-] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] VM Stopped (Lifecycle Event)
Nov 29 08:17:02 compute-2 nova_compute[232428]: 2025-11-29 08:17:02.296 232432 DEBUG nova.compute.manager [None req-063888c1-279a-45e8-8906-b2733f7fcd8d - - - - - -] [instance: 1d2f015e-9584-47c8-a0c6-76e84d368cb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:02 compute-2 nova_compute[232428]: 2025-11-29 08:17:02.317 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.010 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Successfully updated port: 605ca4c2-db24-40a9-99c7-d692af254898 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.031 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.031 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquired lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.031 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.140 232432 DEBUG nova.compute.manager [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-changed-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.141 232432 DEBUG nova.compute.manager [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Refreshing instance network info cache due to event network-changed-605ca4c2-db24-40a9-99c7-d692af254898. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.141 232432 DEBUG oslo_concurrency.lockutils [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.226 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.226 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.226 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.226 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.262 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:03.324 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:03.324 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:03.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/285975098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.677 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:03 compute-2 ceph-mon[77138]: pgmap v2331: 305 pgs: 305 active+clean; 245 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Nov 29 08:17:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/223840700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4038493462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/285975098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/859391372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.862 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.864 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4388MB free_disk=20.87643051147461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.864 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.864 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.951 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 66d0dad5-8e47-49e0-903b-b8c0152c56b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.951 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.952 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:17:03 compute-2 nova_compute[232428]: 2025-11-29 08:17:03.989 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.344 232432 DEBUG nova.network.neutron [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Updating instance_info_cache with network_info: [{"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.374 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Releasing lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.374 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Instance network_info: |[{"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.375 232432 DEBUG oslo_concurrency.lockutils [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.375 232432 DEBUG nova.network.neutron [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Refreshing network info cache for port 605ca4c2-db24-40a9-99c7-d692af254898 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.380 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Start _get_guest_xml network_info=[{"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.384 232432 WARNING nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.390 232432 DEBUG nova.virt.libvirt.host [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.390 232432 DEBUG nova.virt.libvirt.host [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.398 232432 DEBUG nova.virt.libvirt.host [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.398 232432 DEBUG nova.virt.libvirt.host [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.399 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.399 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.400 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.400 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.400 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.400 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.400 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.401 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.401 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.401 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.401 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.401 232432 DEBUG nova.virt.hardware [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.404 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/422042532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.438 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.447 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.477 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.506 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.506 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1067979501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.833 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.856 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:04 compute-2 nova_compute[232428]: 2025-11-29 08:17:04.859 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/319407602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/422042532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1067979501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/904085262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.283 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.285 232432 DEBUG nova.virt.libvirt.vif [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-667936073',display_name='tempest-ServerAddressesTestJSON-server-667936073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-667936073',id=123,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='363fe98c689041caaa9c21709efe6d5e',ramdisk_id='',reservation_id='r-34k5etor',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-289358423',owner_user_name='tempest-ServerAddressesTestJSON-289358423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:00Z,user_data=None,user_id='f3d87a5e5ad344d1bcc70ba8075f3ca5',uuid=66d0dad5-8e47-49e0-903b-b8c0152c56b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.286 232432 DEBUG nova.network.os_vif_util [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converting VIF {"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.287 232432 DEBUG nova.network.os_vif_util [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.290 232432 DEBUG nova.objects.instance [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lazy-loading 'pci_devices' on Instance uuid 66d0dad5-8e47-49e0-903b-b8c0152c56b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.309 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <uuid>66d0dad5-8e47-49e0-903b-b8c0152c56b8</uuid>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <name>instance-0000007b</name>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerAddressesTestJSON-server-667936073</nova:name>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:17:04</nova:creationTime>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:user uuid="f3d87a5e5ad344d1bcc70ba8075f3ca5">tempest-ServerAddressesTestJSON-289358423-project-member</nova:user>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:project uuid="363fe98c689041caaa9c21709efe6d5e">tempest-ServerAddressesTestJSON-289358423</nova:project>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <nova:port uuid="605ca4c2-db24-40a9-99c7-d692af254898">
Nov 29 08:17:05 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <system>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="serial">66d0dad5-8e47-49e0-903b-b8c0152c56b8</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="uuid">66d0dad5-8e47-49e0-903b-b8c0152c56b8</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </system>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <os>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </os>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <features>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </features>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk">
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </source>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config">
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </source>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:17:05 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:76:30:9f"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <target dev="tap605ca4c2-db"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/console.log" append="off"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <video>
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </video>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:17:05 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:17:05 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:17:05 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:17:05 compute-2 nova_compute[232428]: </domain>
Nov 29 08:17:05 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.310 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Preparing to wait for external event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.310 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.311 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.311 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.312 232432 DEBUG nova.virt.libvirt.vif [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-667936073',display_name='tempest-ServerAddressesTestJSON-server-667936073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-667936073',id=123,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='363fe98c689041caaa9c21709efe6d5e',ramdisk_id='',reservation_id='r-34k5etor',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-289358423',owner_user_name='tempest-ServerAddressesTestJSON-289358423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:00Z,user_data=None,user_id='f3d87a5e5ad344d1bcc70ba8075f3ca5',uuid=66d0dad5-8e47-49e0-903b-b8c0152c56b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.313 232432 DEBUG nova.network.os_vif_util [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converting VIF {"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.314 232432 DEBUG nova.network.os_vif_util [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.314 232432 DEBUG os_vif [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.316 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.317 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.322 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap605ca4c2-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.323 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap605ca4c2-db, col_values=(('external_ids', {'iface-id': '605ca4c2-db24-40a9-99c7-d692af254898', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:30:9f', 'vm-uuid': '66d0dad5-8e47-49e0-903b-b8c0152c56b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-2 NetworkManager[48993]: <info>  [1764404225.3266] manager: (tap605ca4c2-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.335 232432 INFO os_vif [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db')
Nov 29 08:17:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:05.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.399 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.400 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.400 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] No VIF found with MAC fa:16:3e:76:30:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.400 232432 INFO nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Using config drive
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.423 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.575 232432 DEBUG nova.network.neutron [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Updated VIF entry in instance network info cache for port 605ca4c2-db24-40a9-99c7-d692af254898. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.575 232432 DEBUG nova.network.neutron [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Updating instance_info_cache with network_info: [{"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.589 232432 DEBUG oslo_concurrency.lockutils [req-b6f1ed0a-e134-4ac9-875f-b8483a3c826a req-48b5881f-9ed5-4363-a400-450f9829c9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-66d0dad5-8e47-49e0-903b-b8c0152c56b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.732 232432 INFO nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Creating config drive at /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.742 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7ya0ika execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:05 compute-2 ceph-mon[77138]: pgmap v2332: 305 pgs: 305 active+clean; 270 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 201 KiB/s rd, 3.1 MiB/s wr, 61 op/s
Nov 29 08:17:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/904085262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.910 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7ya0ika" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.958 232432 DEBUG nova.storage.rbd_utils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] rbd image 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:05 compute-2 nova_compute[232428]: 2025-11-29 08:17:05.963 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.176 232432 DEBUG oslo_concurrency.processutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config 66d0dad5-8e47-49e0-903b-b8c0152c56b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.177 232432 INFO nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Deleting local config drive /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8/disk.config because it was imported into RBD.
Nov 29 08:17:06 compute-2 kernel: tap605ca4c2-db: entered promiscuous mode
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.2586] manager: (tap605ca4c2-db): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 08:17:06 compute-2 ovn_controller[134375]: 2025-11-29T08:17:06Z|00589|binding|INFO|Claiming lport 605ca4c2-db24-40a9-99c7-d692af254898 for this chassis.
Nov 29 08:17:06 compute-2 ovn_controller[134375]: 2025-11-29T08:17:06Z|00590|binding|INFO|605ca4c2-db24-40a9-99c7-d692af254898: Claiming fa:16:3e:76:30:9f 10.100.0.8
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.259 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.274 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:30:9f 10.100.0.8'], port_security=['fa:16:3e:76:30:9f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66d0dad5-8e47-49e0-903b-b8c0152c56b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '363fe98c689041caaa9c21709efe6d5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b295f00-902f-4eec-8cb0-20e25e335e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfb5d26a-af3c-4b2a-a318-a59bdca57e0f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=605ca4c2-db24-40a9-99c7-d692af254898) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.276 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 605ca4c2-db24-40a9-99c7-d692af254898 in datapath 82ff266c-4124-4ca8-bd0c-055a10b7a9e6 bound to our chassis
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.279 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82ff266c-4124-4ca8-bd0c-055a10b7a9e6
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.292 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b9e710-2bf9-48ee-a20d-135980024517]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.293 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82ff266c-41 in ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.295 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82ff266c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.295 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c96262-49f4-4a0c-a16f-a8100140c5af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.297 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae89409f-fd2d-42db-ba5e-e399008376db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 systemd-udevd[286603]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.309 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[21f99c2d-3771-4b33-9303-00fb8bae3309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.3199] device (tap605ca4c2-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.3214] device (tap605ca4c2-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:17:06 compute-2 systemd-machined[194747]: New machine qemu-58-instance-0000007b.
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.336 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f29e21-ecea-4113-869e-7bdfce069418]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 systemd[1]: Started Virtual Machine qemu-58-instance-0000007b.
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.383 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.386 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c08eb785-d9ac-4bb4-a3e4-8e0ab6f76541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_controller[134375]: 2025-11-29T08:17:06Z|00591|binding|INFO|Setting lport 605ca4c2-db24-40a9-99c7-d692af254898 ovn-installed in OVS
Nov 29 08:17:06 compute-2 ovn_controller[134375]: 2025-11-29T08:17:06Z|00592|binding|INFO|Setting lport 605ca4c2-db24-40a9-99c7-d692af254898 up in Southbound
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.387 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.394 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.3978] manager: (tap82ff266c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.397 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[56a09a91-bb91-49d1-a700-c01e657985f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.450 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[afe58552-1a6c-4c8d-a256-13d1950fbf3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.453 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[07b72c32-4a07-4f0c-9640-370ad208acbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.4904] device (tap82ff266c-40): carrier: link connected
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.499 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c304d80-9c0c-40e4-a248-ac0545fd9bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.521 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09e03309-d93d-4b7f-9bda-1d46dc9a05fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82ff266c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:11:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719275, 'reachable_time': 27369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286637, 'error': None, 'target': 'ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.535 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1771f1e0-1705-4782-86d6-982ec376a8b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:11c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719275, 'tstamp': 719275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286638, 'error': None, 'target': 'ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.564 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c126f92b-1d9e-44c4-9d40-732048db4690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82ff266c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:11:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719275, 'reachable_time': 27369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286639, 'error': None, 'target': 'ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.619 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fd885a86-e321-428f-8b63-a48e8230fa49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 sudo[286640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:06 compute-2 sudo[286640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:06 compute-2 sudo[286640]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:06 compute-2 sudo[286669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.717 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9b716b-70fe-40cd-a391-ccdfd420fad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.719 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82ff266c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.719 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.719 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82ff266c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.721 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 NetworkManager[48993]: <info>  [1764404226.7224] manager: (tap82ff266c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 08:17:06 compute-2 kernel: tap82ff266c-40: entered promiscuous mode
Nov 29 08:17:06 compute-2 sudo[286669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.727 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82ff266c-40, col_values=(('external_ids', {'iface-id': 'ad6021f6-b226-4bed-aa4e-db11b52e4137'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:06 compute-2 ovn_controller[134375]: 2025-11-29T08:17:06Z|00593|binding|INFO|Releasing lport ad6021f6-b226-4bed-aa4e-db11b52e4137 from this chassis (sb_readonly=0)
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.728 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 sudo[286669]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:06 compute-2 nova_compute[232428]: 2025-11-29 08:17:06.761 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.763 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82ff266c-4124-4ca8-bd0c-055a10b7a9e6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82ff266c-4124-4ca8-bd0c-055a10b7a9e6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.764 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bee846ca-e6d4-42fc-b615-04fbe8a5019a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.765 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-82ff266c-4124-4ca8-bd0c-055a10b7a9e6
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/82ff266c-4124-4ca8-bd0c-055a10b7a9e6.pid.haproxy
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 82ff266c-4124-4ca8-bd0c-055a10b7a9e6
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:17:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:06.767 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'env', 'PROCESS_TAG=haproxy-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82ff266c-4124-4ca8-bd0c-055a10b7a9e6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:17:06 compute-2 sudo[286696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:06 compute-2 sudo[286696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:06 compute-2 sudo[286696]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:06 compute-2 sudo[286724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:17:06 compute-2 sudo[286724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:07 compute-2 podman[286786]: 2025-11-29 08:17:07.268397691 +0000 UTC m=+0.090000749 container create b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:17:07 compute-2 podman[286786]: 2025-11-29 08:17:07.22745664 +0000 UTC m=+0.049059758 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:17:07 compute-2 systemd[1]: Started libpod-conmon-b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48.scope.
Nov 29 08:17:07 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:17:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b23624fe6570e49c56f113852a365ed5bc4c03ddf17c8cc031a589dec20fd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:07 compute-2 podman[286786]: 2025-11-29 08:17:07.388974635 +0000 UTC m=+0.210577673 container init b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:17:07 compute-2 podman[286786]: 2025-11-29 08:17:07.39744098 +0000 UTC m=+0.219044038 container start b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:17:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:07.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:07 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [NOTICE]   (286810) : New worker (286829) forked
Nov 29 08:17:07 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [NOTICE]   (286810) : Loading success.
Nov 29 08:17:07 compute-2 sudo[286724]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.916 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404227.9158547, 66d0dad5-8e47-49e0-903b-b8c0152c56b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.917 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] VM Started (Lifecycle Event)
Nov 29 08:17:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:07.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.956 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.962 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404227.9199586, 66d0dad5-8e47-49e0-903b-b8c0152c56b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.962 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] VM Paused (Lifecycle Event)
Nov 29 08:17:07 compute-2 nova_compute[232428]: 2025-11-29 08:17:07.995 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.000 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.022 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:08 compute-2 ceph-mon[77138]: pgmap v2333: 305 pgs: 305 active+clean; 377 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 6.9 MiB/s wr, 107 op/s
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:17:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.659 232432 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.660 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.661 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.661 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.661 232432 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Processing event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.662 232432 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.662 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.663 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.663 232432 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.663 232432 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] No waiting events found dispatching network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.664 232432 WARNING nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received unexpected event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 for instance with vm_state building and task_state spawning.
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.664 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.669 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404228.668992, 66d0dad5-8e47-49e0-903b-b8c0152c56b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.669 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] VM Resumed (Lifecycle Event)
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.671 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.675 232432 INFO nova.virt.libvirt.driver [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Instance spawned successfully.
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.675 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.701 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.706 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.706 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.707 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.707 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.708 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.708 232432 DEBUG nova.virt.libvirt.driver [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.714 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.752 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.792 232432 INFO nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Took 8.00 seconds to spawn the instance on the hypervisor.
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.792 232432 DEBUG nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.856 232432 INFO nova.compute.manager [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Took 9.01 seconds to build instance.
Nov 29 08:17:08 compute-2 nova_compute[232428]: 2025-11-29 08:17:08.871 232432 DEBUG oslo_concurrency.lockutils [None req-3204587a-30ab-44d6-8ae9-9cbdcf2edf1a f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:09.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:10 compute-2 ceph-mon[77138]: pgmap v2334: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 7.5 MiB/s wr, 143 op/s
Nov 29 08:17:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2812517582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1621604132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:10 compute-2 nova_compute[232428]: 2025-11-29 08:17:10.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:10 compute-2 podman[286877]: 2025-11-29 08:17:10.720648968 +0000 UTC m=+0.115176956 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.390 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:11.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.581 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.582 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.582 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.582 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.582 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.583 232432 INFO nova.compute.manager [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Terminating instance
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.584 232432 DEBUG nova.compute.manager [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:17:11 compute-2 kernel: tap605ca4c2-db (unregistering): left promiscuous mode
Nov 29 08:17:11 compute-2 NetworkManager[48993]: <info>  [1764404231.6240] device (tap605ca4c2-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.631 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 ovn_controller[134375]: 2025-11-29T08:17:11Z|00594|binding|INFO|Releasing lport 605ca4c2-db24-40a9-99c7-d692af254898 from this chassis (sb_readonly=0)
Nov 29 08:17:11 compute-2 ovn_controller[134375]: 2025-11-29T08:17:11Z|00595|binding|INFO|Setting lport 605ca4c2-db24-40a9-99c7-d692af254898 down in Southbound
Nov 29 08:17:11 compute-2 ovn_controller[134375]: 2025-11-29T08:17:11Z|00596|binding|INFO|Removing iface tap605ca4c2-db ovn-installed in OVS
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.639 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:30:9f 10.100.0.8'], port_security=['fa:16:3e:76:30:9f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66d0dad5-8e47-49e0-903b-b8c0152c56b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '363fe98c689041caaa9c21709efe6d5e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b295f00-902f-4eec-8cb0-20e25e335e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfb5d26a-af3c-4b2a-a318-a59bdca57e0f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=605ca4c2-db24-40a9-99c7-d692af254898) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.640 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 605ca4c2-db24-40a9-99c7-d692af254898 in datapath 82ff266c-4124-4ca8-bd0c-055a10b7a9e6 unbound from our chassis
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.642 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82ff266c-4124-4ca8-bd0c-055a10b7a9e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.643 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[10bae8b3-31ad-4d28-929e-ede7cbce463f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.644 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6 namespace which is not needed anymore
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.654 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 29 08:17:11 compute-2 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007b.scope: Consumed 4.301s CPU time.
Nov 29 08:17:11 compute-2 systemd-machined[194747]: Machine qemu-58-instance-0000007b terminated.
Nov 29 08:17:11 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [NOTICE]   (286810) : haproxy version is 2.8.14-c23fe91
Nov 29 08:17:11 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [NOTICE]   (286810) : path to executable is /usr/sbin/haproxy
Nov 29 08:17:11 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [WARNING]  (286810) : Exiting Master process...
Nov 29 08:17:11 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [ALERT]    (286810) : Current worker (286829) exited with code 143 (Terminated)
Nov 29 08:17:11 compute-2 neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6[286804]: [WARNING]  (286810) : All workers exited. Exiting... (0)
Nov 29 08:17:11 compute-2 systemd[1]: libpod-b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48.scope: Deactivated successfully.
Nov 29 08:17:11 compute-2 podman[286925]: 2025-11-29 08:17:11.81228639 +0000 UTC m=+0.051046879 container died b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.819 232432 INFO nova.virt.libvirt.driver [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Instance destroyed successfully.
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.820 232432 DEBUG nova.objects.instance [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lazy-loading 'resources' on Instance uuid 66d0dad5-8e47-49e0-903b-b8c0152c56b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:11 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48-userdata-shm.mount: Deactivated successfully.
Nov 29 08:17:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-b8b23624fe6570e49c56f113852a365ed5bc4c03ddf17c8cc031a589dec20fd7-merged.mount: Deactivated successfully.
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.845 232432 DEBUG nova.virt.libvirt.vif [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-667936073',display_name='tempest-ServerAddressesTestJSON-server-667936073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-667936073',id=123,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:08Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='363fe98c689041caaa9c21709efe6d5e',ramdisk_id='',reservation_id='r-34k5etor',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-289358423',owner_user_name='tempest-ServerAddressesTestJSON-289358423-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:08Z,user_data=None,user_id='f3d87a5e5ad344d1bcc70ba8075f3ca5',uuid=66d0dad5-8e47-49e0-903b-b8c0152c56b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.845 232432 DEBUG nova.network.os_vif_util [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converting VIF {"id": "605ca4c2-db24-40a9-99c7-d692af254898", "address": "fa:16:3e:76:30:9f", "network": {"id": "82ff266c-4124-4ca8-bd0c-055a10b7a9e6", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-480628772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "363fe98c689041caaa9c21709efe6d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap605ca4c2-db", "ovs_interfaceid": "605ca4c2-db24-40a9-99c7-d692af254898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.846 232432 DEBUG nova.network.os_vif_util [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.847 232432 DEBUG os_vif [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.850 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap605ca4c2-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 podman[286925]: 2025-11-29 08:17:11.853460199 +0000 UTC m=+0.092220678 container cleanup b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.856 232432 INFO os_vif [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:30:9f,bridge_name='br-int',has_traffic_filtering=True,id=605ca4c2-db24-40a9-99c7-d692af254898,network=Network(82ff266c-4124-4ca8-bd0c-055a10b7a9e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap605ca4c2-db')
Nov 29 08:17:11 compute-2 systemd[1]: libpod-conmon-b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48.scope: Deactivated successfully.
Nov 29 08:17:11 compute-2 podman[286972]: 2025-11-29 08:17:11.919464196 +0000 UTC m=+0.039240800 container remove b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.926 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[beb16d97-ecde-4c5d-880e-c5764bd55aff]: (4, ('Sat Nov 29 08:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6 (b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48)\nb3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48\nSat Nov 29 08:17:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6 (b3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48)\nb3dd5bf07753bbb3f102fae64a151c921fecfa36fae31662921451e33661bd48\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:11.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.928 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4dedb42c-9c2d-4bfd-9d17-b36a7bdf4a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.929 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82ff266c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 kernel: tap82ff266c-40: left promiscuous mode
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.936 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d41aae6a-9467-447b-8e73-3c29bf96ca5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 nova_compute[232428]: 2025-11-29 08:17:11.950 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.952 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0ff3be-76a1-4533-af41-6267f69f9f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.954 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2c463571-ff2a-4a38-8b17-e89caa21e575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.978 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d47474e4-faca-4a79-88a5-d813e9a1c349]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719264, 'reachable_time': 19655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286998, 'error': None, 'target': 'ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.982 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82ff266c-4124-4ca8-bd0c-055a10b7a9e6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:17:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:11.982 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[76289421-b8c1-41cd-8c77-8f1e1f32b59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:11 compute-2 systemd[1]: run-netns-ovnmeta\x2d82ff266c\x2d4124\x2d4ca8\x2dbd0c\x2d055a10b7a9e6.mount: Deactivated successfully.
Nov 29 08:17:12 compute-2 ceph-mon[77138]: pgmap v2335: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.7 MiB/s wr, 221 op/s
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.294 232432 INFO nova.virt.libvirt.driver [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Deleting instance files /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8_del
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.294 232432 INFO nova.virt.libvirt.driver [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Deletion of /var/lib/nova/instances/66d0dad5-8e47-49e0-903b-b8c0152c56b8_del complete
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.361 232432 INFO nova.compute.manager [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.362 232432 DEBUG oslo.service.loopingcall [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.362 232432 DEBUG nova.compute.manager [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:17:12 compute-2 nova_compute[232428]: 2025-11-29 08:17:12.363 232432 DEBUG nova.network.neutron [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.113 232432 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-unplugged-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.114 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.115 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.115 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.115 232432 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] No waiting events found dispatching network-vif-unplugged-605ca4c2-db24-40a9-99c7-d692af254898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.116 232432 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-unplugged-605ca4c2-db24-40a9-99c7-d692af254898 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.116 232432 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.117 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.117 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.118 232432 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.118 232432 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] No waiting events found dispatching network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:13 compute-2 nova_compute[232428]: 2025-11-29 08:17:13.119 232432 WARNING nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received unexpected event network-vif-plugged-605ca4c2-db24-40a9-99c7-d692af254898 for instance with vm_state active and task_state deleting.
Nov 29 08:17:13 compute-2 sudo[287000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:13 compute-2 sudo[287000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:13 compute-2 sudo[287000]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:13 compute-2 sudo[287025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:13 compute-2 sudo[287025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:13 compute-2 sudo[287025]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:13.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:13.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:14 compute-2 ceph-mon[77138]: pgmap v2336: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Nov 29 08:17:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:17:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.175 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.175 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.178 232432 DEBUG nova.network.neutron [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.198 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.201 232432 INFO nova.compute.manager [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Took 1.84 seconds to deallocate network for instance.
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.273 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.274 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.301 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.479 232432 DEBUG oslo_concurrency.processutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:14 compute-2 sudo[287051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:14 compute-2 sudo[287051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:14 compute-2 sudo[287051]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:14 compute-2 sudo[287077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:17:14 compute-2 sudo[287077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:14 compute-2 sudo[287077]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3481090630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.960 232432 DEBUG oslo_concurrency.processutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.972 232432 DEBUG nova.compute.provider_tree [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:14 compute-2 nova_compute[232428]: 2025-11-29 08:17:14.992 232432 DEBUG nova.scheduler.client.report [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.015 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:15.020 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.021 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:15.022 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.023 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:15.024 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.033 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.034 232432 INFO nova.compute.claims [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.064 232432 INFO nova.scheduler.client.report [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Deleted allocations for instance 66d0dad5-8e47-49e0-903b-b8c0152c56b8
Nov 29 08:17:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3481090630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.158 232432 DEBUG oslo_concurrency.lockutils [None req-7f840d53-6a1b-4451-84e7-ff804d8fa635 f3d87a5e5ad344d1bcc70ba8075f3ca5 363fe98c689041caaa9c21709efe6d5e - - default default] Lock "66d0dad5-8e47-49e0-903b-b8c0152c56b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.185 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.242 232432 DEBUG nova.compute.manager [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Received event network-vif-deleted-605ca4c2-db24-40a9-99c7-d692af254898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3581217561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.695 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.703 232432 DEBUG nova.compute.provider_tree [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.722 232432 DEBUG nova.scheduler.client.report [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.752 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.753 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.813 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.813 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.839 232432 INFO nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.864 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:17:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:15.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.962 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.965 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:17:15 compute-2 nova_compute[232428]: 2025-11-29 08:17:15.966 232432 INFO nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Creating image(s)
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.016 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.062 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.104 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.110 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:16 compute-2 ceph-mon[77138]: pgmap v2337: 305 pgs: 305 active+clean; 365 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 234 op/s
Nov 29 08:17:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3581217561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.161 232432 DEBUG nova.policy [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bceaa894988f47b6870b2a3685a7de6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0ddcf5b6bfeb4db184951e07d296aa62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.212 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.213 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.214 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.215 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.265 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.271 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.393 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.618 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.719 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] resizing rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.855 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.864 232432 DEBUG nova.objects.instance [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lazy-loading 'migration_context' on Instance uuid 58ff3b6f-e4e2-4d9e-acff-792dd885b799 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.879 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.880 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Ensure instance console log exists: /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.881 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.882 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:16 compute-2 nova_compute[232428]: 2025-11-29 08:17:16.882 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:17 compute-2 nova_compute[232428]: 2025-11-29 08:17:17.094 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Successfully created port: 96948a0d-f2e5-4065-932a-3868023a2cfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:17:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2384987786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3727278614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:17.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:17.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:18 compute-2 ceph-mon[77138]: pgmap v2338: 305 pgs: 305 active+clean; 258 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.4 MiB/s wr, 381 op/s
Nov 29 08:17:18 compute-2 nova_compute[232428]: 2025-11-29 08:17:18.351 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Successfully updated port: 96948a0d-f2e5-4065-932a-3868023a2cfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:17:18 compute-2 nova_compute[232428]: 2025-11-29 08:17:18.373 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:18 compute-2 nova_compute[232428]: 2025-11-29 08:17:18.373 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquired lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:18 compute-2 nova_compute[232428]: 2025-11-29 08:17:18.374 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:17:18 compute-2 nova_compute[232428]: 2025-11-29 08:17:18.662 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:17:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:17:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:17:19 compute-2 nova_compute[232428]: 2025-11-29 08:17:19.469 232432 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Received event network-changed-96948a0d-f2e5-4065-932a-3868023a2cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:19 compute-2 nova_compute[232428]: 2025-11-29 08:17:19.469 232432 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Refreshing instance network info cache due to event network-changed-96948a0d-f2e5-4065-932a-3868023a2cfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:17:19 compute-2 nova_compute[232428]: 2025-11-29 08:17:19.470 232432 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:19.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:20 compute-2 ceph-mon[77138]: pgmap v2339: 305 pgs: 305 active+clean; 234 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 399 op/s
Nov 29 08:17:20 compute-2 nova_compute[232428]: 2025-11-29 08:17:20.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.396 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:21.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.845 232432 DEBUG nova.network.neutron [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Updating instance_info_cache with network_info: [{"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.903 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Releasing lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.903 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Instance network_info: |[{"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.904 232432 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.905 232432 DEBUG nova.network.neutron [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Refreshing network info cache for port 96948a0d-f2e5-4065-932a-3868023a2cfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.909 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Start _get_guest_xml network_info=[{"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.916 232432 WARNING nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.922 232432 DEBUG nova.virt.libvirt.host [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.923 232432 DEBUG nova.virt.libvirt.host [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.931 232432 DEBUG nova.virt.libvirt.host [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.932 232432 DEBUG nova.virt.libvirt.host [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.935 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.935 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.936 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.936 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.937 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.937 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.937 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.938 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.938 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.939 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.939 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.940 232432 DEBUG nova.virt.hardware [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:17:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:21 compute-2 nova_compute[232428]: 2025-11-29 08:17:21.947 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:22 compute-2 ceph-mon[77138]: pgmap v2340: 305 pgs: 305 active+clean; 130 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 1.8 MiB/s wr, 466 op/s
Nov 29 08:17:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3916442981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.459 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.494 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.499 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2079218107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.943 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.947 232432 DEBUG nova.virt.libvirt.vif [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1302443237',display_name='tempest-ServerMetadataNegativeTestJSON-server-1302443237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1302443237',id=125,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ddcf5b6bfeb4db184951e07d296aa62',ramdisk_id='',reservation_id='r-ny0961h7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-514892293',owner_user_
name='tempest-ServerMetadataNegativeTestJSON-514892293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:15Z,user_data=None,user_id='bceaa894988f47b6870b2a3685a7de6d',uuid=58ff3b6f-e4e2-4d9e-acff-792dd885b799,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.948 232432 DEBUG nova.network.os_vif_util [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converting VIF {"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.950 232432 DEBUG nova.network.os_vif_util [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.953 232432 DEBUG nova.objects.instance [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58ff3b6f-e4e2-4d9e-acff-792dd885b799 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.972 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <uuid>58ff3b6f-e4e2-4d9e-acff-792dd885b799</uuid>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <name>instance-0000007d</name>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1302443237</nova:name>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:17:21</nova:creationTime>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:user uuid="bceaa894988f47b6870b2a3685a7de6d">tempest-ServerMetadataNegativeTestJSON-514892293-project-member</nova:user>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:project uuid="0ddcf5b6bfeb4db184951e07d296aa62">tempest-ServerMetadataNegativeTestJSON-514892293</nova:project>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <nova:port uuid="96948a0d-f2e5-4065-932a-3868023a2cfd">
Nov 29 08:17:22 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <system>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="serial">58ff3b6f-e4e2-4d9e-acff-792dd885b799</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="uuid">58ff3b6f-e4e2-4d9e-acff-792dd885b799</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </system>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <os>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </os>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <features>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </features>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk">
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </source>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config">
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </source>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:17:22 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:ec:85:42"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <target dev="tap96948a0d-f2"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/console.log" append="off"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <video>
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </video>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:17:22 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:17:22 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:17:22 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:17:22 compute-2 nova_compute[232428]: </domain>
Nov 29 08:17:22 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.973 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Preparing to wait for external event network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.973 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.974 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.974 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.975 232432 DEBUG nova.virt.libvirt.vif [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1302443237',display_name='tempest-ServerMetadataNegativeTestJSON-server-1302443237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1302443237',id=125,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ddcf5b6bfeb4db184951e07d296aa62',ramdisk_id='',reservation_id='r-ny0961h7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-514892293',owner_user_name='tempest-ServerMetadataNegativeTestJSON-514892293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:15Z,user_data=None,user_id='bceaa894988f47b6870b2a3685a7de6d',uuid=58ff3b6f-e4e2-4d9e-acff-792dd885b799,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.975 232432 DEBUG nova.network.os_vif_util [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converting VIF {"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.975 232432 DEBUG nova.network.os_vif_util [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.976 232432 DEBUG os_vif [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.977 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.977 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.980 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96948a0d-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.981 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96948a0d-f2, col_values=(('external_ids', {'iface-id': '96948a0d-f2e5-4065-932a-3868023a2cfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:85:42', 'vm-uuid': '58ff3b6f-e4e2-4d9e-acff-792dd885b799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.982 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:22 compute-2 NetworkManager[48993]: <info>  [1764404242.9834] manager: (tap96948a0d-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.984 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.989 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:22 compute-2 nova_compute[232428]: 2025-11-29 08:17:22.989 232432 INFO os_vif [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2')
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.060 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.060 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.060 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] No VIF found with MAC fa:16:3e:ec:85:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.061 232432 INFO nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Using config drive
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.082 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3916442981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1345798867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2079218107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:17:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:23.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.619 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.643 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 58ff3b6f-e4e2-4d9e-acff-792dd885b799 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:17:23 compute-2 nova_compute[232428]: 2025-11-29 08:17:23.643 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:23 compute-2 podman[287398]: 2025-11-29 08:17:23.685241984 +0000 UTC m=+0.080486340 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:17:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:23.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:24 compute-2 ceph-mon[77138]: pgmap v2341: 305 pgs: 305 active+clean; 130 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 384 op/s
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.295 232432 INFO nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Creating config drive at /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.306 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v0g6x6q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.464 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v0g6x6q" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.493 232432 DEBUG nova.storage.rbd_utils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] rbd image 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.497 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.717 232432 DEBUG oslo_concurrency.processutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config 58ff3b6f-e4e2-4d9e-acff-792dd885b799_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.720 232432 INFO nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Deleting local config drive /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799/disk.config because it was imported into RBD.
Nov 29 08:17:24 compute-2 kernel: tap96948a0d-f2: entered promiscuous mode
Nov 29 08:17:24 compute-2 NetworkManager[48993]: <info>  [1764404244.8019] manager: (tap96948a0d-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 08:17:24 compute-2 ovn_controller[134375]: 2025-11-29T08:17:24Z|00597|binding|INFO|Claiming lport 96948a0d-f2e5-4065-932a-3868023a2cfd for this chassis.
Nov 29 08:17:24 compute-2 ovn_controller[134375]: 2025-11-29T08:17:24Z|00598|binding|INFO|96948a0d-f2e5-4065-932a-3868023a2cfd: Claiming fa:16:3e:ec:85:42 10.100.0.4
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.803 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.810 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.820 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:85:42 10.100.0.4'], port_security=['fa:16:3e:ec:85:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58ff3b6f-e4e2-4d9e-acff-792dd885b799', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ddcf5b6bfeb4db184951e07d296aa62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eeea3104-c23b-4667-8b9c-4627e1c80f58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=779ffd2d-5c42-48cd-a5af-2a313808ca5a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=96948a0d-f2e5-4065-932a-3868023a2cfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.821 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 96948a0d-f2e5-4065-932a-3868023a2cfd in datapath a7be1286-184f-4d3f-9448-7dddb0e60a38 bound to our chassis
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.822 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7be1286-184f-4d3f-9448-7dddb0e60a38
Nov 29 08:17:24 compute-2 systemd-machined[194747]: New machine qemu-59-instance-0000007d.
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.838 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[779c512d-6894-4431-b9a4-376cf22eb403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.839 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7be1286-11 in ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.842 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7be1286-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.843 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4538210c-35d3-4fab-a6c6-f2f62e9bdd32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.843 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7beeca73-245d-4721-9e37-54745a67fb16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 systemd[1]: Started Virtual Machine qemu-59-instance-0000007d.
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.862 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[2531f4f5-52bf-4b76-931f-e11207ca011c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 systemd-udevd[287473]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:17:24 compute-2 NetworkManager[48993]: <info>  [1764404244.8819] device (tap96948a0d-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:17:24 compute-2 NetworkManager[48993]: <info>  [1764404244.8826] device (tap96948a0d-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.888 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[575e24b0-bcae-4054-b892-0c079e56dbe9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 ovn_controller[134375]: 2025-11-29T08:17:24Z|00599|binding|INFO|Setting lport 96948a0d-f2e5-4065-932a-3868023a2cfd ovn-installed in OVS
Nov 29 08:17:24 compute-2 ovn_controller[134375]: 2025-11-29T08:17:24Z|00600|binding|INFO|Setting lport 96948a0d-f2e5-4065-932a-3868023a2cfd up in Southbound
Nov 29 08:17:24 compute-2 nova_compute[232428]: 2025-11-29 08:17:24.894 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.930 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbdc020-4f4d-4dd3-9616-60d4ea1a35bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.935 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1646a1-6a53-4e48-abc2-baa93b5259d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 systemd-udevd[287478]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:17:24 compute-2 NetworkManager[48993]: <info>  [1764404244.9370] manager: (tapa7be1286-10): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.969 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8d24b581-ed20-4113-9f3d-b72ab41ec187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:24.973 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e04f7fa1-e055-46a5-9e09-8e5027f636b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 NetworkManager[48993]: <info>  [1764404245.0086] device (tapa7be1286-10): carrier: link connected
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.009 232432 DEBUG nova.network.neutron [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Updated VIF entry in instance network info cache for port 96948a0d-f2e5-4065-932a-3868023a2cfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.011 232432 DEBUG nova.network.neutron [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Updating instance_info_cache with network_info: [{"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.021 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b28f9683-2220-4a9f-8dd4-9f57d61686e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.025 232432 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-58ff3b6f-e4e2-4d9e-acff-792dd885b799" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.046 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d24befb9-cb7c-45b7-86a7-c88005dd9bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7be1286-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:3d:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721127, 'reachable_time': 31048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287504, 'error': None, 'target': 'ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.063 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b18a9b1c-d735-48ac-97b9-ff3a8c50ee91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:3db0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721127, 'tstamp': 721127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287505, 'error': None, 'target': 'ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.083 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[edbd68ff-6ccb-4481-8a63-439089c69858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7be1286-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:3d:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721127, 'reachable_time': 31048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287506, 'error': None, 'target': 'ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.123 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3004c111-2d2a-4ed9-9dda-4a8365b22015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.165 232432 DEBUG nova.compute.manager [req-836db0e8-79f3-4702-a7df-fa0a8202183b req-e9cb42d3-f0a9-47e5-b1ab-cb7c923a5bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Received event network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.166 232432 DEBUG oslo_concurrency.lockutils [req-836db0e8-79f3-4702-a7df-fa0a8202183b req-e9cb42d3-f0a9-47e5-b1ab-cb7c923a5bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.166 232432 DEBUG oslo_concurrency.lockutils [req-836db0e8-79f3-4702-a7df-fa0a8202183b req-e9cb42d3-f0a9-47e5-b1ab-cb7c923a5bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.166 232432 DEBUG oslo_concurrency.lockutils [req-836db0e8-79f3-4702-a7df-fa0a8202183b req-e9cb42d3-f0a9-47e5-b1ab-cb7c923a5bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.167 232432 DEBUG nova.compute.manager [req-836db0e8-79f3-4702-a7df-fa0a8202183b req-e9cb42d3-f0a9-47e5-b1ab-cb7c923a5bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Processing event network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.193 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cfef28f7-71ff-4538-97d0-fd4f40f2fac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.194 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7be1286-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.195 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.195 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7be1286-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.197 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:25 compute-2 NetworkManager[48993]: <info>  [1764404245.1988] manager: (tapa7be1286-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 08:17:25 compute-2 kernel: tapa7be1286-10: entered promiscuous mode
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.210 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7be1286-10, col_values=(('external_ids', {'iface-id': '456b4cb2-898b-441d-8456-94d9f24ce7da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.211 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:25 compute-2 ovn_controller[134375]: 2025-11-29T08:17:25Z|00601|binding|INFO|Releasing lport 456b4cb2-898b-441d-8456-94d9f24ce7da from this chassis (sb_readonly=0)
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.217 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7be1286-184f-4d3f-9448-7dddb0e60a38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7be1286-184f-4d3f-9448-7dddb0e60a38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.218 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a813a880-e4c5-4452-b282-547c49084fc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.219 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-a7be1286-184f-4d3f-9448-7dddb0e60a38
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/a7be1286-184f-4d3f-9448-7dddb0e60a38.pid.haproxy
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID a7be1286-184f-4d3f-9448-7dddb0e60a38
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:17:25 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:25.221 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'env', 'PROCESS_TAG=haproxy-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7be1286-184f-4d3f-9448-7dddb0e60a38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:25 compute-2 ceph-mon[77138]: pgmap v2342: 305 pgs: 305 active+clean; 88 MiB data, 905 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 385 op/s
Nov 29 08:17:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:25.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.475 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404245.4747763, 58ff3b6f-e4e2-4d9e-acff-792dd885b799 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.476 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] VM Started (Lifecycle Event)
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.478 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.482 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.485 232432 INFO nova.virt.libvirt.driver [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Instance spawned successfully.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.485 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:17:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.514 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.520 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.521 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.522 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.522 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.523 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.523 232432 DEBUG nova.virt.libvirt.driver [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.527 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.562 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.563 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404245.4759276, 58ff3b6f-e4e2-4d9e-acff-792dd885b799 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.563 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] VM Paused (Lifecycle Event)
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.589 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.593 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404245.4808793, 58ff3b6f-e4e2-4d9e-acff-792dd885b799 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.593 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] VM Resumed (Lifecycle Event)
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.597 232432 INFO nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Took 9.63 seconds to spawn the instance on the hypervisor.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.597 232432 DEBUG nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.627 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.630 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:25 compute-2 podman[287580]: 2025-11-29 08:17:25.636614479 +0000 UTC m=+0.061792395 container create 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.676 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.683 232432 INFO nova.compute.manager [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Took 11.42 seconds to build instance.
Nov 29 08:17:25 compute-2 systemd[1]: Started libpod-conmon-37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148.scope.
Nov 29 08:17:25 compute-2 podman[287580]: 2025-11-29 08:17:25.60627373 +0000 UTC m=+0.031451676 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:17:25 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.731 232432 DEBUG oslo_concurrency.lockutils [None req-a08ee5f0-e475-4faf-aef4-88f38ef31d98 bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.732 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e761d2690902d00e85693af5ac82c7f7693bf65c8df09dc7f9669d6731cf3ed0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.733 232432 INFO nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:25 compute-2 nova_compute[232428]: 2025-11-29 08:17:25.733 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:25 compute-2 podman[287580]: 2025-11-29 08:17:25.745346003 +0000 UTC m=+0.170523919 container init 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:17:25 compute-2 podman[287580]: 2025-11-29 08:17:25.751246768 +0000 UTC m=+0.176424674 container start 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:17:25 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [NOTICE]   (287600) : New worker (287602) forked
Nov 29 08:17:25 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [NOTICE]   (287600) : Loading success.
Nov 29 08:17:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:25.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:26 compute-2 nova_compute[232428]: 2025-11-29 08:17:26.399 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:26 compute-2 nova_compute[232428]: 2025-11-29 08:17:26.819 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404231.8179646, 66d0dad5-8e47-49e0-903b-b8c0152c56b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:26 compute-2 nova_compute[232428]: 2025-11-29 08:17:26.820 232432 INFO nova.compute.manager [-] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] VM Stopped (Lifecycle Event)
Nov 29 08:17:26 compute-2 nova_compute[232428]: 2025-11-29 08:17:26.841 232432 DEBUG nova.compute.manager [None req-96738c9b-a83d-4098-bd06-cd169d42b3bc - - - - - -] [instance: 66d0dad5-8e47-49e0-903b-b8c0152c56b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:27.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.500 232432 DEBUG nova.compute.manager [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Received event network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.501 232432 DEBUG oslo_concurrency.lockutils [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.503 232432 DEBUG oslo_concurrency.lockutils [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.503 232432 DEBUG oslo_concurrency.lockutils [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.504 232432 DEBUG nova.compute.manager [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] No waiting events found dispatching network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.505 232432 WARNING nova.compute.manager [req-5faaeaf6-fccd-45ab-8463-448b0fc507b1 req-231e6c3b-3de0-4491-8d0a-8a94ad61a895 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Received unexpected event network-vif-plugged-96948a0d-f2e5-4065-932a-3868023a2cfd for instance with vm_state active and task_state None.
Nov 29 08:17:27 compute-2 ceph-mon[77138]: pgmap v2343: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Nov 29 08:17:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:27 compute-2 nova_compute[232428]: 2025-11-29 08:17:27.982 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3702248800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3702248800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:29 compute-2 ceph-mon[77138]: pgmap v2344: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 29 08:17:29 compute-2 podman[287612]: 2025-11-29 08:17:29.733863588 +0000 UTC m=+0.114249549 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:17:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:31 compute-2 nova_compute[232428]: 2025-11-29 08:17:31.402 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:31 compute-2 ceph-mon[77138]: pgmap v2345: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1006 KiB/s wr, 179 op/s
Nov 29 08:17:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.725 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.727 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.728 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.729 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.729 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.732 232432 INFO nova.compute.manager [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Terminating instance
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.734 232432 DEBUG nova.compute.manager [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:17:32 compute-2 kernel: tap96948a0d-f2 (unregistering): left promiscuous mode
Nov 29 08:17:32 compute-2 NetworkManager[48993]: <info>  [1764404252.7946] device (tap96948a0d-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.809 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 ovn_controller[134375]: 2025-11-29T08:17:32Z|00602|binding|INFO|Releasing lport 96948a0d-f2e5-4065-932a-3868023a2cfd from this chassis (sb_readonly=0)
Nov 29 08:17:32 compute-2 ovn_controller[134375]: 2025-11-29T08:17:32Z|00603|binding|INFO|Setting lport 96948a0d-f2e5-4065-932a-3868023a2cfd down in Southbound
Nov 29 08:17:32 compute-2 ovn_controller[134375]: 2025-11-29T08:17:32Z|00604|binding|INFO|Removing iface tap96948a0d-f2 ovn-installed in OVS
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.813 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:32.820 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:85:42 10.100.0.4'], port_security=['fa:16:3e:ec:85:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58ff3b6f-e4e2-4d9e-acff-792dd885b799', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ddcf5b6bfeb4db184951e07d296aa62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eeea3104-c23b-4667-8b9c-4627e1c80f58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=779ffd2d-5c42-48cd-a5af-2a313808ca5a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=96948a0d-f2e5-4065-932a-3868023a2cfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:32.821 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 96948a0d-f2e5-4065-932a-3868023a2cfd in datapath a7be1286-184f-4d3f-9448-7dddb0e60a38 unbound from our chassis
Nov 29 08:17:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:32.822 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7be1286-184f-4d3f-9448-7dddb0e60a38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:17:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:32.824 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bc71e111-b438-414b-a554-44c23570d97d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:32.824 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38 namespace which is not needed anymore
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 29 08:17:32 compute-2 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007d.scope: Consumed 8.153s CPU time.
Nov 29 08:17:32 compute-2 systemd-machined[194747]: Machine qemu-59-instance-0000007d terminated.
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.986 232432 INFO nova.virt.libvirt.driver [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Instance destroyed successfully.
Nov 29 08:17:32 compute-2 nova_compute[232428]: 2025-11-29 08:17:32.989 232432 DEBUG nova.objects.instance [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lazy-loading 'resources' on Instance uuid 58ff3b6f-e4e2-4d9e-acff-792dd885b799 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.004 232432 DEBUG nova.virt.libvirt.vif [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1302443237',display_name='tempest-ServerMetadataNegativeTestJSON-server-1302443237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1302443237',id=125,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ddcf5b6bfeb4db184951e07d296aa62',ramdisk_id='',reservation_id='r-ny0961h7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-514892293',owner_user_name='tempest-ServerMetadataNegativeTestJSON-514892293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:25Z,user_data=None,user_id='bceaa894988f47b6870b2a3685a7de6d',uuid=58ff3b6f-e4e2-4d9e-acff-792dd885b799,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.005 232432 DEBUG nova.network.os_vif_util [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converting VIF {"id": "96948a0d-f2e5-4065-932a-3868023a2cfd", "address": "fa:16:3e:ec:85:42", "network": {"id": "a7be1286-184f-4d3f-9448-7dddb0e60a38", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1035465582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ddcf5b6bfeb4db184951e07d296aa62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96948a0d-f2", "ovs_interfaceid": "96948a0d-f2e5-4065-932a-3868023a2cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.007 232432 DEBUG nova.network.os_vif_util [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.007 232432 DEBUG os_vif [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.010 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96948a0d-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:33 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [NOTICE]   (287600) : haproxy version is 2.8.14-c23fe91
Nov 29 08:17:33 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [NOTICE]   (287600) : path to executable is /usr/sbin/haproxy
Nov 29 08:17:33 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [WARNING]  (287600) : Exiting Master process...
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:17:33 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [ALERT]    (287600) : Current worker (287602) exited with code 143 (Terminated)
Nov 29 08:17:33 compute-2 neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38[287596]: [WARNING]  (287600) : All workers exited. Exiting... (0)
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.018 232432 INFO os_vif [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:85:42,bridge_name='br-int',has_traffic_filtering=True,id=96948a0d-f2e5-4065-932a-3868023a2cfd,network=Network(a7be1286-184f-4d3f-9448-7dddb0e60a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96948a0d-f2')
Nov 29 08:17:33 compute-2 systemd[1]: libpod-37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148.scope: Deactivated successfully.
Nov 29 08:17:33 compute-2 podman[287659]: 2025-11-29 08:17:33.02626706 +0000 UTC m=+0.076812995 container died 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:17:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148-userdata-shm.mount: Deactivated successfully.
Nov 29 08:17:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-e761d2690902d00e85693af5ac82c7f7693bf65c8df09dc7f9669d6731cf3ed0-merged.mount: Deactivated successfully.
Nov 29 08:17:33 compute-2 podman[287659]: 2025-11-29 08:17:33.072773136 +0000 UTC m=+0.123319081 container cleanup 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:17:33 compute-2 systemd[1]: libpod-conmon-37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148.scope: Deactivated successfully.
Nov 29 08:17:33 compute-2 podman[287713]: 2025-11-29 08:17:33.153786882 +0000 UTC m=+0.052064560 container remove 37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.163 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[14ccbfdd-c229-46f5-9f5c-e00797397167]: (4, ('Sat Nov 29 08:17:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38 (37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148)\n37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148\nSat Nov 29 08:17:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38 (37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148)\n37bbcaf817485d540c58abef38d581329249dbbda9bbe126c0da124acdd0c148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.166 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c633a671-7972-4f37-8589-cab97b5eeaf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.167 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7be1286-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.170 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:33 compute-2 kernel: tapa7be1286-10: left promiscuous mode
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.184 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.189 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[99df79a5-e481-4d1b-9df5-905da1c44615]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.208 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4361e05a-e382-457b-8f15-e59e21039b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.210 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[907436b4-2bcd-49bd-9560-85ae931bd2e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.228 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cbacf0a3-505b-4f5d-ab46-64692d024140]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721119, 'reachable_time': 38148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287728, 'error': None, 'target': 'ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.231 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7be1286-184f-4d3f-9448-7dddb0e60a38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:17:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:17:33.231 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0c47ed-e722-45f9-86fc-745dc52e341d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:33 compute-2 systemd[1]: run-netns-ovnmeta\x2da7be1286\x2d184f\x2d4d3f\x2d9448\x2d7dddb0e60a38.mount: Deactivated successfully.
Nov 29 08:17:33 compute-2 sudo[287730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:33 compute-2 sudo[287730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:33 compute-2 sudo[287730]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:33.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:33 compute-2 sudo[287755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:33 compute-2 sudo[287755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:33 compute-2 sudo[287755]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.461 232432 INFO nova.virt.libvirt.driver [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Deleting instance files /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799_del
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.461 232432 INFO nova.virt.libvirt.driver [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Deletion of /var/lib/nova/instances/58ff3b6f-e4e2-4d9e-acff-792dd885b799_del complete
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.565 232432 INFO nova.compute.manager [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.565 232432 DEBUG oslo.service.loopingcall [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.566 232432 DEBUG nova.compute.manager [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:17:33 compute-2 nova_compute[232428]: 2025-11-29 08:17:33.566 232432 DEBUG nova.network.neutron [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:17:33 compute-2 ceph-mon[77138]: pgmap v2346: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 08:17:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:33.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.696 232432 DEBUG nova.network.neutron [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.712 232432 INFO nova.compute.manager [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Took 1.15 seconds to deallocate network for instance.
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.767 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.768 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.801 232432 DEBUG nova.compute.manager [req-872b5d8b-19fd-450d-94a7-f7e5e19c8fbb req-8d7f5e8e-d0a4-49d5-9b8d-a8a87ab872ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Received event network-vif-deleted-96948a0d-f2e5-4065-932a-3868023a2cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:34 compute-2 nova_compute[232428]: 2025-11-29 08:17:34.869 232432 DEBUG oslo_concurrency.processutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3353044761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.396 232432 DEBUG oslo_concurrency.processutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.403 232432 DEBUG nova.compute.provider_tree [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:35.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.445 232432 DEBUG nova.scheduler.client.report [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.480 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.521 232432 INFO nova.scheduler.client.report [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Deleted allocations for instance 58ff3b6f-e4e2-4d9e-acff-792dd885b799
Nov 29 08:17:35 compute-2 nova_compute[232428]: 2025-11-29 08:17:35.586 232432 DEBUG oslo_concurrency.lockutils [None req-e3fa1771-5cf1-419c-9f77-aeabe6b7ebfc bceaa894988f47b6870b2a3685a7de6d 0ddcf5b6bfeb4db184951e07d296aa62 - - default default] Lock "58ff3b6f-e4e2-4d9e-acff-792dd885b799" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:35.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:36 compute-2 ceph-mon[77138]: pgmap v2347: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 08:17:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3353044761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:36 compute-2 nova_compute[232428]: 2025-11-29 08:17:36.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:37.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:38 compute-2 nova_compute[232428]: 2025-11-29 08:17:38.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:38 compute-2 ceph-mon[77138]: pgmap v2348: 305 pgs: 305 active+clean; 60 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 29 08:17:39 compute-2 nova_compute[232428]: 2025-11-29 08:17:39.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:39 compute-2 ceph-mon[77138]: pgmap v2349: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 29 08:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2243143571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2132206677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:41 compute-2 ceph-mon[77138]: pgmap v2350: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Nov 29 08:17:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3468862175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:41 compute-2 nova_compute[232428]: 2025-11-29 08:17:41.407 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:41.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:41 compute-2 podman[287806]: 2025-11-29 08:17:41.782382466 +0000 UTC m=+0.167748592 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:17:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:41.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2859895107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/712387107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:43 compute-2 nova_compute[232428]: 2025-11-29 08:17:43.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-2 ceph-mon[77138]: pgmap v2351: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Nov 29 08:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1262924247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:43.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3260172181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:45 compute-2 sshd-session[287833]: Invalid user sol from 45.148.10.240 port 36844
Nov 29 08:17:45 compute-2 sshd-session[287833]: Connection closed by invalid user sol 45.148.10.240 port 36844 [preauth]
Nov 29 08:17:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:46 compute-2 ceph-mon[77138]: pgmap v2352: 305 pgs: 305 active+clean; 83 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 08:17:46 compute-2 nova_compute[232428]: 2025-11-29 08:17:46.226 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:46 compute-2 nova_compute[232428]: 2025-11-29 08:17:46.409 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3740720003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1151233783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:47.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:47 compute-2 nova_compute[232428]: 2025-11-29 08:17:47.978 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404252.9775972, 58ff3b6f-e4e2-4d9e-acff-792dd885b799 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:47 compute-2 nova_compute[232428]: 2025-11-29 08:17:47.978 232432 INFO nova.compute.manager [-] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] VM Stopped (Lifecycle Event)
Nov 29 08:17:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:47.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:47 compute-2 nova_compute[232428]: 2025-11-29 08:17:47.998 232432 DEBUG nova.compute.manager [None req-6a5456dd-e97b-4c5e-9b54-c582632ca9f3 - - - - - -] [instance: 58ff3b6f-e4e2-4d9e-acff-792dd885b799] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:48 compute-2 nova_compute[232428]: 2025-11-29 08:17:48.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:48 compute-2 ceph-mon[77138]: pgmap v2353: 305 pgs: 305 active+clean; 180 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.3 MiB/s wr, 149 op/s
Nov 29 08:17:49 compute-2 nova_compute[232428]: 2025-11-29 08:17:49.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:49.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:49.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:17:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:50 compute-2 ceph-mon[77138]: pgmap v2354: 305 pgs: 305 active+clean; 180 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 136 op/s
Nov 29 08:17:51 compute-2 nova_compute[232428]: 2025-11-29 08:17:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:51 compute-2 nova_compute[232428]: 2025-11-29 08:17:51.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:51 compute-2 ceph-mon[77138]: pgmap v2355: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 233 op/s
Nov 29 08:17:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:52.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:53 compute-2 nova_compute[232428]: 2025-11-29 08:17:53.018 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:53 compute-2 nova_compute[232428]: 2025-11-29 08:17:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:17:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:17:53 compute-2 sudo[287839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:53 compute-2 sudo[287839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:53 compute-2 sudo[287839]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:53 compute-2 ceph-mon[77138]: pgmap v2356: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Nov 29 08:17:53 compute-2 sudo[287864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:53 compute-2 sudo[287864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:53 compute-2 sudo[287864]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 08:17:54 compute-2 podman[287890]: 2025-11-29 08:17:54.691379139 +0000 UTC m=+0.084828917 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:17:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:55.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:55 compute-2 ceph-mon[77138]: pgmap v2357: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 273 op/s
Nov 29 08:17:55 compute-2 ceph-mon[77138]: osdmap e305: 3 total, 3 up, 3 in
Nov 29 08:17:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/805839888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 08:17:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:56 compute-2 nova_compute[232428]: 2025-11-29 08:17:56.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:56 compute-2 nova_compute[232428]: 2025-11-29 08:17:56.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:17:56 compute-2 nova_compute[232428]: 2025-11-29 08:17:56.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:17:56 compute-2 nova_compute[232428]: 2025-11-29 08:17:56.217 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:17:56 compute-2 nova_compute[232428]: 2025-11-29 08:17:56.413 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 08:17:56 compute-2 ceph-mon[77138]: osdmap e306: 3 total, 3 up, 3 in
Nov 29 08:17:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4066631849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:57 compute-2 nova_compute[232428]: 2025-11-29 08:17:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:17:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:17:57 compute-2 ceph-mon[77138]: pgmap v2360: 305 pgs: 305 active+clean; 152 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 924 KiB/s wr, 298 op/s
Nov 29 08:17:57 compute-2 ceph-mon[77138]: osdmap e307: 3 total, 3 up, 3 in
Nov 29 08:17:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2832711309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:17:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:17:58 compute-2 nova_compute[232428]: 2025-11-29 08:17:58.019 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:17:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:17:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:00 compute-2 ceph-mon[77138]: pgmap v2362: 305 pgs: 305 active+clean; 150 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.7 MiB/s wr, 316 op/s
Nov 29 08:18:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:00 compute-2 podman[287915]: 2025-11-29 08:18:00.725874285 +0000 UTC m=+0.111022946 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 08:18:01 compute-2 nova_compute[232428]: 2025-11-29 08:18:01.416 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:01.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1673223080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:02 compute-2 nova_compute[232428]: 2025-11-29 08:18:02.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:02 compute-2 nova_compute[232428]: 2025-11-29 08:18:02.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:18:03 compute-2 nova_compute[232428]: 2025-11-29 08:18:03.020 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 08:18:03 compute-2 nova_compute[232428]: 2025-11-29 08:18:03.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:03 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 08:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:03.323 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:03.324 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:03.325 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:03.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:03 compute-2 ceph-mon[77138]: pgmap v2363: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 7.7 MiB/s wr, 333 op/s
Nov 29 08:18:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.242 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.243 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:04.386 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.386 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:04.388 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:18:04 compute-2 ceph-mon[77138]: pgmap v2364: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.9 MiB/s wr, 223 op/s
Nov 29 08:18:04 compute-2 ceph-mon[77138]: osdmap e308: 3 total, 3 up, 3 in
Nov 29 08:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/503601174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2427365914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3408200411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:04 compute-2 nova_compute[232428]: 2025-11-29 08:18:04.713 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.003 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.005 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4385MB free_disk=20.92259979248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.005 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.005 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.490 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.490 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:18:05 compute-2 nova_compute[232428]: 2025-11-29 08:18:05.541 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:06.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:06 compute-2 ceph-mon[77138]: pgmap v2366: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 167 op/s
Nov 29 08:18:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3408200411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.418 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2486731525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.633 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.644 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.823 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.874 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:18:06 compute-2 nova_compute[232428]: 2025-11-29 08:18:06.875 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:08.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.446 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.447 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.464 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.525 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.526 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.541 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.542 232432 INFO nova.compute.claims [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:18:08 compute-2 nova_compute[232428]: 2025-11-29 08:18:08.763 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2486731525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3609030786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.244 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.253 232432 DEBUG nova.compute.provider_tree [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.273 232432 DEBUG nova.scheduler.client.report [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.302 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.303 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.363 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.364 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.389 232432 INFO nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.423 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.465 232432 INFO nova.virt.block_device [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Booting with blank volume at /dev/vda
Nov 29 08:18:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:09 compute-2 nova_compute[232428]: 2025-11-29 08:18:09.820 232432 DEBUG nova.policy [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504bc6adabad4f7d8c17b0438c4d9be7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:18:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:10 compute-2 ceph-mon[77138]: pgmap v2367: 305 pgs: 305 active+clean; 233 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 MiB/s wr, 199 op/s
Nov 29 08:18:10 compute-2 ceph-mon[77138]: pgmap v2368: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.7 MiB/s wr, 141 op/s
Nov 29 08:18:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3609030786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3478945521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.600 232432 DEBUG os_brick.utils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.603 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.625 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.625 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c8750-596b-46e2-af0e-d366206ac9f9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.627 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.637 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.638 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[276386cf-f36f-445f-b4ff-147fe50e9f39]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.640 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.657 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.657 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[3af6b23f-a45c-4387-9489-90c70b8be8ce]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.660 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6b06da-25ee-420b-86ea-8ab3cac0c3de]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.661 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.712 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "nvme version" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.717 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.718 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.718 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.719 232432 DEBUG os_brick.utils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] <== get_connector_properties: return (118ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.720 232432 DEBUG nova.virt.block_device [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating existing volume attachment record: cf6ba8d8-b1d6-4916-b30a-2c4b2c8f4ac1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:18:10 compute-2 nova_compute[232428]: 2025-11-29 08:18:10.856 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Successfully created port: be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:18:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/604072690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3279641367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:11.391 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:11 compute-2 nova_compute[232428]: 2025-11-29 08:18:11.420 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:12 compute-2 ceph-mon[77138]: pgmap v2369: 305 pgs: 305 active+clean; 228 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 4.9 MiB/s wr, 149 op/s
Nov 29 08:18:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/565305582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3901207140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.210 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.212 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.213 232432 INFO nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Creating image(s)
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.214 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.215 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Ensure instance console log exists: /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.215 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.216 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.217 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.605 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Successfully updated port: be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.625 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.625 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.625 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:18:12 compute-2 podman[288017]: 2025-11-29 08:18:12.729772269 +0000 UTC m=+0.131579770 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:18:12 compute-2 nova_compute[232428]: 2025-11-29 08:18:12.822 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:18:13 compute-2 nova_compute[232428]: 2025-11-29 08:18:13.024 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:13 compute-2 sudo[288044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:13 compute-2 sudo[288044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:13 compute-2 sudo[288044]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.837276) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293837531, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1844, "num_deletes": 257, "total_data_size": 3947656, "memory_usage": 4005296, "flush_reason": "Manual Compaction"}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293861736, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 1642320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48178, "largest_seqno": 50016, "table_properties": {"data_size": 1636270, "index_size": 3060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16587, "raw_average_key_size": 21, "raw_value_size": 1622724, "raw_average_value_size": 2112, "num_data_blocks": 134, "num_entries": 768, "num_filter_entries": 768, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404156, "oldest_key_time": 1764404156, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 24428 microseconds, and 13959 cpu microseconds.
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.861816) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 1642320 bytes OK
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.861859) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.864362) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.864391) EVENT_LOG_v1 {"time_micros": 1764404293864380, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.864418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3939219, prev total WAL file size 3939219, number of live WAL files 2.
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.866542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353131' seq:72057594037927935, type:22 .. '6D6772737461740031373635' seq:0, type:0; will stop at (end)
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(1603KB)], [90(11MB)]
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293866677, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13755757, "oldest_snapshot_seqno": -1}
Nov 29 08:18:13 compute-2 sudo[288069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:13 compute-2 sudo[288069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:13 compute-2 sudo[288069]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8109 keys, 10792149 bytes, temperature: kUnknown
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293958674, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10792149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10739925, "index_size": 30864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208813, "raw_average_key_size": 25, "raw_value_size": 10597365, "raw_average_value_size": 1306, "num_data_blocks": 1213, "num_entries": 8109, "num_filter_entries": 8109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.958993) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10792149 bytes
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.961220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.4 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(14.9) write-amplify(6.6) OK, records in: 8578, records dropped: 469 output_compression: NoCompression
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.961250) EVENT_LOG_v1 {"time_micros": 1764404293961237, "job": 56, "event": "compaction_finished", "compaction_time_micros": 92085, "compaction_time_cpu_micros": 50817, "output_level": 6, "num_output_files": 1, "total_output_size": 10792149, "num_input_records": 8578, "num_output_records": 8109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293962165, "job": 56, "event": "table_file_deletion", "file_number": 92}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293965102, "job": 56, "event": "table_file_deletion", "file_number": 90}
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.866451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.965261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.965270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.965273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.965276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:13.965279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:14 compute-2 ceph-mon[77138]: pgmap v2370: 305 pgs: 305 active+clean; 228 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 4.9 MiB/s wr, 149 op/s
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.351 232432 DEBUG nova.compute.manager [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-changed-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.351 232432 DEBUG nova.compute.manager [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Refreshing instance network info cache due to event network-changed-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.352 232432 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:14 compute-2 sudo[288094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:14 compute-2 sudo[288094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:14 compute-2 sudo[288094]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:14 compute-2 sudo[288119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:18:14 compute-2 sudo[288119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:14 compute-2 sudo[288119]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:14 compute-2 sudo[288144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:14 compute-2 sudo[288144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.820 232432 DEBUG nova.network.neutron [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:14 compute-2 sudo[288144]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:14 compute-2 sudo[288169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:18:14 compute-2 sudo[288169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.970 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.970 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance network_info: |[{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.971 232432 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.971 232432 DEBUG nova.network.neutron [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Refreshing network info cache for port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.978 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start _get_guest_xml network_info=[{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5d21b16e-ab30-4101-bf21-01197c71cf99', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5d21b16e-ab30-4101-bf21-01197c71cf99', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'attached_at': '', 'detached_at': '', 'volume_id': '5d21b16e-ab30-4101-bf21-01197c71cf99', 'serial': '5d21b16e-ab30-4101-bf21-01197c71cf99'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'cf6ba8d8-b1d6-4916-b30a-2c4b2c8f4ac1', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.986 232432 WARNING nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.997 232432 DEBUG nova.virt.libvirt.host [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:18:14 compute-2 nova_compute[232428]: 2025-11-29 08:18:14.997 232432 DEBUG nova.virt.libvirt.host [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.002 232432 DEBUG nova.virt.libvirt.host [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.003 232432 DEBUG nova.virt.libvirt.host [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.004 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.005 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.005 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.006 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.006 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.006 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.007 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.007 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.007 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.008 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.008 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.008 232432 DEBUG nova.virt.hardware [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.057 232432 DEBUG nova.storage.rbd_utils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.063 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:15 compute-2 sudo[288169]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:15 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:18:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:18:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1861002628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.594 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.621 232432 DEBUG nova.virt.libvirt.vif [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-285202970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-285202970',id=130,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-ri5l0c3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:09Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=6c463a92-8698-4035-b4d0-b1d3db01a43b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.622 232432 DEBUG nova.network.os_vif_util [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.623 232432 DEBUG nova.network.os_vif_util [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.624 232432 DEBUG nova.objects.instance [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.643 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <uuid>6c463a92-8698-4035-b4d0-b1d3db01a43b</uuid>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <name>instance-00000082</name>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-285202970</nova:name>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:18:14</nova:creationTime>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <nova:port uuid="be8a2e4d-8e9b-4eff-a873-d7c8ad350b96">
Nov 29 08:18:15 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <system>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="serial">6c463a92-8698-4035-b4d0-b1d3db01a43b</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="uuid">6c463a92-8698-4035-b4d0-b1d3db01a43b</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </system>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <os>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </os>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <features>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </features>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config">
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </source>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-5d21b16e-ab30-4101-bf21-01197c71cf99">
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </source>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:18:15 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <serial>5d21b16e-ab30-4101-bf21-01197c71cf99</serial>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:75:42:23"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <target dev="tapbe8a2e4d-8e"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/console.log" append="off"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <video>
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </video>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:18:15 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:18:15 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:18:15 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:18:15 compute-2 nova_compute[232428]: </domain>
Nov 29 08:18:15 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.645 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Preparing to wait for external event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.645 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.646 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.646 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.646 232432 DEBUG nova.virt.libvirt.vif [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-285202970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-285202970',id=130,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-ri5l0c3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:09Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=6c463a92-8698-4035-b4d0-b1d3db01a43b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.647 232432 DEBUG nova.network.os_vif_util [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.647 232432 DEBUG nova.network.os_vif_util [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.648 232432 DEBUG os_vif [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.648 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.649 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.649 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.653 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.653 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe8a2e4d-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.654 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe8a2e4d-8e, col_values=(('external_ids', {'iface-id': 'be8a2e4d-8e9b-4eff-a873-d7c8ad350b96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:42:23', 'vm-uuid': '6c463a92-8698-4035-b4d0-b1d3db01a43b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.656 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:15 compute-2 NetworkManager[48993]: <info>  [1764404295.6580] manager: (tapbe8a2e4d-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.658 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.665 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.666 232432 INFO os_vif [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e')
Nov 29 08:18:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.984 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.985 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.985 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:75:42:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:18:15 compute-2 nova_compute[232428]: 2025-11-29 08:18:15.986 232432 INFO nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Using config drive
Nov 29 08:18:16 compute-2 nova_compute[232428]: 2025-11-29 08:18:16.017 232432 DEBUG nova.storage.rbd_utils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:16 compute-2 ceph-mon[77138]: pgmap v2371: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 434 KiB/s rd, 4.6 MiB/s wr, 153 op/s
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1861002628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:18:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:16 compute-2 nova_compute[232428]: 2025-11-29 08:18:16.422 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.137 232432 INFO nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Creating config drive at /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.148 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptms_x0p7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.313 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptms_x0p7" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.363 232432 DEBUG nova.storage.rbd_utils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.369 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:17 compute-2 ceph-mon[77138]: pgmap v2372: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 271 op/s
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.845 232432 DEBUG oslo_concurrency.processutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.845 232432 INFO nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Deleting local config drive /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config because it was imported into RBD.
Nov 29 08:18:17 compute-2 kernel: tapbe8a2e4d-8e: entered promiscuous mode
Nov 29 08:18:17 compute-2 NetworkManager[48993]: <info>  [1764404297.9090] manager: (tapbe8a2e4d-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:17 compute-2 ovn_controller[134375]: 2025-11-29T08:18:17Z|00605|binding|INFO|Claiming lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for this chassis.
Nov 29 08:18:17 compute-2 ovn_controller[134375]: 2025-11-29T08:18:17Z|00606|binding|INFO|be8a2e4d-8e9b-4eff-a873-d7c8ad350b96: Claiming fa:16:3e:75:42:23 10.100.0.13
Nov 29 08:18:17 compute-2 nova_compute[232428]: 2025-11-29 08:18:17.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.941 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.943 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.945 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:18:17 compute-2 systemd-udevd[288341]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:17 compute-2 systemd-machined[194747]: New machine qemu-60-instance-00000082.
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.967 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[56df72a2-4c05-43f7-a8ec-2584853bd8be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.968 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c25940b-e1 in ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.970 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c25940b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.970 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3562acd3-17e1-41fa-87ed-ea72bdecb1f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.971 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a3148622-aa19-4050-bc17-8ead74860ef4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:17 compute-2 NetworkManager[48993]: <info>  [1764404297.9802] device (tapbe8a2e4d-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:18:17 compute-2 NetworkManager[48993]: <info>  [1764404297.9818] device (tapbe8a2e4d-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:18:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:17.993 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[39130ada-3c42-4e6f-b6b2-0a8be0977a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 systemd[1]: Started Virtual Machine qemu-60-instance-00000082.
Nov 29 08:18:18 compute-2 ovn_controller[134375]: 2025-11-29T08:18:18Z|00607|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 ovn-installed in OVS
Nov 29 08:18:18 compute-2 ovn_controller[134375]: 2025-11-29T08:18:18Z|00608|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 up in Southbound
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.021 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.026 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7523b919-c08d-45ab-ba08-fb9d94ee8dc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.064 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[473c9d3f-2c24-4264-9cce-1ca2445f5252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 NetworkManager[48993]: <info>  [1764404298.0727] manager: (tap3c25940b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 08:18:18 compute-2 systemd-udevd[288344]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.071 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d54bb2b8-000e-470e-8101-4718eddd6db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.120 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[17958cd8-3f02-4456-bfb3-e0ab4f8cf065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.123 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[844571cc-9790-4556-853d-26db696f1aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 NetworkManager[48993]: <info>  [1764404298.1520] device (tap3c25940b-e0): carrier: link connected
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.158 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[241290b5-2e37-4829-b76d-3a4aed5044c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.174 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6a8466-031e-4b95-9aba-9661bcf7cac0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726442, 'reachable_time': 43210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288373, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.198 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[41f16c39-b1ad-45b5-9a4b-cec4dd90dd67]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:387b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 726442, 'tstamp': 726442}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288374, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.225 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f94787-d974-4b9b-a0ae-42f1501cef9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726442, 'reachable_time': 43210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288375, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.261 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbcf531-f640-4e5f-bdf0-b7e1bad63937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.346 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c9cb58-c331-4d68-a6ba-a3c4b3153931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.348 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.348 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.349 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:18 compute-2 NetworkManager[48993]: <info>  [1764404298.3514] manager: (tap3c25940b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 08:18:18 compute-2 kernel: tap3c25940b-e0: entered promiscuous mode
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.355 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.357 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-2 ovn_controller[134375]: 2025-11-29T08:18:18Z|00609|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.359 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.360 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[87954e41-ebc4-474e-8688-3bf41d6cf40f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.361 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:18:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:18.362 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'env', 'PROCESS_TAG=haproxy-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c25940b-e63b-4443-a94b-0216a35e8dc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.379 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.481 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404298.4806976, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.481 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Started (Lifecycle Event)
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.485 232432 DEBUG nova.network.neutron [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updated VIF entry in instance network info cache for port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.486 232432 DEBUG nova.network.neutron [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.514 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.516 232432 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.519 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404298.4809594, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.520 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Paused (Lifecycle Event)
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.553 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.556 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:18:18 compute-2 nova_compute[232428]: 2025-11-29 08:18:18.617 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:18 compute-2 podman[288449]: 2025-11-29 08:18:18.752294384 +0000 UTC m=+0.060273287 container create 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:18:18 compute-2 systemd[1]: Started libpod-conmon-6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56.scope.
Nov 29 08:18:18 compute-2 podman[288449]: 2025-11-29 08:18:18.718660582 +0000 UTC m=+0.026639485 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:18:18 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:18:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03131941ccfab140b65ce3309f09e668f15472879b8df801b5f2b1009a54167e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2182033048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:18 compute-2 podman[288449]: 2025-11-29 08:18:18.854907447 +0000 UTC m=+0.162886400 container init 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:18:18 compute-2 podman[288449]: 2025-11-29 08:18:18.868035828 +0000 UTC m=+0.176014721 container start 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:18:18 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [NOTICE]   (288468) : New worker (288470) forked
Nov 29 08:18:18 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [NOTICE]   (288468) : Loading success.
Nov 29 08:18:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:19 compute-2 ceph-mon[77138]: pgmap v2373: 305 pgs: 305 active+clean; 247 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.3 MiB/s wr, 253 op/s
Nov 29 08:18:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/47726654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.882 232432 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.883 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.883 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.884 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.884 232432 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Processing event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.884 232432 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.885 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.885 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.885 232432 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.886 232432 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.886 232432 WARNING nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state building and task_state spawning.
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.888 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.893 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404299.8934805, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.894 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Resumed (Lifecycle Event)
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.897 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.903 232432 INFO nova.virt.libvirt.driver [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance spawned successfully.
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.904 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.919 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.929 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.937 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.938 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.939 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.940 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.940 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.941 232432 DEBUG nova.virt.libvirt.driver [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:19 compute-2 nova_compute[232428]: 2025-11-29 08:18:19.971 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:20 compute-2 nova_compute[232428]: 2025-11-29 08:18:20.011 232432 INFO nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Took 7.80 seconds to spawn the instance on the hypervisor.
Nov 29 08:18:20 compute-2 nova_compute[232428]: 2025-11-29 08:18:20.012 232432 DEBUG nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:20.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:20 compute-2 nova_compute[232428]: 2025-11-29 08:18:20.091 232432 INFO nova.compute.manager [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Took 11.59 seconds to build instance.
Nov 29 08:18:20 compute-2 nova_compute[232428]: 2025-11-29 08:18:20.107 232432 DEBUG oslo_concurrency.lockutils [None req-d0f1243c-2bb2-427b-897d-15330e38d2e5 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:20 compute-2 nova_compute[232428]: 2025-11-29 08:18:20.656 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:21 compute-2 nova_compute[232428]: 2025-11-29 08:18:21.425 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:21.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:21 compute-2 ceph-mon[77138]: pgmap v2374: 305 pgs: 305 active+clean; 169 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 302 op/s
Nov 29 08:18:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:23 compute-2 sudo[288481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:23 compute-2 sudo[288481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:23 compute-2 sudo[288481]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:23 compute-2 sudo[288506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:18:23 compute-2 sudo[288506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:23 compute-2 sudo[288506]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:23 compute-2 ceph-mon[77138]: pgmap v2375: 305 pgs: 305 active+clean; 169 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 244 op/s
Nov 29 08:18:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:18:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:18:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:23.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:24.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:24 compute-2 nova_compute[232428]: 2025-11-29 08:18:24.971 232432 INFO nova.compute.manager [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Rescuing
Nov 29 08:18:24 compute-2 nova_compute[232428]: 2025-11-29 08:18:24.972 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:24 compute-2 nova_compute[232428]: 2025-11-29 08:18:24.973 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:24 compute-2 nova_compute[232428]: 2025-11-29 08:18:24.974 232432 DEBUG nova.network.neutron [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:18:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:25.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:25 compute-2 ceph-mon[77138]: pgmap v2376: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 29 08:18:25 compute-2 nova_compute[232428]: 2025-11-29 08:18:25.659 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:25 compute-2 podman[288532]: 2025-11-29 08:18:25.682804953 +0000 UTC m=+0.088334457 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 08:18:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:26 compute-2 nova_compute[232428]: 2025-11-29 08:18:26.428 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:26 compute-2 nova_compute[232428]: 2025-11-29 08:18:26.885 232432 DEBUG nova.network.neutron [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:26 compute-2 nova_compute[232428]: 2025-11-29 08:18:26.911 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:27 compute-2 nova_compute[232428]: 2025-11-29 08:18:27.316 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:18:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:27.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:27 compute-2 ceph-mon[77138]: pgmap v2377: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 222 op/s
Nov 29 08:18:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2976345814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:28.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3454248546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3454248546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.670250) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308670348, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 255, "total_data_size": 501889, "memory_usage": 510872, "flush_reason": "Manual Compaction"}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308676617, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 331057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50021, "largest_seqno": 50461, "table_properties": {"data_size": 328498, "index_size": 595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6166, "raw_average_key_size": 18, "raw_value_size": 323375, "raw_average_value_size": 971, "num_data_blocks": 25, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404294, "oldest_key_time": 1764404294, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6424 microseconds, and 2918 cpu microseconds.
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.676677) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 331057 bytes OK
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.676710) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.678519) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.678542) EVENT_LOG_v1 {"time_micros": 1764404308678535, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.678568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 499083, prev total WAL file size 499083, number of live WAL files 2.
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.679253) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353133' seq:72057594037927935, type:22 .. '6C6F676D0031373634' seq:0, type:0; will stop at (end)
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(323KB)], [93(10MB)]
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308679398, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 11123206, "oldest_snapshot_seqno": -1}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7920 keys, 10984372 bytes, temperature: kUnknown
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308783751, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10984372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10932700, "index_size": 30760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205858, "raw_average_key_size": 25, "raw_value_size": 10792701, "raw_average_value_size": 1362, "num_data_blocks": 1206, "num_entries": 7920, "num_filter_entries": 7920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.784157) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10984372 bytes
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.786188) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.5 rd, 105.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(66.8) write-amplify(33.2) OK, records in: 8442, records dropped: 522 output_compression: NoCompression
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.786221) EVENT_LOG_v1 {"time_micros": 1764404308786206, "job": 58, "event": "compaction_finished", "compaction_time_micros": 104472, "compaction_time_cpu_micros": 51425, "output_level": 6, "num_output_files": 1, "total_output_size": 10984372, "num_input_records": 8442, "num_output_records": 7920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308786550, "job": 58, "event": "table_file_deletion", "file_number": 95}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308790674, "job": 58, "event": "table_file_deletion", "file_number": 93}
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.679147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.790767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.790779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.790784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.790788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:18:28.790790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:18:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:29.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:29 compute-2 ceph-mon[77138]: pgmap v2378: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 27 KiB/s wr, 96 op/s
Nov 29 08:18:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:30.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:30 compute-2 nova_compute[232428]: 2025-11-29 08:18:30.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:31 compute-2 nova_compute[232428]: 2025-11-29 08:18:31.429 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:31.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:31 compute-2 podman[288554]: 2025-11-29 08:18:31.661251877 +0000 UTC m=+0.068156324 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:18:31 compute-2 ceph-mon[77138]: pgmap v2379: 305 pgs: 305 active+clean; 197 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 849 KiB/s wr, 91 op/s
Nov 29 08:18:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4056697421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2581710571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/998118513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:33 compute-2 ceph-mon[77138]: pgmap v2380: 305 pgs: 305 active+clean; 197 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 834 KiB/s wr, 29 op/s
Nov 29 08:18:34 compute-2 sudo[288577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:34 compute-2 sudo[288577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:34 compute-2 sudo[288577]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:34 compute-2 sudo[288602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:34 compute-2 sudo[288602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:34 compute-2 sudo[288602]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1124941999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2922267082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1164412559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:35.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:35 compute-2 nova_compute[232428]: 2025-11-29 08:18:35.664 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:36 compute-2 ceph-mon[77138]: pgmap v2381: 305 pgs: 305 active+clean; 213 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 08:18:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3052598900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:36 compute-2 nova_compute[232428]: 2025-11-29 08:18:36.433 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1036624833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:37 compute-2 nova_compute[232428]: 2025-11-29 08:18:37.369 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:18:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:38.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:38 compute-2 ceph-mon[77138]: pgmap v2382: 305 pgs: 305 active+clean; 251 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.2 MiB/s wr, 62 op/s
Nov 29 08:18:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 08:18:39 compute-2 ceph-mon[77138]: osdmap e309: 3 total, 3 up, 3 in
Nov 29 08:18:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 08:18:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:40 compute-2 ceph-mon[77138]: pgmap v2384: 305 pgs: 305 active+clean; 281 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.2 MiB/s wr, 148 op/s
Nov 29 08:18:40 compute-2 ceph-mon[77138]: osdmap e310: 3 total, 3 up, 3 in
Nov 29 08:18:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 08:18:40 compute-2 nova_compute[232428]: 2025-11-29 08:18:40.666 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:41 compute-2 ceph-mon[77138]: osdmap e311: 3 total, 3 up, 3 in
Nov 29 08:18:41 compute-2 ceph-mon[77138]: pgmap v2387: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 340 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 9.6 MiB/s wr, 494 op/s
Nov 29 08:18:41 compute-2 nova_compute[232428]: 2025-11-29 08:18:41.437 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:41.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:43 compute-2 ceph-mon[77138]: pgmap v2388: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 340 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.7 MiB/s wr, 423 op/s
Nov 29 08:18:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:43 compute-2 podman[288631]: 2025-11-29 08:18:43.70456058 +0000 UTC m=+0.108727934 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:18:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:44.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:45 compute-2 ceph-mon[77138]: pgmap v2389: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.4 MiB/s wr, 375 op/s
Nov 29 08:18:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:45.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:45 compute-2 nova_compute[232428]: 2025-11-29 08:18:45.669 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.084 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.084 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:46.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.103 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.208 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.209 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.220 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.221 232432 INFO nova.compute.claims [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.316 232432 DEBUG nova.scheduler.client.report [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.338 232432 DEBUG nova.scheduler.client.report [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.339 232432 DEBUG nova.compute.provider_tree [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.360 232432 DEBUG nova.scheduler.client.report [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.378 232432 DEBUG nova.scheduler.client.report [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.457 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/657935972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.926 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.935 232432 DEBUG nova.compute.provider_tree [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.953 232432 DEBUG nova.scheduler.client.report [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.972 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:46 compute-2 nova_compute[232428]: 2025-11-29 08:18:46.973 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.016 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.017 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.037 232432 INFO nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.056 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.229 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.230 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.231 232432 INFO nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Creating image(s)
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.270 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.313 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.355 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.361 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.400 232432 DEBUG nova.policy [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3d42fe42c8cd46be80de3ef91eaf4aa8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c4fadda976d4920ad442d740dc7159a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.449 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.450 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.451 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.451 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:47 compute-2 ceph-mon[77138]: pgmap v2390: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.2 MiB/s wr, 368 op/s
Nov 29 08:18:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/657935972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.492 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.499 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2fc5a1ee-5656-4751-869b-2246534fbb37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.863 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2fc5a1ee-5656-4751-869b-2246534fbb37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:47.952 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:47.954 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:18:47 compute-2 nova_compute[232428]: 2025-11-29 08:18:47.958 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] resizing rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.061 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Successfully created port: 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.070 232432 DEBUG nova.objects.instance [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lazy-loading 'migration_context' on Instance uuid 2fc5a1ee-5656-4751-869b-2246534fbb37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:18:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:48.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.110 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.111 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Ensure instance console log exists: /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.111 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.112 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.112 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:48 compute-2 nova_compute[232428]: 2025-11-29 08:18:48.429 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:18:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 08:18:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:49.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:49 compute-2 ceph-mon[77138]: pgmap v2391: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 337 op/s
Nov 29 08:18:49 compute-2 ceph-mon[77138]: osdmap e312: 3 total, 3 up, 3 in
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.001 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Successfully updated port: 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.021 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.021 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquired lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.021 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:18:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:50.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.125 232432 DEBUG nova.compute.manager [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-changed-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.126 232432 DEBUG nova.compute.manager [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Refreshing instance network info cache due to event network-changed-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.126 232432 DEBUG oslo_concurrency.lockutils [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.199 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.671 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:50 compute-2 nova_compute[232428]: 2025-11-29 08:18:50.876 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:50.958 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.369 232432 DEBUG nova.network.neutron [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Updating instance_info_cache with network_info: [{"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.417 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Releasing lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.418 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Instance network_info: |[{"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.418 232432 DEBUG oslo_concurrency.lockutils [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.419 232432 DEBUG nova.network.neutron [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Refreshing network info cache for port 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.424 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Start _get_guest_xml network_info=[{"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.432 232432 WARNING nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.443 232432 DEBUG nova.virt.libvirt.host [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.444 232432 DEBUG nova.virt.libvirt.host [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.448 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.452 232432 DEBUG nova.virt.libvirt.host [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.453 232432 DEBUG nova.virt.libvirt.host [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.456 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.456 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.457 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.457 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.458 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.459 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.459 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.459 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.460 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.460 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.461 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.461 232432 DEBUG nova.virt.hardware [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:18:51 compute-2 nova_compute[232428]: 2025-11-29 08:18:51.467 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:51.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:18:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1584268662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:52 compute-2 ceph-mon[77138]: pgmap v2393: 305 pgs: 305 active+clean; 447 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 259 op/s
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.471 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.517 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.522 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:18:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/863670327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.962 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.964 232432 DEBUG nova.virt.libvirt.vif [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-15156658',id=134,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c4fadda976d4920ad442d740dc7159a',ramdisk_id='',reservation_id='r-ckmplwyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-999929112',o
wner_user_name='tempest-ServersNegativeTestMultiTenantJSON-999929112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:47Z,user_data=None,user_id='3d42fe42c8cd46be80de3ef91eaf4aa8',uuid=2fc5a1ee-5656-4751-869b-2246534fbb37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.964 232432 DEBUG nova.network.os_vif_util [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converting VIF {"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.965 232432 DEBUG nova.network.os_vif_util [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:52 compute-2 nova_compute[232428]: 2025-11-29 08:18:52.966 232432 DEBUG nova.objects.instance [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fc5a1ee-5656-4751-869b-2246534fbb37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.139 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <uuid>2fc5a1ee-5656-4751-869b-2246534fbb37</uuid>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <name>instance-00000086</name>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-15156658</nova:name>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:18:51</nova:creationTime>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:user uuid="3d42fe42c8cd46be80de3ef91eaf4aa8">tempest-ServersNegativeTestMultiTenantJSON-999929112-project-member</nova:user>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:project uuid="8c4fadda976d4920ad442d740dc7159a">tempest-ServersNegativeTestMultiTenantJSON-999929112</nova:project>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <nova:port uuid="47bc1fa3-34e1-4a86-a591-c87f20ecaa1f">
Nov 29 08:18:53 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <system>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="serial">2fc5a1ee-5656-4751-869b-2246534fbb37</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="uuid">2fc5a1ee-5656-4751-869b-2246534fbb37</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </system>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <os>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </os>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <features>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </features>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2fc5a1ee-5656-4751-869b-2246534fbb37_disk">
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config">
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:18:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:26:88:79"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <target dev="tap47bc1fa3-34"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/console.log" append="off"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <video>
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </video>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:18:53 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:18:53 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:18:53 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:18:53 compute-2 nova_compute[232428]: </domain>
Nov 29 08:18:53 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.139 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Preparing to wait for external event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.140 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.140 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.140 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.141 232432 DEBUG nova.virt.libvirt.vif [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-15156658',id=134,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c4fadda976d4920ad442d740dc7159a',ramdisk_id='',reservation_id='r-ckmplwyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-99
9929112',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-999929112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:47Z,user_data=None,user_id='3d42fe42c8cd46be80de3ef91eaf4aa8',uuid=2fc5a1ee-5656-4751-869b-2246534fbb37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.141 232432 DEBUG nova.network.os_vif_util [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converting VIF {"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.142 232432 DEBUG nova.network.os_vif_util [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.143 232432 DEBUG os_vif [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.144 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.145 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.149 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47bc1fa3-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.149 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47bc1fa3-34, col_values=(('external_ids', {'iface-id': '47bc1fa3-34e1-4a86-a591-c87f20ecaa1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:88:79', 'vm-uuid': '2fc5a1ee-5656-4751-869b-2246534fbb37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-2 NetworkManager[48993]: <info>  [1764404333.1529] manager: (tap47bc1fa3-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.163 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.164 232432 INFO os_vif [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34')
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.219 232432 DEBUG nova.network.neutron [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Updated VIF entry in instance network info cache for port 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.220 232432 DEBUG nova.network.neutron [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Updating instance_info_cache with network_info: [{"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.234 232432 DEBUG oslo_concurrency.lockutils [req-92877642-04af-4b27-b491-b44792e017cc req-749d7ec7-4089-47e6-8f28-42b1f141136e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2fc5a1ee-5656-4751-869b-2246534fbb37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.239 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.239 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.239 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] No VIF found with MAC fa:16:3e:26:88:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.240 232432 INFO nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Using config drive
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.280 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1584268662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:53 compute-2 ceph-mon[77138]: pgmap v2394: 305 pgs: 305 active+clean; 447 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 259 op/s
Nov 29 08:18:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/863670327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:53.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.793 232432 INFO nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Creating config drive at /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.799 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgkhywz8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:53 compute-2 nova_compute[232428]: 2025-11-29 08:18:53.972 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgkhywz8" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.018 232432 DEBUG nova.storage.rbd_utils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] rbd image 2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.022 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config 2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:54 compute-2 sudo[288972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:54 compute-2 sudo[288972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:54 compute-2 sudo[288972]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.309 232432 DEBUG oslo_concurrency.processutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config 2fc5a1ee-5656-4751-869b-2246534fbb37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.311 232432 INFO nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Deleting local config drive /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37/disk.config because it was imported into RBD.
Nov 29 08:18:54 compute-2 sudo[289000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:54 compute-2 sudo[289000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:54 compute-2 sudo[289000]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.4034] manager: (tap47bc1fa3-34): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 08:18:54 compute-2 kernel: tap47bc1fa3-34: entered promiscuous mode
Nov 29 08:18:54 compute-2 ovn_controller[134375]: 2025-11-29T08:18:54Z|00610|binding|INFO|Claiming lport 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f for this chassis.
Nov 29 08:18:54 compute-2 ovn_controller[134375]: 2025-11-29T08:18:54Z|00611|binding|INFO|47bc1fa3-34e1-4a86-a591-c87f20ecaa1f: Claiming fa:16:3e:26:88:79 10.100.0.9
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.407 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.425 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:88:79 10.100.0.9'], port_security=['fa:16:3e:26:88:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2fc5a1ee-5656-4751-869b-2246534fbb37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c4fadda976d4920ad442d740dc7159a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c327e5e4-6fc1-4d98-be41-3eb6a4cb7c8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e34c4def-1208-4082-b220-edac1763c6f9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.427 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f in datapath f3f9f779-5762-4d67-a10d-ad7d2b087f15 bound to our chassis
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.430 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3f9f779-5762-4d67-a10d-ad7d2b087f15
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.453 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e19651e8-7f08-4ebc-9b12-45acfda9c445]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.455 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3f9f779-51 in ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.458 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3f9f779-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.458 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe1d76a-071d-40cf-9ee4-fd1dab969079]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.459 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a3334c-6a0f-4de7-897a-04b22b985584]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 systemd-machined[194747]: New machine qemu-61-instance-00000086.
Nov 29 08:18:54 compute-2 systemd-udevd[289039]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.481 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[302ce976-a240-48f0-aa77-3bf0fda26ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.4896] device (tap47bc1fa3-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:18:54 compute-2 systemd[1]: Started Virtual Machine qemu-61-instance-00000086.
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.4921] device (tap47bc1fa3-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_controller[134375]: 2025-11-29T08:18:54Z|00612|binding|INFO|Setting lport 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f ovn-installed in OVS
Nov 29 08:18:54 compute-2 ovn_controller[134375]: 2025-11-29T08:18:54Z|00613|binding|INFO|Setting lport 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f up in Southbound
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.517 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.517 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[08d837e6-a58a-4ee1-aff9-e73f80499167]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.565 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4d47e57c-0c9d-4ecb-a119-fabc38606708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.571 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b89103-6980-46e1-b778-fe43f03444c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 systemd-udevd[289042]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.5737] manager: (tapf3f9f779-50): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.615 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[db18528a-203e-472b-9362-bb4a2bf786d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.619 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9c0520-1c85-4889-a905-a53344718313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.6452] device (tapf3f9f779-50): carrier: link connected
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.653 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a36ee6-e6e2-4075-865d-d13214adf436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.675 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[501f60d3-3ffc-4555-8f5f-edbb0c4cbe88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3f9f779-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:d3:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730091, 'reachable_time': 20737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289071, 'error': None, 'target': 'ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.696 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[69b10b2e-649e-4d70-bcc5-855adc53f910]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:d349'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730091, 'tstamp': 730091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289072, 'error': None, 'target': 'ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.705 232432 DEBUG nova.compute.manager [req-04e1ca88-6121-466f-b885-d4aabc9aa2bc req-25408dcb-1008-4d3f-995d-9701a8c921c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.706 232432 DEBUG oslo_concurrency.lockutils [req-04e1ca88-6121-466f-b885-d4aabc9aa2bc req-25408dcb-1008-4d3f-995d-9701a8c921c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.706 232432 DEBUG oslo_concurrency.lockutils [req-04e1ca88-6121-466f-b885-d4aabc9aa2bc req-25408dcb-1008-4d3f-995d-9701a8c921c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.706 232432 DEBUG oslo_concurrency.lockutils [req-04e1ca88-6121-466f-b885-d4aabc9aa2bc req-25408dcb-1008-4d3f-995d-9701a8c921c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.706 232432 DEBUG nova.compute.manager [req-04e1ca88-6121-466f-b885-d4aabc9aa2bc req-25408dcb-1008-4d3f-995d-9701a8c921c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Processing event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.717 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f34808-df20-4ea8-abad-920021cece25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3f9f779-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:d3:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730091, 'reachable_time': 20737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289073, 'error': None, 'target': 'ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.767 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee84c19d-0f70-475e-a01a-b54398d1da58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.859 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[38eb6b2c-b40f-44fc-ae50-376f44e53cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.861 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3f9f779-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.862 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.863 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3f9f779-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.864 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 NetworkManager[48993]: <info>  [1764404334.8659] manager: (tapf3f9f779-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 08:18:54 compute-2 kernel: tapf3f9f779-50: entered promiscuous mode
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.870 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3f9f779-50, col_values=(('external_ids', {'iface-id': 'ff491758-cb84-4f4e-b705-069411902595'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_controller[134375]: 2025-11-29T08:18:54Z|00614|binding|INFO|Releasing lport ff491758-cb84-4f4e-b705-069411902595 from this chassis (sb_readonly=0)
Nov 29 08:18:54 compute-2 nova_compute[232428]: 2025-11-29 08:18:54.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.892 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3f9f779-5762-4d67-a10d-ad7d2b087f15.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3f9f779-5762-4d67-a10d-ad7d2b087f15.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f20194-fd28-4654-9d89-b5fd095afe5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.896 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-f3f9f779-5762-4d67-a10d-ad7d2b087f15
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/f3f9f779-5762-4d67-a10d-ad7d2b087f15.pid.haproxy
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID f3f9f779-5762-4d67-a10d-ad7d2b087f15
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:18:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:18:54.898 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'env', 'PROCESS_TAG=haproxy-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3f9f779-5762-4d67-a10d-ad7d2b087f15.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.151 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404335.1511536, 2fc5a1ee-5656-4751-869b-2246534fbb37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.152 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] VM Started (Lifecycle Event)
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.154 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.158 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.163 232432 INFO nova.virt.libvirt.driver [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Instance spawned successfully.
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.164 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.177 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.187 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.194 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.195 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.196 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.196 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.196 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.197 232432 DEBUG nova.virt.libvirt.driver [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.205 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.206 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404335.151443, 2fc5a1ee-5656-4751-869b-2246534fbb37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.206 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] VM Paused (Lifecycle Event)
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.234 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.240 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404335.1585143, 2fc5a1ee-5656-4751-869b-2246534fbb37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.241 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] VM Resumed (Lifecycle Event)
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.255 232432 INFO nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Took 8.03 seconds to spawn the instance on the hypervisor.
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.256 232432 DEBUG nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.267 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.271 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.310 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:55 compute-2 podman[289147]: 2025-11-29 08:18:55.322069418 +0000 UTC m=+0.057014246 container create fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.350 232432 INFO nova.compute.manager [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Took 9.18 seconds to build instance.
Nov 29 08:18:55 compute-2 systemd[1]: Started libpod-conmon-fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2.scope.
Nov 29 08:18:55 compute-2 podman[289147]: 2025-11-29 08:18:55.29627338 +0000 UTC m=+0.031218228 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:18:55 compute-2 nova_compute[232428]: 2025-11-29 08:18:55.394 232432 DEBUG oslo_concurrency.lockutils [None req-9c8ac320-70b3-4675-a439-c4c3cfb43bb4 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:55 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:18:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67946ad8a2478e3afbd8ac4456339a4d7540e7535be753780de176f207867d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:55 compute-2 podman[289147]: 2025-11-29 08:18:55.437718137 +0000 UTC m=+0.172663005 container init fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:18:55 compute-2 podman[289147]: 2025-11-29 08:18:55.45023495 +0000 UTC m=+0.185179808 container start fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:18:55 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [NOTICE]   (289166) : New worker (289168) forked
Nov 29 08:18:55 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [NOTICE]   (289166) : Loading success.
Nov 29 08:18:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:18:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:18:55 compute-2 ceph-mon[77138]: pgmap v2395: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.7 MiB/s wr, 243 op/s
Nov 29 08:18:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.233 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.234 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.235 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:56 compute-2 nova_compute[232428]: 2025-11-29 08:18:56.450 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:56 compute-2 podman[289178]: 2025-11-29 08:18:56.709875141 +0000 UTC m=+0.092882699 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.146 232432 DEBUG nova.compute.manager [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.146 232432 DEBUG oslo_concurrency.lockutils [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.147 232432 DEBUG oslo_concurrency.lockutils [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.147 232432 DEBUG oslo_concurrency.lockutils [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.147 232432 DEBUG nova.compute.manager [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] No waiting events found dispatching network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.147 232432 WARNING nova.compute.manager [req-c4d3fd94-2620-41cd-aa91-67ac1d05578e req-35b634ba-6f53-40ca-ba95-76ee42b8c139 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received unexpected event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f for instance with vm_state active and task_state None.
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.440 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.461 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:57 compute-2 nova_compute[232428]: 2025-11-29 08:18:57.462 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:18:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:57 compute-2 ceph-mon[77138]: pgmap v2396: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 9.8 MiB/s wr, 251 op/s
Nov 29 08:18:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1548226146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:58.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:58 compute-2 nova_compute[232428]: 2025-11-29 08:18:58.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:58 compute-2 nova_compute[232428]: 2025-11-29 08:18:58.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1630120659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:18:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:18:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:18:59 compute-2 nova_compute[232428]: 2025-11-29 08:18:59.627 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:18:59 compute-2 ceph-mon[77138]: pgmap v2397: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 9.0 MiB/s wr, 266 op/s
Nov 29 08:19:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:00.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:01 compute-2 nova_compute[232428]: 2025-11-29 08:19:01.453 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:01.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:01 compute-2 ceph-mon[77138]: pgmap v2398: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.8 MiB/s wr, 287 op/s
Nov 29 08:19:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.150 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.150 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.151 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.151 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.152 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.153 232432 INFO nova.compute.manager [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Terminating instance
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.154 232432 DEBUG nova.compute.manager [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:19:02 compute-2 kernel: tap47bc1fa3-34 (unregistering): left promiscuous mode
Nov 29 08:19:02 compute-2 NetworkManager[48993]: <info>  [1764404342.2014] device (tap47bc1fa3-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:19:02 compute-2 ovn_controller[134375]: 2025-11-29T08:19:02Z|00615|binding|INFO|Releasing lport 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f from this chassis (sb_readonly=0)
Nov 29 08:19:02 compute-2 ovn_controller[134375]: 2025-11-29T08:19:02Z|00616|binding|INFO|Setting lport 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f down in Southbound
Nov 29 08:19:02 compute-2 ovn_controller[134375]: 2025-11-29T08:19:02Z|00617|binding|INFO|Removing iface tap47bc1fa3-34 ovn-installed in OVS
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.213 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.215 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.221 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:88:79 10.100.0.9'], port_security=['fa:16:3e:26:88:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2fc5a1ee-5656-4751-869b-2246534fbb37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c4fadda976d4920ad442d740dc7159a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c327e5e4-6fc1-4d98-be41-3eb6a4cb7c8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e34c4def-1208-4082-b220-edac1763c6f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.223 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 47bc1fa3-34e1-4a86-a591-c87f20ecaa1f in datapath f3f9f779-5762-4d67-a10d-ad7d2b087f15 unbound from our chassis
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.225 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3f9f779-5762-4d67-a10d-ad7d2b087f15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.227 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e4082e88-a152-470c-830a-f84caece5b36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.228 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15 namespace which is not needed anymore
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 29 08:19:02 compute-2 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000086.scope: Consumed 7.972s CPU time.
Nov 29 08:19:02 compute-2 systemd-machined[194747]: Machine qemu-61-instance-00000086 terminated.
Nov 29 08:19:02 compute-2 podman[289202]: 2025-11-29 08:19:02.301434966 +0000 UTC m=+0.069590671 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:19:02 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [NOTICE]   (289166) : haproxy version is 2.8.14-c23fe91
Nov 29 08:19:02 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [NOTICE]   (289166) : path to executable is /usr/sbin/haproxy
Nov 29 08:19:02 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [WARNING]  (289166) : Exiting Master process...
Nov 29 08:19:02 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [ALERT]    (289166) : Current worker (289168) exited with code 143 (Terminated)
Nov 29 08:19:02 compute-2 neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15[289162]: [WARNING]  (289166) : All workers exited. Exiting... (0)
Nov 29 08:19:02 compute-2 systemd[1]: libpod-fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2.scope: Deactivated successfully.
Nov 29 08:19:02 compute-2 podman[289244]: 2025-11-29 08:19:02.381206212 +0000 UTC m=+0.048077196 container died fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.401 232432 INFO nova.virt.libvirt.driver [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Instance destroyed successfully.
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.402 232432 DEBUG nova.objects.instance [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lazy-loading 'resources' on Instance uuid 2fc5a1ee-5656-4751-869b-2246534fbb37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-a67946ad8a2478e3afbd8ac4456339a4d7540e7535be753780de176f207867d8-merged.mount: Deactivated successfully.
Nov 29 08:19:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2-userdata-shm.mount: Deactivated successfully.
Nov 29 08:19:02 compute-2 podman[289244]: 2025-11-29 08:19:02.420143371 +0000 UTC m=+0.087014355 container cleanup fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.420 232432 DEBUG nova.virt.libvirt.vif [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-15156658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-15156658',id=134,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c4fadda976d4920ad442d740dc7159a',ramdisk_id='',reservation_id='r-ckmplwyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-999929112',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-999929112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:55Z,user_data=None,user_id='3d42fe42c8cd46be80de3ef91eaf4aa8',uuid=2fc5a1ee-5656-4751-869b-2246534fbb37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.421 232432 DEBUG nova.network.os_vif_util [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converting VIF {"id": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "address": "fa:16:3e:26:88:79", "network": {"id": "f3f9f779-5762-4d67-a10d-ad7d2b087f15", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1304683687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c4fadda976d4920ad442d740dc7159a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bc1fa3-34", "ovs_interfaceid": "47bc1fa3-34e1-4a86-a591-c87f20ecaa1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.421 232432 DEBUG nova.network.os_vif_util [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.422 232432 DEBUG os_vif [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.424 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.425 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47bc1fa3-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.426 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.428 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.432 232432 INFO os_vif [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:88:79,bridge_name='br-int',has_traffic_filtering=True,id=47bc1fa3-34e1-4a86-a591-c87f20ecaa1f,network=Network(f3f9f779-5762-4d67-a10d-ad7d2b087f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bc1fa3-34')
Nov 29 08:19:02 compute-2 systemd[1]: libpod-conmon-fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2.scope: Deactivated successfully.
Nov 29 08:19:02 compute-2 podman[289281]: 2025-11-29 08:19:02.496920585 +0000 UTC m=+0.046456676 container remove fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.503 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb610d4-11c3-4103-b208-5f836aa02d16]: (4, ('Sat Nov 29 08:19:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15 (fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2)\nfc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2\nSat Nov 29 08:19:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15 (fc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2)\nfc5682c0cf5a1baa91a817591b547d572b7a0fd0540bf5d06b4c746b4f9d9ea2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.504 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2da742b-48dc-4c61-9860-2f53cd87058c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.505 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3f9f779-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 kernel: tapf3f9f779-50: left promiscuous mode
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.525 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[01b138a5-f4fb-4b1d-98ae-0b93cce6c4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.541 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cb53cb-106d-4e40-88dc-9f6fab987225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.542 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[726b9196-4346-4dfe-9beb-b9c532da84fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.559 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2a629415-dac7-441f-9fbc-dbb6b6842cf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730082, 'reachable_time': 30492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289316, 'error': None, 'target': 'ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 systemd[1]: run-netns-ovnmeta\x2df3f9f779\x2d5762\x2d4d67\x2da10d\x2dad7d2b087f15.mount: Deactivated successfully.
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.563 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3f9f779-5762-4d67-a10d-ad7d2b087f15 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:19:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:02.563 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb10d56-5906-477a-93f0-77bf5ad99390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.793 232432 INFO nova.virt.libvirt.driver [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Deleting instance files /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37_del
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.794 232432 INFO nova.virt.libvirt.driver [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Deletion of /var/lib/nova/instances/2fc5a1ee-5656-4751-869b-2246534fbb37_del complete
Nov 29 08:19:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/585820072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1709759204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.877 232432 INFO nova.compute.manager [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.878 232432 DEBUG oslo.service.loopingcall [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.879 232432 DEBUG nova.compute.manager [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:19:02 compute-2 nova_compute[232428]: 2025-11-29 08:19:02.879 232432 DEBUG nova.network.neutron [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.059 232432 DEBUG nova.compute.manager [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-unplugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.060 232432 DEBUG oslo_concurrency.lockutils [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.061 232432 DEBUG oslo_concurrency.lockutils [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.061 232432 DEBUG oslo_concurrency.lockutils [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.062 232432 DEBUG nova.compute.manager [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] No waiting events found dispatching network-vif-unplugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:03 compute-2 nova_compute[232428]: 2025-11-29 08:19:03.062 232432 DEBUG nova.compute.manager [req-904a3e90-2624-41c5-834c-df6eb65cb679 req-85c60d2f-a9c4-409c-88b4-4786b653c4fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-unplugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:03.324 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:03.325 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:03.325 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:03.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:03 compute-2 ceph-mon[77138]: pgmap v2399: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 170 op/s
Nov 29 08:19:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3757451918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/846496273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4058452510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.059 232432 DEBUG nova.network.neutron [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.079 232432 INFO nova.compute.manager [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Took 1.20 seconds to deallocate network for instance.
Nov 29 08:19:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.133 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:04.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.134 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.208 232432 DEBUG oslo_concurrency.processutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.261 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3511489581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.707 232432 DEBUG oslo_concurrency.processutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.714 232432 DEBUG nova.compute.provider_tree [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.735 232432 DEBUG nova.scheduler.client.report [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.771 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.774 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.775 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.775 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.776 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3511489581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.833 232432 INFO nova.scheduler.client.report [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Deleted allocations for instance 2fc5a1ee-5656-4751-869b-2246534fbb37
Nov 29 08:19:04 compute-2 nova_compute[232428]: 2025-11-29 08:19:04.911 232432 DEBUG oslo_concurrency.lockutils [None req-5b1bc1de-402c-4ac8-9061-4ec21882fd5f 3d42fe42c8cd46be80de3ef91eaf4aa8 8c4fadda976d4920ad442d740dc7159a - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.176 232432 DEBUG nova.compute.manager [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.177 232432 DEBUG oslo_concurrency.lockutils [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.178 232432 DEBUG oslo_concurrency.lockutils [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.178 232432 DEBUG oslo_concurrency.lockutils [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2fc5a1ee-5656-4751-869b-2246534fbb37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.178 232432 DEBUG nova.compute.manager [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] No waiting events found dispatching network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.179 232432 WARNING nova.compute.manager [req-21d501da-77fc-49e8-80b5-5e963f68220d req-bfae7d9f-f57b-40f6-b338-aad7699a7073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received unexpected event network-vif-plugged-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f for instance with vm_state deleted and task_state None.
Nov 29 08:19:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3946664096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.225 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.293 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.294 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.437 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.439 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4289MB free_disk=20.788658142089844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.439 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.439 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.544 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 6c463a92-8698-4035-b4d0-b1d3db01a43b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:19:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:05.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.545 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.545 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:19:05 compute-2 nova_compute[232428]: 2025-11-29 08:19:05.582 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:05 compute-2 ceph-mon[77138]: pgmap v2400: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 179 op/s
Nov 29 08:19:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3946664096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/342531704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.022 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.031 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.052 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.107 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.108 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.183 232432 DEBUG nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Received event network-vif-deleted-47bc1fa3-34e1-4a86-a591-c87f20ecaa1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:06 compute-2 nova_compute[232428]: 2025-11-29 08:19:06.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/342531704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:07 compute-2 nova_compute[232428]: 2025-11-29 08:19:07.109 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:07 compute-2 nova_compute[232428]: 2025-11-29 08:19:07.428 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:07.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:08 compute-2 ceph-mon[77138]: pgmap v2401: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.183125) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348183213, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 701, "num_deletes": 252, "total_data_size": 1077828, "memory_usage": 1093896, "flush_reason": "Manual Compaction"}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348200403, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 709396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50466, "largest_seqno": 51162, "table_properties": {"data_size": 706014, "index_size": 1226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8177, "raw_average_key_size": 19, "raw_value_size": 699105, "raw_average_value_size": 1676, "num_data_blocks": 54, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404309, "oldest_key_time": 1764404309, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17919 microseconds, and 12678 cpu microseconds.
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.201034) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 709396 bytes OK
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.201273) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.207538) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.207569) EVENT_LOG_v1 {"time_micros": 1764404348207559, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.207598) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1074037, prev total WAL file size 1074037, number of live WAL files 2.
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.210193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(692KB)], [96(10MB)]
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348210264, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11693768, "oldest_snapshot_seqno": -1}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7821 keys, 9808216 bytes, temperature: kUnknown
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348319168, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9808216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9758339, "index_size": 29221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 204579, "raw_average_key_size": 26, "raw_value_size": 9621019, "raw_average_value_size": 1230, "num_data_blocks": 1134, "num_entries": 7821, "num_filter_entries": 7821, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.319620) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9808216 bytes
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.323817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.1 rd, 89.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(30.3) write-amplify(13.8) OK, records in: 8337, records dropped: 516 output_compression: NoCompression
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.323895) EVENT_LOG_v1 {"time_micros": 1764404348323865, "job": 60, "event": "compaction_finished", "compaction_time_micros": 109137, "compaction_time_cpu_micros": 44563, "output_level": 6, "num_output_files": 1, "total_output_size": 9808216, "num_input_records": 8337, "num_output_records": 7821, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348324544, "job": 60, "event": "table_file_deletion", "file_number": 98}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348328685, "job": 60, "event": "table_file_deletion", "file_number": 96}
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.210073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.328978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.328986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.328988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.328991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:19:08.328993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:19:08 compute-2 ovn_controller[134375]: 2025-11-29T08:19:08Z|00618|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 08:19:09 compute-2 nova_compute[232428]: 2025-11-29 08:19:09.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2847169781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:09.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:10 compute-2 ceph-mon[77138]: pgmap v2402: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 83 KiB/s wr, 158 op/s
Nov 29 08:19:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3600629391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:10 compute-2 nova_compute[232428]: 2025-11-29 08:19:10.685 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:19:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:11 compute-2 nova_compute[232428]: 2025-11-29 08:19:11.460 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:11.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:12 compute-2 ceph-mon[77138]: pgmap v2403: 305 pgs: 305 active+clean; 416 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 29 08:19:12 compute-2 nova_compute[232428]: 2025-11-29 08:19:12.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:13.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:14 compute-2 ceph-mon[77138]: pgmap v2404: 305 pgs: 305 active+clean; 416 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Nov 29 08:19:14 compute-2 sudo[289391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:14 compute-2 sudo[289391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:14 compute-2 sudo[289391]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:14 compute-2 sudo[289422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:14 compute-2 sudo[289422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:14 compute-2 sudo[289422]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:14 compute-2 podman[289415]: 2025-11-29 08:19:14.698774383 +0000 UTC m=+0.181402159 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:19:15 compute-2 ceph-mon[77138]: pgmap v2405: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Nov 29 08:19:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:15.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:16.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3646109557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:16 compute-2 nova_compute[232428]: 2025-11-29 08:19:16.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:17 compute-2 ceph-mon[77138]: pgmap v2406: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 341 op/s
Nov 29 08:19:17 compute-2 nova_compute[232428]: 2025-11-29 08:19:17.400 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404342.3986244, 2fc5a1ee-5656-4751-869b-2246534fbb37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:17 compute-2 nova_compute[232428]: 2025-11-29 08:19:17.401 232432 INFO nova.compute.manager [-] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] VM Stopped (Lifecycle Event)
Nov 29 08:19:17 compute-2 nova_compute[232428]: 2025-11-29 08:19:17.426 232432 DEBUG nova.compute.manager [None req-d529f375-33a4-4a3e-8145-e236690814c8 - - - - - -] [instance: 2fc5a1ee-5656-4751-869b-2246534fbb37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:17 compute-2 nova_compute[232428]: 2025-11-29 08:19:17.434 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:17.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:18.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1650006821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:19 compute-2 ceph-mon[77138]: pgmap v2407: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 298 op/s
Nov 29 08:19:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:19.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:20.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/610726783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:21 compute-2 nova_compute[232428]: 2025-11-29 08:19:21.467 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:21 compute-2 ceph-mon[77138]: pgmap v2408: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 343 op/s
Nov 29 08:19:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/385554390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3711450792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:21.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:21 compute-2 nova_compute[232428]: 2025-11-29 08:19:21.743 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:19:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:22.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:22 compute-2 nova_compute[232428]: 2025-11-29 08:19:22.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:23 compute-2 sudo[289473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:23 compute-2 sudo[289473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:23 compute-2 sudo[289473]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:23 compute-2 sudo[289498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:19:23 compute-2 sudo[289498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:23 compute-2 sudo[289498]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:23 compute-2 sudo[289523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:23 compute-2 sudo[289523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:23 compute-2 sudo[289523]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:23 compute-2 ceph-mon[77138]: pgmap v2409: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 234 op/s
Nov 29 08:19:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:23.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:23 compute-2 sudo[289548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:19:23 compute-2 sudo[289548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:24.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:24 compute-2 sudo[289548]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:19:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:25 compute-2 ceph-mon[77138]: pgmap v2410: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 257 op/s
Nov 29 08:19:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:25.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:26.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:26 compute-2 nova_compute[232428]: 2025-11-29 08:19:26.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-2 nova_compute[232428]: 2025-11-29 08:19:27.439 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-2 ceph-mon[77138]: pgmap v2411: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 283 op/s
Nov 29 08:19:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 08:19:27 compute-2 podman[289607]: 2025-11-29 08:19:27.7465165 +0000 UTC m=+0.062400728 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 08:19:27 compute-2 nova_compute[232428]: 2025-11-29 08:19:27.770 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance failed to shutdown in 60 seconds.
Nov 29 08:19:27 compute-2 kernel: tapbe8a2e4d-8e (unregistering): left promiscuous mode
Nov 29 08:19:27 compute-2 NetworkManager[48993]: <info>  [1764404367.8175] device (tapbe8a2e4d-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:19:27 compute-2 ovn_controller[134375]: 2025-11-29T08:19:27Z|00619|binding|INFO|Releasing lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 from this chassis (sb_readonly=0)
Nov 29 08:19:27 compute-2 ovn_controller[134375]: 2025-11-29T08:19:27Z|00620|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 down in Southbound
Nov 29 08:19:27 compute-2 ovn_controller[134375]: 2025-11-29T08:19:27Z|00621|binding|INFO|Removing iface tapbe8a2e4d-8e ovn-installed in OVS
Nov 29 08:19:27 compute-2 nova_compute[232428]: 2025-11-29 08:19:27.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-2 nova_compute[232428]: 2025-11-29 08:19:27.832 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:27.843 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:27.844 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 08:19:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:27.845 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c25940b-e63b-4443-a94b-0216a35e8dc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:19:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:27.847 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[90fc1229-a95c-4b4d-b868-2ec650b42bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:27.847 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace which is not needed anymore
Nov 29 08:19:27 compute-2 nova_compute[232428]: 2025-11-29 08:19:27.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-2 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 29 08:19:27 compute-2 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Consumed 1.351s CPU time.
Nov 29 08:19:27 compute-2 systemd-machined[194747]: Machine qemu-60-instance-00000082 terminated.
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.015 232432 INFO nova.virt.libvirt.driver [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance destroyed successfully.
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.016 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [NOTICE]   (288468) : haproxy version is 2.8.14-c23fe91
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [NOTICE]   (288468) : path to executable is /usr/sbin/haproxy
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [WARNING]  (288468) : Exiting Master process...
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [WARNING]  (288468) : Exiting Master process...
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [ALERT]    (288468) : Current worker (288470) exited with code 143 (Terminated)
Nov 29 08:19:28 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[288464]: [WARNING]  (288468) : All workers exited. Exiting... (0)
Nov 29 08:19:28 compute-2 systemd[1]: libpod-6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56.scope: Deactivated successfully.
Nov 29 08:19:28 compute-2 podman[289652]: 2025-11-29 08:19:28.03112659 +0000 UTC m=+0.059844345 container died 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.037 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Attempting a stable device rescue
Nov 29 08:19:28 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56-userdata-shm.mount: Deactivated successfully.
Nov 29 08:19:28 compute-2 systemd[1]: var-lib-containers-storage-overlay-03131941ccfab140b65ce3309f09e668f15472879b8df801b5f2b1009a54167e-merged.mount: Deactivated successfully.
Nov 29 08:19:28 compute-2 podman[289652]: 2025-11-29 08:19:28.108995227 +0000 UTC m=+0.137712942 container cleanup 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:19:28 compute-2 systemd[1]: libpod-conmon-6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56.scope: Deactivated successfully.
Nov 29 08:19:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:28.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:28 compute-2 podman[289691]: 2025-11-29 08:19:28.189660672 +0000 UTC m=+0.054512607 container remove 6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.198 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[89a8be7a-12bb-4565-85e5-51ddbef2166b]: (4, ('Sat Nov 29 08:19:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56)\n6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56\nSat Nov 29 08:19:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56)\n6cd0b0c3747803298cdb3a772c0fbd74c2f0c75c8c4903ae509311748c9aca56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.200 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[15c06d8d-87c7-4ef2-9c89-c3eccbf04dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.200 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.202 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:28 compute-2 kernel: tap3c25940b-e0: left promiscuous mode
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.221 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.224 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f059778c-eddf-4c94-8983-527935a4e0a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.236 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd61a47-0012-4fd1-920b-808741af6d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.237 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc1ade3-2968-476d-b1a4-477dca8ea604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.256 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[abe608d9-1ba0-4201-9c39-4c2297f598c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726432, 'reachable_time': 20568, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289710, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 systemd[1]: run-netns-ovnmeta\x2d3c25940b\x2de63b\x2d4443\x2da94b\x2d0216a35e8dc6.mount: Deactivated successfully.
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.260 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:19:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:28.260 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a5652e-fee5-4e26-9555-b617554afbb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.285 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.290 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.290 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Creating image(s)
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.316 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.320 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.360 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.384 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.388 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "f7f3b2cdb8e0ca01d92c538b2a6bb7f2169a612c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.389 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "f7f3b2cdb8e0ca01d92c538b2a6bb7f2169a612c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.587 232432 DEBUG nova.virt.libvirt.imagebackend [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/995630c6-dc23-4abf-afc5-51778a6f1496/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/995630c6-dc23-4abf-afc5-51778a6f1496/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.659 232432 DEBUG nova.virt.libvirt.imagebackend [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/995630c6-dc23-4abf-afc5-51778a6f1496/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.660 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] cloning images/995630c6-dc23-4abf-afc5-51778a6f1496@snap to None/6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:19:28 compute-2 ceph-mon[77138]: osdmap e313: 3 total, 3 up, 3 in
Nov 29 08:19:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1255212359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:19:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1255212359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:19:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.827 232432 DEBUG oslo_concurrency.lockutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "f7f3b2cdb8e0ca01d92c538b2a6bb7f2169a612c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.890 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.905 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.908 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start _get_guest_xml network_info=[{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:75:42:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '995630c6-dc23-4abf-afc5-51778a6f1496', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5d21b16e-ab30-4101-bf21-01197c71cf99', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5d21b16e-ab30-4101-bf21-01197c71cf99', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'attached_at': '', 'detached_at': '', 'volume_id': '5d21b16e-ab30-4101-bf21-01197c71cf99', 'serial': '5d21b16e-ab30-4101-bf21-01197c71cf99'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'cf6ba8d8-b1d6-4916-b30a-2c4b2c8f4ac1', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.909 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.928 232432 WARNING nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.933 232432 DEBUG nova.virt.libvirt.host [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.934 232432 DEBUG nova.virt.libvirt.host [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.936 232432 DEBUG nova.virt.libvirt.host [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.936 232432 DEBUG nova.virt.libvirt.host [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.937 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.938 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.938 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.938 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.939 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.939 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.939 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.939 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.940 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.940 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.940 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.941 232432 DEBUG nova.virt.hardware [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:19:28 compute-2 nova_compute[232428]: 2025-11-29 08:19:28.941 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.000 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1174711356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.547 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.591 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 08:19:29 compute-2 ceph-mon[77138]: pgmap v2413: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 218 op/s
Nov 29 08:19:29 compute-2 ceph-mon[77138]: osdmap e314: 3 total, 3 up, 3 in
Nov 29 08:19:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1174711356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.942 232432 DEBUG nova.compute.manager [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.943 232432 DEBUG oslo_concurrency.lockutils [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.943 232432 DEBUG oslo_concurrency.lockutils [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.944 232432 DEBUG oslo_concurrency.lockutils [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.944 232432 DEBUG nova.compute.manager [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:29 compute-2 nova_compute[232428]: 2025-11-29 08:19:29.945 232432 WARNING nova.compute.manager [req-c234d841-2380-4d7c-9d09-e757ed32a2bf req-edab38a7-c133-4595-b302-8dcb27cc1439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state active and task_state rescuing.
Nov 29 08:19:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1709225145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.144 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.146 232432 DEBUG nova.virt.libvirt.vif [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-285202970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-285202970',id=130,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-ri5l0c3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:20Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=6c463a92-8698-4035-b4d0-b1d3db01a43b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:75:42:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.146 232432 DEBUG nova.network.os_vif_util [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:75:42:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.148 232432 DEBUG nova.network.os_vif_util [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.150 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.168 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <uuid>6c463a92-8698-4035-b4d0-b1d3db01a43b</uuid>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <name>instance-00000082</name>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-285202970</nova:name>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:19:28</nova:creationTime>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <nova:port uuid="be8a2e4d-8e9b-4eff-a873-d7c8ad350b96">
Nov 29 08:19:30 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <system>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="serial">6c463a92-8698-4035-b4d0-b1d3db01a43b</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="uuid">6c463a92-8698-4035-b4d0-b1d3db01a43b</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </system>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <os>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </os>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <features>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </features>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </source>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-5d21b16e-ab30-4101-bf21-01197c71cf99">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </source>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <serial>5d21b16e-ab30-4101-bf21-01197c71cf99</serial>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.rescue">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </source>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:19:30 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <boot order="1"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:75:42:23"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <target dev="tapbe8a2e4d-8e"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/console.log" append="off"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <video>
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </video>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:19:30 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:19:30 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:19:30 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:19:30 compute-2 nova_compute[232428]: </domain>
Nov 29 08:19:30 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:19:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:30.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.179 232432 INFO nova.virt.libvirt.driver [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance destroyed successfully.
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.245 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.246 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.246 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.247 232432 DEBUG nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:75:42:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.248 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Using config drive
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.290 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.314 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:30 compute-2 nova_compute[232428]: 2025-11-29 08:19:30.354 232432 DEBUG nova.objects.instance [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'keypairs' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:30 compute-2 ceph-mon[77138]: osdmap e315: 3 total, 3 up, 3 in
Nov 29 08:19:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1709225145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:31 compute-2 sudo[289935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:31 compute-2 sudo[289935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:31 compute-2 sudo[289935]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.342 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Creating config drive at /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.347 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0pt1lb_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:31 compute-2 sudo[289960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:19:31 compute-2 sudo[289960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:31 compute-2 sudo[289960]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.473 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.509 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0pt1lb_n" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.536 232432 DEBUG nova.storage.rbd_utils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.540 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.780 232432 DEBUG oslo_concurrency.processutils [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue 6c463a92-8698-4035-b4d0-b1d3db01a43b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.782 232432 INFO nova.virt.libvirt.driver [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Deleting local config drive /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b/disk.config.rescue because it was imported into RBD.
Nov 29 08:19:31 compute-2 ceph-mon[77138]: pgmap v2416: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.9 MiB/s wr, 329 op/s
Nov 29 08:19:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:19:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:19:31 compute-2 kernel: tapbe8a2e4d-8e: entered promiscuous mode
Nov 29 08:19:31 compute-2 NetworkManager[48993]: <info>  [1764404371.8718] manager: (tapbe8a2e4d-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 08:19:31 compute-2 ovn_controller[134375]: 2025-11-29T08:19:31Z|00622|binding|INFO|Claiming lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for this chassis.
Nov 29 08:19:31 compute-2 ovn_controller[134375]: 2025-11-29T08:19:31Z|00623|binding|INFO|be8a2e4d-8e9b-4eff-a873-d7c8ad350b96: Claiming fa:16:3e:75:42:23 10.100.0.13
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.875 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.882 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.883 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.885 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.903 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43ed4e7a-4310-419c-b576-5b7dfd061e91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.904 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c25940b-e1 in ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:19:31 compute-2 ovn_controller[134375]: 2025-11-29T08:19:31Z|00624|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 ovn-installed in OVS
Nov 29 08:19:31 compute-2 ovn_controller[134375]: 2025-11-29T08:19:31Z|00625|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 up in Southbound
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.907 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c25940b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.908 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5898737b-4843-498b-8d26-b69c5f442512]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:31 compute-2 nova_compute[232428]: 2025-11-29 08:19:31.908 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.909 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b71179ce-3d48-4e1e-af9c-e2ef09fe5b2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.933 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[11311bcc-fac9-4ffd-b292-f83c2eaac452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:31 compute-2 systemd-udevd[290041]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:31 compute-2 systemd-machined[194747]: New machine qemu-62-instance-00000082.
Nov 29 08:19:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:31.964 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[64896523-b0d8-448b-82cf-ffe377a75543]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:31 compute-2 systemd[1]: Started Virtual Machine qemu-62-instance-00000082.
Nov 29 08:19:31 compute-2 NetworkManager[48993]: <info>  [1764404371.9678] device (tapbe8a2e4d-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:19:31 compute-2 NetworkManager[48993]: <info>  [1764404371.9695] device (tapbe8a2e4d-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.019 232432 DEBUG nova.compute.manager [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.020 232432 DEBUG oslo_concurrency.lockutils [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.020 232432 DEBUG oslo_concurrency.lockutils [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.021 232432 DEBUG oslo_concurrency.lockutils [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.021 232432 DEBUG nova.compute.manager [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.022 232432 WARNING nova.compute.manager [req-f418679a-c8c1-452e-99d4-de0385e576ce req-8feb5a9f-a349-4029-9cd4-009d01f32ecc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state active and task_state rescuing.
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.029 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[19fe92ed-5495-4ec4-82a6-b5bc562a82e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.035 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[311bac4f-7386-4ee9-8fb3-fd97f34440c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 NetworkManager[48993]: <info>  [1764404372.0389] manager: (tap3c25940b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.095 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5fceb633-e7f2-443c-a624-61079162b84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.100 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[34a19bf7-d2bb-4ee7-bcf7-204df4440482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 NetworkManager[48993]: <info>  [1764404372.1399] device (tap3c25940b-e0): carrier: link connected
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.149 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dc365860-b6c6-4f85-9376-c81db3bc1526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.177 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c4292fe1-e5a6-4582-8605-9d2f3fb8a8eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733840, 'reachable_time': 42404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:32.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.208 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[467133e1-250b-4cfc-b331-a34571de988b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:387b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733840, 'tstamp': 733840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290073, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.245 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74a79080-4300-46e3-b01d-9c3d806886e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733840, 'reachable_time': 42404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290079, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.301 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[476fe55a-2458-4d6f-81c9-d2ae235f74f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.404 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2738020-ebbe-45f5-9dfa-a883d90f5825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.407 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.408 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.408 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:32 compute-2 kernel: tap3c25940b-e0: entered promiscuous mode
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-2 NetworkManager[48993]: <info>  [1764404372.4124] manager: (tap3c25940b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.413 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.415 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.417 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-2 ovn_controller[134375]: 2025-11-29T08:19:32Z|00626|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.441 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.451 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.453 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.454 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea92307e-d63b-4461-96ed-e5ef7e7bb13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.455 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:19:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:32.456 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'env', 'PROCESS_TAG=haproxy-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c25940b-e63b-4443-a94b-0216a35e8dc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.480 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.481 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.500 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.581 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.581 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.594 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.595 232432 INFO nova.compute.claims [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.613 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 6c463a92-8698-4035-b4d0-b1d3db01a43b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.614 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404372.6126752, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.614 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Resumed (Lifecycle Event)
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.619 232432 DEBUG nova.compute.manager [None req-4e09c3fe-3767-4b42-bd88-ceb6f23050c2 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.665 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.669 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.694 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.696 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404372.616285, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.696 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Started (Lifecycle Event)
Nov 29 08:19:32 compute-2 podman[290144]: 2025-11-29 08:19:32.714648711 +0000 UTC m=+0.105638648 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.717 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.720 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:32 compute-2 nova_compute[232428]: 2025-11-29 08:19:32.765 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:32 compute-2 podman[290188]: 2025-11-29 08:19:32.932030745 +0000 UTC m=+0.063816388 container create 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:19:32 compute-2 systemd[1]: Started libpod-conmon-616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69.scope.
Nov 29 08:19:32 compute-2 podman[290188]: 2025-11-29 08:19:32.907338362 +0000 UTC m=+0.039124025 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:19:33 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:19:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245d70efca64d0d7177eb2f9f52d80735ed26d5a97d4a0e2abb5752971cde6c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:33 compute-2 podman[290188]: 2025-11-29 08:19:33.022083944 +0000 UTC m=+0.153869607 container init 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:19:33 compute-2 podman[290188]: 2025-11-29 08:19:33.033642546 +0000 UTC m=+0.165428189 container start 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:19:33 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [NOTICE]   (290226) : New worker (290228) forked
Nov 29 08:19:33 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [NOTICE]   (290226) : Loading success.
Nov 29 08:19:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3718247837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.359 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.366 232432 DEBUG nova.compute.provider_tree [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.391 232432 DEBUG nova.scheduler.client.report [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.415 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.416 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.453 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.453 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.483 232432 INFO nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.505 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.627 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.628 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.629 232432 INFO nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Creating image(s)
Nov 29 08:19:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.662 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.704 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.734 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.740 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.791 232432 DEBUG nova.policy [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09f1f8a0998948b7b96830d8559609f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:19:33 compute-2 ceph-mon[77138]: pgmap v2417: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 192 op/s
Nov 29 08:19:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3718247837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.845 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.845 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.846 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.847 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.878 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:33 compute-2 nova_compute[232428]: 2025-11-29 08:19:33.884 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e1e8c40f-128e-4265-b740-9f793af39b26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:34.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.241 232432 DEBUG nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.242 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.242 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.243 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.243 232432 DEBUG nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.243 232432 WARNING nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state rescued and task_state None.
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.244 232432 DEBUG nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.244 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.244 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.245 232432 DEBUG oslo_concurrency.lockutils [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.245 232432 DEBUG nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.246 232432 WARNING nova.compute.manager [req-64dea213-86d7-471a-893f-f53332a99eba req-be95c980-6a7e-407d-9946-b7091b0c0fc9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state rescued and task_state None.
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.300 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e1e8c40f-128e-4265-b740-9f793af39b26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.386 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] resizing rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.508 232432 DEBUG nova.objects.instance [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'migration_context' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.529 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.530 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Ensure instance console log exists: /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.531 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.531 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:34 compute-2 nova_compute[232428]: 2025-11-29 08:19:34.532 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:34 compute-2 sudo[290406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:34 compute-2 sudo[290406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:34 compute-2 sudo[290406]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:34 compute-2 sudo[290431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:34 compute-2 sudo[290431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:34 compute-2 sudo[290431]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:35 compute-2 nova_compute[232428]: 2025-11-29 08:19:35.062 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Successfully created port: dc54aa54-233d-4026-a2c5-883cfda7d4f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:19:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:35 compute-2 nova_compute[232428]: 2025-11-29 08:19:35.818 232432 INFO nova.compute.manager [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Unrescuing
Nov 29 08:19:35 compute-2 nova_compute[232428]: 2025-11-29 08:19:35.818 232432 DEBUG oslo_concurrency.lockutils [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:35 compute-2 nova_compute[232428]: 2025-11-29 08:19:35.819 232432 DEBUG oslo_concurrency.lockutils [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:35 compute-2 nova_compute[232428]: 2025-11-29 08:19:35.819 232432 DEBUG nova.network.neutron [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:19:35 compute-2 ceph-mon[77138]: pgmap v2418: 305 pgs: 305 active+clean; 387 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.2 MiB/s wr, 211 op/s
Nov 29 08:19:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:36 compute-2 nova_compute[232428]: 2025-11-29 08:19:36.475 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.113 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Successfully updated port: dc54aa54-233d-4026-a2c5-883cfda7d4f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.142 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.143 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquired lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.143 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.233 232432 DEBUG nova.compute.manager [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-changed-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.234 232432 DEBUG nova.compute.manager [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Refreshing instance network info cache due to event network-changed-dc54aa54-233d-4026-a2c5-883cfda7d4f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.235 232432 DEBUG oslo_concurrency.lockutils [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.318 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:19:37 compute-2 nova_compute[232428]: 2025-11-29 08:19:37.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:37 compute-2 ceph-mon[77138]: pgmap v2419: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.102 232432 DEBUG nova.network.neutron [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.125 232432 DEBUG oslo_concurrency.lockutils [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.126 232432 DEBUG nova.objects.instance [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'flavor' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:38.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:38 compute-2 kernel: tapbe8a2e4d-8e (unregistering): left promiscuous mode
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.2206] device (tapbe8a2e4d-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00627|binding|INFO|Releasing lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 from this chassis (sb_readonly=0)
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00628|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 down in Southbound
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00629|binding|INFO|Removing iface tapbe8a2e4d-8e ovn-installed in OVS
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.238 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.239 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.241 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c25940b-e63b-4443-a94b-0216a35e8dc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.242 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8de742dc-0356-4200-9ad1-703eb6bf52e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.242 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace which is not needed anymore
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.252 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 29 08:19:38 compute-2 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000082.scope: Consumed 6.399s CPU time.
Nov 29 08:19:38 compute-2 systemd-machined[194747]: Machine qemu-62-instance-00000082 terminated.
Nov 29 08:19:38 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [NOTICE]   (290226) : haproxy version is 2.8.14-c23fe91
Nov 29 08:19:38 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [NOTICE]   (290226) : path to executable is /usr/sbin/haproxy
Nov 29 08:19:38 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [WARNING]  (290226) : Exiting Master process...
Nov 29 08:19:38 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [ALERT]    (290226) : Current worker (290228) exited with code 143 (Terminated)
Nov 29 08:19:38 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290222]: [WARNING]  (290226) : All workers exited. Exiting... (0)
Nov 29 08:19:38 compute-2 systemd[1]: libpod-616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69.scope: Deactivated successfully.
Nov 29 08:19:38 compute-2 podman[290483]: 2025-11-29 08:19:38.396014067 +0000 UTC m=+0.045533967 container died 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.408 232432 INFO nova.virt.libvirt.driver [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance destroyed successfully.
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.409 232432 DEBUG nova.objects.instance [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69-userdata-shm.mount: Deactivated successfully.
Nov 29 08:19:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-245d70efca64d0d7177eb2f9f52d80735ed26d5a97d4a0e2abb5752971cde6c6-merged.mount: Deactivated successfully.
Nov 29 08:19:38 compute-2 podman[290483]: 2025-11-29 08:19:38.442653526 +0000 UTC m=+0.092173416 container cleanup 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:19:38 compute-2 systemd[1]: libpod-conmon-616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69.scope: Deactivated successfully.
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.493 232432 DEBUG nova.compute.manager [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.494 232432 DEBUG oslo_concurrency.lockutils [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.494 232432 DEBUG oslo_concurrency.lockutils [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.494 232432 DEBUG oslo_concurrency.lockutils [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.494 232432 DEBUG nova.compute.manager [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.495 232432 WARNING nova.compute.manager [req-732400ed-41bf-4dfe-9c95-a6fb88ce642a req-905a146a-9972-4dbf-929d-5fee1ad66167 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state rescued and task_state unrescuing.
Nov 29 08:19:38 compute-2 kernel: tapbe8a2e4d-8e: entered promiscuous mode
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.5032] manager: (tapbe8a2e4d-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 08:19:38 compute-2 systemd-udevd[290463]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00630|binding|INFO|Claiming lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for this chassis.
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00631|binding|INFO|be8a2e4d-8e9b-4eff-a873-d7c8ad350b96: Claiming fa:16:3e:75:42:23 10.100.0.13
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.505 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 podman[290523]: 2025-11-29 08:19:38.51110762 +0000 UTC m=+0.043974919 container remove 616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.513 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.5166] device (tapbe8a2e4d-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.5180] device (tapbe8a2e4d-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.520 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c9aec6b2-167e-4a92-98db-dfdf1aa793f2]: (4, ('Sat Nov 29 08:19:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69)\n616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69\nSat Nov 29 08:19:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69)\n616919f737159a6fecf929cc09f1ec78db3bd141f14d1fed0de9b0c1748ebe69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.522 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[abc4aab4-f4bc-4336-8ac3-168f2de945de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00632|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 ovn-installed in OVS
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00633|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 up in Southbound
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.523 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 kernel: tap3c25940b-e0: left promiscuous mode
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.541 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 systemd-machined[194747]: New machine qemu-63-instance-00000082.
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.543 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.545 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae7cd6a-d5b3-4e2c-adff-3383995640dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 systemd[1]: Started Virtual Machine qemu-63-instance-00000082.
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.567 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a8736dfc-7d4e-46cd-b026-3f8ac41acf4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.569 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7bcb0e-5862-4d8c-913c-c16429301219]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.594 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b3be0bbf-9aa4-4894-8d04-b002d7841f5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733828, 'reachable_time': 20658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290552, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.596 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.596 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d82622a0-6135-4d34-9b88-dcbb3fc6611b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.597 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 08:19:38 compute-2 systemd[1]: run-netns-ovnmeta\x2d3c25940b\x2de63b\x2d4443\x2da94b\x2d0216a35e8dc6.mount: Deactivated successfully.
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.599 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.611 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cbcb68-3ee9-4180-a495-b50e52925808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.612 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c25940b-e1 in ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.614 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c25940b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.614 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d6a507-6526-4f69-ad69-2192ab50b6e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.614 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[23f8762d-317e-4162-936e-a183b0aefc7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.626 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4abab2b6-46c4-4af1-96bd-7b3dfdbea258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.650 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7dea8a-197f-4ff9-8ef6-7aa3218f37b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.678 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[84a1cf22-94d5-4ba7-a147-7b5877fedc11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.684 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[63ad3cc9-b98c-4cec-aad8-c5cad259af6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.6863] manager: (tap3c25940b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 08:19:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.721 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f7506c-a4a0-42a0-b33f-e3a3b89e8159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.723 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0654dd1a-1983-4b70-a682-8aa119c82c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.7449] device (tap3c25940b-e0): carrier: link connected
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.753 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a69fba-a39a-45ba-ab3a-ab7cf9f747af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.774 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b757b0-d815-4c25-b6d0-01479d3f0ef3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734501, 'reachable_time': 20870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290582, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.787 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9192dcea-69bb-4479-8d72-e1b7408652df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:387b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734501, 'tstamp': 734501}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290583, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.800 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dbaf5278-1952-48d3-8784-867678386e27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734501, 'reachable_time': 20870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290584, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.825 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b9df06c9-7ccf-4800-bf70-6d45b77fe404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.860 232432 DEBUG nova.network.neutron [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updating instance_info_cache with network_info: [{"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.884 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Releasing lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.884 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Instance network_info: |[{"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.885 232432 DEBUG oslo_concurrency.lockutils [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.885 232432 DEBUG nova.network.neutron [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Refreshing network info cache for port dc54aa54-233d-4026-a2c5-883cfda7d4f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.888 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Start _get_guest_xml network_info=[{"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.893 232432 WARNING nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.908 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ec44712f-061a-47fc-a3bd-45e2f688ee78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.911 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.912 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.912 232432 DEBUG nova.virt.libvirt.host [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.912 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.912 232432 DEBUG nova.virt.libvirt.host [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:19:38 compute-2 kernel: tap3c25940b-e0: entered promiscuous mode
Nov 29 08:19:38 compute-2 NetworkManager[48993]: <info>  [1764404378.9149] manager: (tap3c25940b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.914 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.916 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.918 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:38 compute-2 ovn_controller[134375]: 2025-11-29T08:19:38Z|00634|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.921 232432 DEBUG nova.virt.libvirt.host [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.922 232432 DEBUG nova.virt.libvirt.host [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.923 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.923 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.924 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.925 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.925 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.925 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.925 232432 DEBUG nova.virt.hardware [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.928 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.944 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.945 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[25c97060-8f37-4523-93aa-57cb1fb03b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.947 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:19:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:38.948 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'env', 'PROCESS_TAG=haproxy-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c25940b-e63b-4443-a94b-0216a35e8dc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:19:38 compute-2 nova_compute[232428]: 2025-11-29 08:19:38.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.114 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 6c463a92-8698-4035-b4d0-b1d3db01a43b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.114 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404379.1007493, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.114 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Resumed (Lifecycle Event)
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.172 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.180 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.223 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.227 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404379.1130354, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.228 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Started (Lifecycle Event)
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.259 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.264 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.311 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 08:19:39 compute-2 podman[290696]: 2025-11-29 08:19:39.349016568 +0000 UTC m=+0.059854214 container create 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:19:39 compute-2 systemd[1]: Started libpod-conmon-36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a.scope.
Nov 29 08:19:39 compute-2 podman[290696]: 2025-11-29 08:19:39.317459931 +0000 UTC m=+0.028297607 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:19:39 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:19:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be443fcb271304d36310db07bf41d805f8e4e8a414949879926fb27b3b1f899/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1350368836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:39 compute-2 podman[290696]: 2025-11-29 08:19:39.454962415 +0000 UTC m=+0.165800071 container init 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.459 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:39 compute-2 podman[290696]: 2025-11-29 08:19:39.462044237 +0000 UTC m=+0.172881893 container start 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:19:39 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [NOTICE]   (290719) : New worker (290737) forked
Nov 29 08:19:39 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [NOTICE]   (290719) : Loading success.
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.492 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.499 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.629 232432 DEBUG nova.compute.manager [None req-76fc6184-f1d6-4d00-b300-7cb2fdf13fae 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:39 compute-2 ceph-mon[77138]: pgmap v2420: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 276 op/s
Nov 29 08:19:39 compute-2 ceph-mon[77138]: osdmap e316: 3 total, 3 up, 3 in
Nov 29 08:19:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4043866743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1350368836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4259605427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.951 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.953 232432 DEBUG nova.virt.libvirt.vif [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1938739361',display_name='tempest-AttachVolumeNegativeTest-server-1938739361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1938739361',id=136,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMIsWR6SsoLJ2ABS7KFOaCG42ihmfzH2foyr0TWRY5eiVTGmcsBZ3D3XsZFXLAc7zssT4SRJpC0TgFOf+org2lBQ3V5jmwHujOdbfp9WbvsZK55zCOcLHee2v91OcfA2sg==',key_name='tempest-keypair-10635770',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-gyy4ct4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=e1e8c40f-128e-4265-b740-9f793af39b26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.953 232432 DEBUG nova.network.os_vif_util [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.956 232432 DEBUG nova.network.os_vif_util [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.959 232432 DEBUG nova.objects.instance [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.988 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <uuid>e1e8c40f-128e-4265-b740-9f793af39b26</uuid>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <name>instance-00000088</name>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeNegativeTest-server-1938739361</nova:name>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:19:38</nova:creationTime>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:user uuid="09f1f8a0998948b7b96830d8559609f6">tempest-AttachVolumeNegativeTest-1426807399-project-member</nova:user>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:project uuid="61d8d3b6b31f4b36b5749db9c550c696">tempest-AttachVolumeNegativeTest-1426807399</nova:project>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <nova:port uuid="dc54aa54-233d-4026-a2c5-883cfda7d4f2">
Nov 29 08:19:39 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <system>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="serial">e1e8c40f-128e-4265-b740-9f793af39b26</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="uuid">e1e8c40f-128e-4265-b740-9f793af39b26</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </system>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <os>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </os>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <features>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </features>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/e1e8c40f-128e-4265-b740-9f793af39b26_disk">
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </source>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/e1e8c40f-128e-4265-b740-9f793af39b26_disk.config">
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </source>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:19:39 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:2a:cd:f5"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <target dev="tapdc54aa54-23"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/console.log" append="off"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <video>
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </video>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:19:39 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:19:39 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:19:39 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:19:39 compute-2 nova_compute[232428]: </domain>
Nov 29 08:19:39 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.990 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Preparing to wait for external event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.990 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.990 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.991 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.992 232432 DEBUG nova.virt.libvirt.vif [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1938739361',display_name='tempest-AttachVolumeNegativeTest-server-1938739361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1938739361',id=136,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMIsWR6SsoLJ2ABS7KFOaCG42ihmfzH2foyr0TWRY5eiVTGmcsBZ3D3XsZFXLAc7zssT4SRJpC0TgFOf+org2lBQ3V5jmwHujOdbfp9WbvsZK55zCOcLHee2v91OcfA2sg==',key_name='tempest-keypair-10635770',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-gyy4ct4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=e1e8c40f-128e-4265-b740-9f793af39b26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.993 232432 DEBUG nova.network.os_vif_util [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.994 232432 DEBUG nova.network.os_vif_util [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.994 232432 DEBUG os_vif [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.995 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.996 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:39 compute-2 nova_compute[232428]: 2025-11-29 08:19:39.997 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.002 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc54aa54-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.003 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc54aa54-23, col_values=(('external_ids', {'iface-id': 'dc54aa54-233d-4026-a2c5-883cfda7d4f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:cd:f5', 'vm-uuid': 'e1e8c40f-128e-4265-b740-9f793af39b26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.005 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:40 compute-2 NetworkManager[48993]: <info>  [1764404380.0065] manager: (tapdc54aa54-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.007 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.013 232432 INFO os_vif [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23')
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.070 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.071 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.071 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:2a:cd:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.072 232432 INFO nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Using config drive
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.114 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:40.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.578 232432 INFO nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Creating config drive at /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.587 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8740kr7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.640 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.641 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.641 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.641 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.642 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.642 232432 WARNING nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state active and task_state None.
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.642 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.642 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.643 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.643 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.643 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.644 232432 WARNING nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state active and task_state None.
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.644 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.644 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.645 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.645 232432 DEBUG oslo_concurrency.lockutils [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.645 232432 DEBUG nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.646 232432 WARNING nova.compute.manager [req-99399e2f-29de-4b0b-90d9-bcca64fd99c3 req-cbf305d7-7249-4b64-99f6-f9b9f3ed8e4c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state active and task_state None.
Nov 29 08:19:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4259605427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.751 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8740kr7e" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.801 232432 DEBUG nova.storage.rbd_utils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image e1e8c40f-128e-4265-b740-9f793af39b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.807 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config e1e8c40f-128e-4265-b740-9f793af39b26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.852 232432 DEBUG nova.network.neutron [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updated VIF entry in instance network info cache for port dc54aa54-233d-4026-a2c5-883cfda7d4f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.853 232432 DEBUG nova.network.neutron [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updating instance_info_cache with network_info: [{"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:40 compute-2 nova_compute[232428]: 2025-11-29 08:19:40.894 232432 DEBUG oslo_concurrency.lockutils [req-c4ad8c6e-c81f-4bff-b0e2-0961858337c4 req-c9433fd2-b8d8-4381-9c95-8ac92c8a46d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.116 232432 DEBUG oslo_concurrency.processutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config e1e8c40f-128e-4265-b740-9f793af39b26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.118 232432 INFO nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Deleting local config drive /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26/disk.config because it was imported into RBD.
Nov 29 08:19:41 compute-2 kernel: tapdc54aa54-23: entered promiscuous mode
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.2009] manager: (tapdc54aa54-23): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 08:19:41 compute-2 systemd-udevd[290567]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:41 compute-2 ovn_controller[134375]: 2025-11-29T08:19:41Z|00635|binding|INFO|Claiming lport dc54aa54-233d-4026-a2c5-883cfda7d4f2 for this chassis.
Nov 29 08:19:41 compute-2 ovn_controller[134375]: 2025-11-29T08:19:41Z|00636|binding|INFO|dc54aa54-233d-4026-a2c5-883cfda7d4f2: Claiming fa:16:3e:2a:cd:f5 10.100.0.9
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.204 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.2253] device (tapdc54aa54-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.2268] device (tapdc54aa54-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:19:41 compute-2 systemd-machined[194747]: New machine qemu-64-instance-00000088.
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.261 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:cd:f5 10.100.0.9'], port_security=['fa:16:3e:2a:cd:f5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1e8c40f-128e-4265-b740-9f793af39b26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1599c60a-4274-48ca-aa11-779b4d5de698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc54aa54-233d-4026-a2c5-883cfda7d4f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.264 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc54aa54-233d-4026-a2c5-883cfda7d4f2 in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 bound to our chassis
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.267 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 systemd[1]: Started Virtual Machine qemu-64-instance-00000088.
Nov 29 08:19:41 compute-2 ovn_controller[134375]: 2025-11-29T08:19:41Z|00637|binding|INFO|Setting lport dc54aa54-233d-4026-a2c5-883cfda7d4f2 ovn-installed in OVS
Nov 29 08:19:41 compute-2 ovn_controller[134375]: 2025-11-29T08:19:41Z|00638|binding|INFO|Setting lport dc54aa54-233d-4026-a2c5-883cfda7d4f2 up in Southbound
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.278 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.287 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7dca1c73-17f2-49eb-b510-629365dd9db0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.288 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d6ff1b5-e1 in ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.290 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d6ff1b5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.290 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb2babe-7123-4dfd-b976-07501cef5fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.292 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9a9085-6a59-48ef-95bd-07def8ce0be7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.308 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[60c7ded8-872d-432c-a8e8-d6cf95cba1a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bb74b4a1-7d47-4a73-96f2-9f72f454242f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.387 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c93ee53-2fee-4d5a-91c0-f7a6bf8fbfb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 systemd-udevd[290854]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.3968] manager: (tap3d6ff1b5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.396 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36c95ebc-2b3e-4c32-b93e-bfd4c6e090c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.435 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd3d4af-1b6f-4f17-9582-1d7aa1deb269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.439 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2a44b571-85d6-4dad-83c5-28d5c4a5b88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.4677] device (tap3d6ff1b5-e0): carrier: link connected
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.475 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b2112a6a-9d76-4923-8d88-901935de35ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.496 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ca7cc7-de13-4406-a492-2fa91ebfbd92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290877, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.521 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9a23e6ce-fa3d-4b65-be44-d8f29966191f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:6a1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734773, 'tstamp': 734773}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290878, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.548 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b712c12-4288-43bc-8506-0486800e7dce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290879, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.605 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7045c6b1-a9c7-4908-bf5c-5059ff327c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:41.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.715 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[071ed48b-f354-4413-822a-3687cd5c4b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.718 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.719 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.719 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 kernel: tap3d6ff1b5-e0: entered promiscuous mode
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 NetworkManager[48993]: <info>  [1764404381.7258] manager: (tap3d6ff1b5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.730 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.733 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 ovn_controller[134375]: 2025-11-29T08:19:41Z|00639|binding|INFO|Releasing lport 54675c6b-d3a2-417c-b976-28c1e010fd1e from this chassis (sb_readonly=0)
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.734 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.738 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.739 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9e78df2f-a80f-4347-99ad-368e325546d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.740 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:19:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:19:41.742 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'env', 'PROCESS_TAG=haproxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.752 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-2 ceph-mon[77138]: pgmap v2422: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.872 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404381.8720312, e1e8c40f-128e-4265-b740-9f793af39b26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.874 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] VM Started (Lifecycle Event)
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.910 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.914 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404381.8724205, e1e8c40f-128e-4265-b740-9f793af39b26 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.915 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] VM Paused (Lifecycle Event)
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.937 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.950 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:41 compute-2 nova_compute[232428]: 2025-11-29 08:19:41.981 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:19:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:42 compute-2 podman[290954]: 2025-11-29 08:19:42.233032238 +0000 UTC m=+0.089578745 container create 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:19:42 compute-2 podman[290954]: 2025-11-29 08:19:42.191831479 +0000 UTC m=+0.048378026 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:19:42 compute-2 systemd[1]: Started libpod-conmon-03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31.scope.
Nov 29 08:19:42 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:19:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ac062a0491c7d064d3c0719f615c57a1988d50f1b42fa3672bb165992cc02ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:42 compute-2 podman[290954]: 2025-11-29 08:19:42.361636984 +0000 UTC m=+0.218183481 container init 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:19:42 compute-2 podman[290954]: 2025-11-29 08:19:42.373495475 +0000 UTC m=+0.230041952 container start 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:19:42 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [NOTICE]   (290973) : New worker (290975) forked
Nov 29 08:19:42 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [NOTICE]   (290973) : Loading success.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.093 232432 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.094 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.094 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.095 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.095 232432 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Processing event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.095 232432 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.096 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.096 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.096 232432 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.097 232432 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] No waiting events found dispatching network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.097 232432 WARNING nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received unexpected event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 for instance with vm_state building and task_state spawning.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.098 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.103 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.105 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404383.1036701, e1e8c40f-128e-4265-b740-9f793af39b26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.105 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] VM Resumed (Lifecycle Event)
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.112 232432 INFO nova.virt.libvirt.driver [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Instance spawned successfully.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.113 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.133 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.143 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.149 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.150 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.150 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.151 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.151 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.151 232432 DEBUG nova.virt.libvirt.driver [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.219 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.268 232432 INFO nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Took 9.64 seconds to spawn the instance on the hypervisor.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.268 232432 DEBUG nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.343 232432 INFO nova.compute.manager [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Took 10.80 seconds to build instance.
Nov 29 08:19:43 compute-2 nova_compute[232428]: 2025-11-29 08:19:43.374 232432 DEBUG oslo_concurrency.lockutils [None req-a6f6bbc9-9908-417e-8e68-7dd8f66e215a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:43.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:43 compute-2 ceph-mon[77138]: pgmap v2423: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Nov 29 08:19:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2289726521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/524578851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2260853582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/703786362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2270226176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4142059694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:45 compute-2 nova_compute[232428]: 2025-11-29 08:19:45.007 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:45 compute-2 podman[290985]: 2025-11-29 08:19:45.703602868 +0000 UTC m=+0.092961890 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 29 08:19:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:45 compute-2 ceph-mon[77138]: pgmap v2424: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 259 op/s
Nov 29 08:19:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:46 compute-2 nova_compute[232428]: 2025-11-29 08:19:46.480 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-2 nova_compute[232428]: 2025-11-29 08:19:46.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-2 NetworkManager[48993]: <info>  [1764404386.6621] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 08:19:46 compute-2 NetworkManager[48993]: <info>  [1764404386.6644] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 08:19:46 compute-2 nova_compute[232428]: 2025-11-29 08:19:46.798 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-2 ovn_controller[134375]: 2025-11-29T08:19:46Z|00640|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 08:19:46 compute-2 ovn_controller[134375]: 2025-11-29T08:19:46Z|00641|binding|INFO|Releasing lport 54675c6b-d3a2-417c-b976-28c1e010fd1e from this chassis (sb_readonly=0)
Nov 29 08:19:46 compute-2 nova_compute[232428]: 2025-11-29 08:19:46.821 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:47 compute-2 nova_compute[232428]: 2025-11-29 08:19:47.125 232432 DEBUG nova.compute.manager [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-changed-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:47 compute-2 nova_compute[232428]: 2025-11-29 08:19:47.126 232432 DEBUG nova.compute.manager [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Refreshing instance network info cache due to event network-changed-dc54aa54-233d-4026-a2c5-883cfda7d4f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:19:47 compute-2 nova_compute[232428]: 2025-11-29 08:19:47.126 232432 DEBUG oslo_concurrency.lockutils [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:47 compute-2 nova_compute[232428]: 2025-11-29 08:19:47.127 232432 DEBUG oslo_concurrency.lockutils [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:47 compute-2 nova_compute[232428]: 2025-11-29 08:19:47.127 232432 DEBUG nova.network.neutron [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Refreshing network info cache for port dc54aa54-233d-4026-a2c5-883cfda7d4f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:19:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:47 compute-2 ceph-mon[77138]: pgmap v2425: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.2 MiB/s wr, 263 op/s
Nov 29 08:19:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/112214616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:48 compute-2 nova_compute[232428]: 2025-11-29 08:19:48.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3714404165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:49 compute-2 ceph-mon[77138]: pgmap v2426: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 249 op/s
Nov 29 08:19:50 compute-2 nova_compute[232428]: 2025-11-29 08:19:50.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:51 compute-2 nova_compute[232428]: 2025-11-29 08:19:51.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:51 compute-2 nova_compute[232428]: 2025-11-29 08:19:51.481 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:51 compute-2 ceph-mon[77138]: pgmap v2427: 305 pgs: 305 active+clean; 559 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 259 op/s
Nov 29 08:19:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:19:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:19:52 compute-2 nova_compute[232428]: 2025-11-29 08:19:52.931 232432 DEBUG nova.network.neutron [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updated VIF entry in instance network info cache for port dc54aa54-233d-4026-a2c5-883cfda7d4f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:19:52 compute-2 nova_compute[232428]: 2025-11-29 08:19:52.932 232432 DEBUG nova.network.neutron [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updating instance_info_cache with network_info: [{"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:52 compute-2 nova_compute[232428]: 2025-11-29 08:19:52.957 232432 DEBUG oslo_concurrency.lockutils [req-d094981f-6cf1-4d76-8bac-9735a58a4497 req-0d0998c3-9845-4041-a970-dabe151058ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e1e8c40f-128e-4265-b740-9f793af39b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:53 compute-2 nova_compute[232428]: 2025-11-29 08:19:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:53 compute-2 ceph-mon[77138]: pgmap v2428: 305 pgs: 305 active+clean; 559 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Nov 29 08:19:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:54 compute-2 sshd-session[291016]: Invalid user sol from 45.148.10.240 port 60036
Nov 29 08:19:54 compute-2 sshd-session[291016]: Connection closed by invalid user sol 45.148.10.240 port 60036 [preauth]
Nov 29 08:19:54 compute-2 systemd[1]: Starting dnf makecache...
Nov 29 08:19:54 compute-2 sudo[291018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:54 compute-2 sudo[291018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:54 compute-2 sudo[291018]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:55 compute-2 nova_compute[232428]: 2025-11-29 08:19:55.065 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:55 compute-2 sudo[291044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:55 compute-2 sudo[291044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:55 compute-2 sudo[291044]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:55 compute-2 dnf[291042]: Metadata cache refreshed recently.
Nov 29 08:19:55 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 08:19:55 compute-2 systemd[1]: Finished dnf makecache.
Nov 29 08:19:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:19:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:56 compute-2 ceph-mon[77138]: pgmap v2429: 305 pgs: 305 active+clean; 560 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 29 08:19:56 compute-2 nova_compute[232428]: 2025-11-29 08:19:56.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:19:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:19:56 compute-2 nova_compute[232428]: 2025-11-29 08:19:56.484 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:56 compute-2 ovn_controller[134375]: 2025-11-29T08:19:56Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:cd:f5 10.100.0.9
Nov 29 08:19:56 compute-2 ovn_controller[134375]: 2025-11-29T08:19:56Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:cd:f5 10.100.0.9
Nov 29 08:19:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 08:19:57 compute-2 nova_compute[232428]: 2025-11-29 08:19:57.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:57 compute-2 nova_compute[232428]: 2025-11-29 08:19:57.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:19:57 compute-2 nova_compute[232428]: 2025-11-29 08:19:57.235 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:19:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:58 compute-2 ceph-mon[77138]: pgmap v2430: 305 pgs: 305 active+clean; 560 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 330 op/s
Nov 29 08:19:58 compute-2 ceph-mon[77138]: osdmap e317: 3 total, 3 up, 3 in
Nov 29 08:19:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 08:19:58 compute-2 nova_compute[232428]: 2025-11-29 08:19:58.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:19:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:58.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:19:58 compute-2 podman[291072]: 2025-11-29 08:19:58.682059488 +0000 UTC m=+0.080468871 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:19:59 compute-2 ceph-mon[77138]: osdmap e318: 3 total, 3 up, 3 in
Nov 29 08:19:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 08:19:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:19:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:19:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:00 compute-2 ceph-mon[77138]: pgmap v2433: 305 pgs: 305 active+clean; 587 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.0 MiB/s wr, 328 op/s
Nov 29 08:20:00 compute-2 ceph-mon[77138]: osdmap e319: 3 total, 3 up, 3 in
Nov 29 08:20:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4266005841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:20:00 compute-2 nova_compute[232428]: 2025-11-29 08:20:00.072 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:00.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2490009090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:01 compute-2 nova_compute[232428]: 2025-11-29 08:20:01.490 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:01.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:02.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:02 compute-2 ceph-mon[77138]: pgmap v2435: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 666 op/s
Nov 29 08:20:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1623551928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:03 compute-2 nova_compute[232428]: 2025-11-29 08:20:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:03 compute-2 nova_compute[232428]: 2025-11-29 08:20:03.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.259 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:03 compute-2 nova_compute[232428]: 2025-11-29 08:20:03.259 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.261 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.262 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:03 compute-2 ceph-mon[77138]: pgmap v2436: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 371 op/s
Nov 29 08:20:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3509580080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3550358599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.325 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.326 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:03.326 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:03 compute-2 podman[291095]: 2025-11-29 08:20:03.675756266 +0000 UTC m=+0.069946451 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 08:20:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:03.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1668561165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.540 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.541 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.559 232432 DEBUG nova.objects.instance [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.594 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.844 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.845 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:04 compute-2 nova_compute[232428]: 2025-11-29 08:20:04.845 232432 INFO nova.compute.manager [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Attaching volume ca617c97-cbed-42b4-9b83-01f6fe990ce2 to /dev/vdb
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.023 232432 DEBUG os_brick.utils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.026 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.040 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.040 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[e34601bc-2d6a-4335-8e7b-a7ec87712475]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.042 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.052 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.052 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[af0924f4-b6e2-4332-bb09-197e03a614de]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.054 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.063 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.063 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[2f501ea8-1eb7-4d8f-9020-394f50789962]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.065 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[99aacb88-d70c-4cdb-9444-36d1a2e413ce]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.065 232432 DEBUG oslo_concurrency.processutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.102 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.107 232432 DEBUG oslo_concurrency.processutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.109 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.109 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.109 232432 DEBUG os_brick.initiator.connectors.lightos [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.109 232432 DEBUG os_brick.utils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.110 232432 DEBUG nova.virt.block_device [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updating existing volume attachment record: 581f7330-fa73-443c-a7f3-7acb3a0698ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:05 compute-2 ceph-mon[77138]: pgmap v2437: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 9.2 MiB/s wr, 349 op/s
Nov 29 08:20:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2691703042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.878 232432 DEBUG nova.objects.instance [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.910 232432 DEBUG nova.virt.libvirt.driver [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Attempting to attach volume ca617c97-cbed-42b4-9b83-01f6fe990ce2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:20:05 compute-2 nova_compute[232428]: 2025-11-29 08:20:05.916 232432 DEBUG nova.virt.libvirt.guest [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:20:05 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-ca617c97-cbed-42b4-9b83-01f6fe990ce2">
Nov 29 08:20:05 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   </source>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:20:05 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:20:05 compute-2 nova_compute[232428]:   <serial>ca617c97-cbed-42b4-9b83-01f6fe990ce2</serial>
Nov 29 08:20:05 compute-2 nova_compute[232428]: </disk>
Nov 29 08:20:05 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.108 232432 DEBUG nova.virt.libvirt.driver [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.108 232432 DEBUG nova.virt.libvirt.driver [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.108 232432 DEBUG nova.virt.libvirt.driver [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.109 232432 DEBUG nova.virt.libvirt.driver [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:2a:cd:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.221 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.221 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.221 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.221 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.222 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.366 232432 DEBUG oslo_concurrency.lockutils [None req-b92f7720-71a2-45f6-a965-07c35ba88931 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2691703042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/735718917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.693 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.776 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.777 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.784 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.784 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:20:06 compute-2 nova_compute[232428]: 2025-11-29 08:20:06.785 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.016 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.017 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4110MB free_disk=20.74386215209961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.017 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.017 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.089 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 6c463a92-8698-4035-b4d0-b1d3db01a43b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.090 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance e1e8c40f-128e-4265-b740-9f793af39b26 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.090 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.090 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.149 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:07 compute-2 ceph-mon[77138]: pgmap v2438: 305 pgs: 305 active+clean; 781 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 14 MiB/s wr, 395 op/s
Nov 29 08:20:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/735718917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3081443638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2429014654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.605 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.615 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.637 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.666 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:20:07 compute-2 nova_compute[232428]: 2025-11-29 08:20:07.667 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:08.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2429014654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1317666613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2768765722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3781726136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/334364862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 08:20:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:09 compute-2 ceph-mon[77138]: pgmap v2439: 305 pgs: 305 active+clean; 788 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 382 op/s
Nov 29 08:20:09 compute-2 ceph-mon[77138]: osdmap e320: 3 total, 3 up, 3 in
Nov 29 08:20:10 compute-2 nova_compute[232428]: 2025-11-29 08:20:10.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:11 compute-2 nova_compute[232428]: 2025-11-29 08:20:11.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:11 compute-2 ceph-mon[77138]: pgmap v2441: 305 pgs: 305 active+clean; 797 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 29 08:20:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:12.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.737 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.738 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.764 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.840 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.840 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.846 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:20:12 compute-2 nova_compute[232428]: 2025-11-29 08:20:12.846 232432 INFO nova.compute.claims [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.048 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2590206784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.564 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.576 232432 DEBUG nova.compute.provider_tree [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.594 232432 DEBUG nova.scheduler.client.report [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.625 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.627 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.691 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.691 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:20:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.726 232432 INFO nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:20:13 compute-2 nova_compute[232428]: 2025-11-29 08:20:13.920 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:20:14 compute-2 ceph-mon[77138]: pgmap v2442: 305 pgs: 305 active+clean; 797 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 29 08:20:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2590206784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1508581070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.053 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.055 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.056 232432 INFO nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Creating image(s)
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.090 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.137 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.182 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.188 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.228 232432 DEBUG nova.policy [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09f1f8a0998948b7b96830d8559609f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:20:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.263 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.265 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.267 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.268 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.314 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.320 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d8142255-87a5-4d36-9908-a5456701e3c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.877 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d8142255-87a5-4d36-9908-a5456701e3c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:14 compute-2 nova_compute[232428]: 2025-11-29 08:20:14.972 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] resizing rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:20:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.141 232432 DEBUG nova.objects.instance [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'migration_context' on Instance uuid d8142255-87a5-4d36-9908-a5456701e3c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.158 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.159 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Ensure instance console log exists: /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.160 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.160 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.160 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:15 compute-2 sudo[291382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:15 compute-2 sudo[291382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:15 compute-2 sudo[291382]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:15 compute-2 nova_compute[232428]: 2025-11-29 08:20:15.289 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Successfully created port: d2132df3-9087-42d6-83a1-0e253e16ba1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:20:15 compute-2 sudo[291407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:15 compute-2 sudo[291407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:15 compute-2 sudo[291407]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:15.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 08:20:16 compute-2 ceph-mon[77138]: pgmap v2443: 305 pgs: 305 active+clean; 787 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.3 MiB/s wr, 339 op/s
Nov 29 08:20:16 compute-2 ceph-mon[77138]: osdmap e321: 3 total, 3 up, 3 in
Nov 29 08:20:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:16.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:16 compute-2 nova_compute[232428]: 2025-11-29 08:20:16.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:16 compute-2 podman[291433]: 2025-11-29 08:20:16.707433429 +0000 UTC m=+0.102554031 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:20:17 compute-2 ceph-mon[77138]: osdmap e322: 3 total, 3 up, 3 in
Nov 29 08:20:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.537 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Successfully updated port: d2132df3-9087-42d6-83a1-0e253e16ba1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.559 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.559 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquired lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.560 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.643 232432 DEBUG nova.compute.manager [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-changed-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.643 232432 DEBUG nova.compute.manager [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Refreshing instance network info cache due to event network-changed-d2132df3-9087-42d6-83a1-0e253e16ba1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:17 compute-2 nova_compute[232428]: 2025-11-29 08:20:17.644 232432 DEBUG oslo_concurrency.lockutils [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:18 compute-2 nova_compute[232428]: 2025-11-29 08:20:18.004 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:20:18 compute-2 ceph-mon[77138]: pgmap v2446: 305 pgs: 305 active+clean; 768 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 1.7 MiB/s wr, 361 op/s
Nov 29 08:20:18 compute-2 ceph-mon[77138]: osdmap e323: 3 total, 3 up, 3 in
Nov 29 08:20:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:18 compute-2 nova_compute[232428]: 2025-11-29 08:20:18.790 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:18 compute-2 nova_compute[232428]: 2025-11-29 08:20:18.791 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:18 compute-2 nova_compute[232428]: 2025-11-29 08:20:18.812 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:20:18 compute-2 nova_compute[232428]: 2025-11-29 08:20:18.837 232432 DEBUG nova.network.neutron [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updating instance_info_cache with network_info: [{"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.036 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Releasing lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.036 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Instance network_info: |[{"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.038 232432 DEBUG oslo_concurrency.lockutils [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.038 232432 DEBUG nova.network.neutron [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Refreshing network info cache for port d2132df3-9087-42d6-83a1-0e253e16ba1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.043 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Start _get_guest_xml network_info=[{"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.064 232432 WARNING nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.069 232432 DEBUG nova.virt.libvirt.host [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.070 232432 DEBUG nova.virt.libvirt.host [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.074 232432 DEBUG nova.virt.libvirt.host [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.075 232432 DEBUG nova.virt.libvirt.host [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.076 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.077 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.077 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.077 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.078 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.078 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.078 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.078 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.079 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.079 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.079 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.079 232432 DEBUG nova.virt.hardware [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.083 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.126 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.127 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.136 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.138 232432 INFO nova.compute.claims [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.337 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/391132986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.574 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.618 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.626 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:19.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2149097262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.932 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.938 232432 DEBUG nova.compute.provider_tree [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.955 232432 DEBUG nova.scheduler.client.report [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.983 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:19 compute-2 nova_compute[232428]: 2025-11-29 08:20:19.984 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.041 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.042 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.062 232432 INFO nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.079 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:20:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/817025426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.107 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.108 232432 DEBUG nova.virt.libvirt.vif [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-198631012',display_name='tempest-AttachVolumeNegativeTest-server-198631012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-198631012',id=142,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwHl78errYrJFkmma0i5ieoie7I7fwaj44zdFp/3Fn3Jwg2kBg2Hoebwi84sZnLnrseRly93c2dO4W6/57XakSStgW+oCTcJZfyq33Ol3rDeFf1hNRp3bLOlP0aiSfzew==',key_name='tempest-keypair-616488267',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-u6167f73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=d8142255-87a5-4d36-9908-a5456701e3c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.109 232432 DEBUG nova.network.os_vif_util [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.109 232432 DEBUG nova.network.os_vif_util [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.113 232432 DEBUG nova.objects.instance [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8142255-87a5-4d36-9908-a5456701e3c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:20 compute-2 ceph-mon[77138]: pgmap v2448: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.0 MiB/s wr, 453 op/s
Nov 29 08:20:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/391132986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2149097262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/817025426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.131 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <uuid>d8142255-87a5-4d36-9908-a5456701e3c3</uuid>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <name>instance-0000008e</name>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeNegativeTest-server-198631012</nova:name>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:20:19</nova:creationTime>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:user uuid="09f1f8a0998948b7b96830d8559609f6">tempest-AttachVolumeNegativeTest-1426807399-project-member</nova:user>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:project uuid="61d8d3b6b31f4b36b5749db9c550c696">tempest-AttachVolumeNegativeTest-1426807399</nova:project>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <nova:port uuid="d2132df3-9087-42d6-83a1-0e253e16ba1f">
Nov 29 08:20:20 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <system>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="serial">d8142255-87a5-4d36-9908-a5456701e3c3</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="uuid">d8142255-87a5-4d36-9908-a5456701e3c3</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </system>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <os>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </os>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <features>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </features>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d8142255-87a5-4d36-9908-a5456701e3c3_disk">
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/d8142255-87a5-4d36-9908-a5456701e3c3_disk.config">
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:20 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:5c:39:64"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <target dev="tapd2132df3-90"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/console.log" append="off"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <video>
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </video>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:20:20 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:20:20 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:20:20 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:20:20 compute-2 nova_compute[232428]: </domain>
Nov 29 08:20:20 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.133 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Preparing to wait for external event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.134 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.135 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.135 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.137 232432 DEBUG nova.virt.libvirt.vif [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-198631012',display_name='tempest-AttachVolumeNegativeTest-server-198631012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-198631012',id=142,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwHl78errYrJFkmma0i5ieoie7I7fwaj44zdFp/3Fn3Jwg2kBg2Hoebwi84sZnLnrseRly93c2dO4W6/57XakSStgW+oCTcJZfyq33Ol3rDeFf1hNRp3bLOlP0aiSfzew==',key_name='tempest-keypair-616488267',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-u6167f73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=d8142255-87a5-4d36-9908-a5456701e3c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.138 232432 DEBUG nova.network.os_vif_util [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.139 232432 DEBUG nova.network.os_vif_util [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.140 232432 DEBUG os_vif [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.142 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.143 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.144 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.144 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.150 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.150 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2132df3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.152 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2132df3-90, col_values=(('external_ids', {'iface-id': 'd2132df3-9087-42d6-83a1-0e253e16ba1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:39:64', 'vm-uuid': 'd8142255-87a5-4d36-9908-a5456701e3c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.154 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-2 NetworkManager[48993]: <info>  [1764404420.1557] manager: (tapd2132df3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.159 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.167 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.169 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.170 232432 INFO nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Creating image(s)
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.202 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.237 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:20.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.274 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.279 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.320 232432 DEBUG nova.policy [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.322 232432 INFO os_vif [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90')
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.352 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.353 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.354 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.354 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.385 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.388 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 32059599-6076-4efa-95e2-06e686824adf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.446 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.447 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.447 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:5c:39:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.448 232432 INFO nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Using config drive
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.482 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.713 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 32059599-6076-4efa-95e2-06e686824adf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.801 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image 32059599-6076-4efa-95e2-06e686824adf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.911 232432 INFO nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Creating config drive at /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.921 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7pea1ms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.968 232432 DEBUG nova.objects.instance [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid 32059599-6076-4efa-95e2-06e686824adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.987 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.988 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Ensure instance console log exists: /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.988 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.989 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:20 compute-2 nova_compute[232428]: 2025-11-29 08:20:20.989 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.027 232432 DEBUG nova.network.neutron [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updated VIF entry in instance network info cache for port d2132df3-9087-42d6-83a1-0e253e16ba1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.028 232432 DEBUG nova.network.neutron [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updating instance_info_cache with network_info: [{"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.061 232432 DEBUG oslo_concurrency.lockutils [req-d4dc218d-26e7-41cd-9dcb-98ef6fab1f78 req-910159e9-d214-4321-ac62-e7c9c4f629a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2473710275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.076 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7pea1ms" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.108 232432 DEBUG nova.storage.rbd_utils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image d8142255-87a5-4d36-9908-a5456701e3c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.113 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config d8142255-87a5-4d36-9908-a5456701e3c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2473710275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.164 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Successfully created port: a0014539-7251-4e83-847d-c77a1f32859f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.294 232432 DEBUG oslo_concurrency.processutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config d8142255-87a5-4d36-9908-a5456701e3c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.296 232432 INFO nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Deleting local config drive /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3/disk.config because it was imported into RBD.
Nov 29 08:20:21 compute-2 kernel: tapd2132df3-90: entered promiscuous mode
Nov 29 08:20:21 compute-2 NetworkManager[48993]: <info>  [1764404421.3623] manager: (tapd2132df3-90): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 08:20:21 compute-2 ovn_controller[134375]: 2025-11-29T08:20:21Z|00642|binding|INFO|Claiming lport d2132df3-9087-42d6-83a1-0e253e16ba1f for this chassis.
Nov 29 08:20:21 compute-2 ovn_controller[134375]: 2025-11-29T08:20:21Z|00643|binding|INFO|d2132df3-9087-42d6-83a1-0e253e16ba1f: Claiming fa:16:3e:5c:39:64 10.100.0.13
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.374 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:39:64 10.100.0.13'], port_security=['fa:16:3e:5c:39:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd8142255-87a5-4d36-9908-a5456701e3c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9ebed7a-d169-4791-b080-c494336a271e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d2132df3-9087-42d6-83a1-0e253e16ba1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.376 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d2132df3-9087-42d6-83a1-0e253e16ba1f in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 bound to our chassis
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.378 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:20:21 compute-2 ovn_controller[134375]: 2025-11-29T08:20:21Z|00644|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f ovn-installed in OVS
Nov 29 08:20:21 compute-2 ovn_controller[134375]: 2025-11-29T08:20:21Z|00645|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f up in Southbound
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.384 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.389 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.407 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfabcf3-65ff-4d77-a7f7-274e71246ebb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 systemd-machined[194747]: New machine qemu-65-instance-0000008e.
Nov 29 08:20:21 compute-2 systemd[1]: Started Virtual Machine qemu-65-instance-0000008e.
Nov 29 08:20:21 compute-2 systemd-udevd[291786]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.464 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0030e303-4483-4774-b485-2e5a34efd789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 NetworkManager[48993]: <info>  [1764404421.4691] device (tapd2132df3-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.468 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e59a2cb1-fd77-4648-82fe-7182e98aab73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 NetworkManager[48993]: <info>  [1764404421.4715] device (tapd2132df3-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.519 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[616f3a58-6eb6-4b5a-ac2b-2c97d444cc47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.551 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09dbab67-db72-4b64-9e67-093c487bfc1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291797, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.574 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[82d542a5-7431-48eb-b26a-a026a82e143f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734791, 'tstamp': 734791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291798, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734797, 'tstamp': 734797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291798, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.577 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.582 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.582 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.583 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:21.583 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.660 232432 DEBUG nova.compute.manager [req-d6c8073b-ebbf-4725-a795-510d6576f8d8 req-4b0724e1-8163-4b77-b8fe-6ae40b58cacc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.661 232432 DEBUG oslo_concurrency.lockutils [req-d6c8073b-ebbf-4725-a795-510d6576f8d8 req-4b0724e1-8163-4b77-b8fe-6ae40b58cacc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.661 232432 DEBUG oslo_concurrency.lockutils [req-d6c8073b-ebbf-4725-a795-510d6576f8d8 req-4b0724e1-8163-4b77-b8fe-6ae40b58cacc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.661 232432 DEBUG oslo_concurrency.lockutils [req-d6c8073b-ebbf-4725-a795-510d6576f8d8 req-4b0724e1-8163-4b77-b8fe-6ae40b58cacc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.661 232432 DEBUG nova.compute.manager [req-d6c8073b-ebbf-4725-a795-510d6576f8d8 req-4b0724e1-8163-4b77-b8fe-6ae40b58cacc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Processing event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:20:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:21.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.894 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404421.893703, d8142255-87a5-4d36-9908-a5456701e3c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.895 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] VM Started (Lifecycle Event)
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.898 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.906 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.911 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Successfully updated port: a0014539-7251-4e83-847d-c77a1f32859f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.914 232432 INFO nova.virt.libvirt.driver [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Instance spawned successfully.
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.915 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.920 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.926 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.932 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.932 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.932 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.946 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.947 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.948 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.949 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.950 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.951 232432 DEBUG nova.virt.libvirt.driver [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.964 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.965 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404421.89805, d8142255-87a5-4d36-9908-a5456701e3c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.965 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] VM Paused (Lifecycle Event)
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.991 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.997 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404421.9047246, d8142255-87a5-4d36-9908-a5456701e3c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:21 compute-2 nova_compute[232428]: 2025-11-29 08:20:21.997 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] VM Resumed (Lifecycle Event)
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.013 232432 DEBUG nova.compute.manager [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-changed-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.014 232432 DEBUG nova.compute.manager [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Refreshing instance network info cache due to event network-changed-a0014539-7251-4e83-847d-c77a1f32859f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.014 232432 DEBUG oslo_concurrency.lockutils [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.018 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.023 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.030 232432 INFO nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Took 7.98 seconds to spawn the instance on the hypervisor.
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.031 232432 DEBUG nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.046 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.127 232432 INFO nova.compute.manager [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Took 9.32 seconds to build instance.
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.149 232432 DEBUG oslo_concurrency.lockutils [None req-99d00570-c32c-40bc-bee2-12d1053e4296 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:22 compute-2 ceph-mon[77138]: pgmap v2449: 305 pgs: 305 active+clean; 891 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 433 op/s
Nov 29 08:20:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2464548411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:22 compute-2 nova_compute[232428]: 2025-11-29 08:20:22.600 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:20:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.791 232432 DEBUG nova.compute.manager [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.791 232432 DEBUG oslo_concurrency.lockutils [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.791 232432 DEBUG oslo_concurrency.lockutils [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.791 232432 DEBUG oslo_concurrency.lockutils [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.792 232432 DEBUG nova.compute.manager [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:23 compute-2 nova_compute[232428]: 2025-11-29 08:20:23.792 232432 WARNING nova.compute.manager [req-b070a58b-bbdf-4d86-9b93-592803977ef3 req-7b2fcf94-6f9f-4a6c-afc0-56c0e35c82e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received unexpected event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with vm_state active and task_state None.
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.137 232432 DEBUG nova.network.neutron [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Updating instance_info_cache with network_info: [{"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.155 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.155 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Instance network_info: |[{"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.156 232432 DEBUG oslo_concurrency.lockutils [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.156 232432 DEBUG nova.network.neutron [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Refreshing network info cache for port a0014539-7251-4e83-847d-c77a1f32859f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.159 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Start _get_guest_xml network_info=[{"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.163 232432 WARNING nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.168 232432 DEBUG nova.virt.libvirt.host [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.169 232432 DEBUG nova.virt.libvirt.host [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.174 232432 DEBUG nova.virt.libvirt.host [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.174 232432 DEBUG nova.virt.libvirt.host [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.176 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.176 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.176 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.176 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.176 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.177 232432 DEBUG nova.virt.hardware [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:20:24 compute-2 ceph-mon[77138]: pgmap v2450: 305 pgs: 305 active+clean; 891 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 8.7 MiB/s wr, 290 op/s
Nov 29 08:20:24 compute-2 ceph-mon[77138]: osdmap e324: 3 total, 3 up, 3 in
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.180 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3188003033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.700 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.740 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:24 compute-2 nova_compute[232428]: 2025-11-29 08:20:24.746 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3571242896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3188003033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.213 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.215 232432 DEBUG nova.virt.libvirt.vif [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-683916015',display_name='tempest-ServersTestJSON-server-683916015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-683916015',id=143,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXDn9EZrFcKiTArEKnOhE203w9vG8A7a91pCh3YSZOoieyBM6WEkSVQ949eyM2fsavXTFqZ6SygHPLwPiRMEn/vLV2xUgjUQnT8wRjOvSU92oBg8oxZ8LEdm9hOh1CBFQ==',key_name='tempest-key-1058594689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-dddrnnbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:20Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=32059599-6076-4efa-95e2-06e686824adf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.216 232432 DEBUG nova.network.os_vif_util [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.218 232432 DEBUG nova.network.os_vif_util [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.220 232432 DEBUG nova.objects.instance [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32059599-6076-4efa-95e2-06e686824adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.239 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <uuid>32059599-6076-4efa-95e2-06e686824adf</uuid>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <name>instance-0000008f</name>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestJSON-server-683916015</nova:name>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:20:24</nova:creationTime>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <nova:port uuid="a0014539-7251-4e83-847d-c77a1f32859f">
Nov 29 08:20:25 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <system>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="serial">32059599-6076-4efa-95e2-06e686824adf</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="uuid">32059599-6076-4efa-95e2-06e686824adf</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </system>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <os>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </os>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <features>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </features>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/32059599-6076-4efa-95e2-06e686824adf_disk">
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/32059599-6076-4efa-95e2-06e686824adf_disk.config">
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:25 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:14:22:9a"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <target dev="tapa0014539-72"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/console.log" append="off"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <video>
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </video>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:20:25 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:20:25 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:20:25 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:20:25 compute-2 nova_compute[232428]: </domain>
Nov 29 08:20:25 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.240 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Preparing to wait for external event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.240 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.241 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.242 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.243 232432 DEBUG nova.virt.libvirt.vif [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-683916015',display_name='tempest-ServersTestJSON-server-683916015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-683916015',id=143,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXDn9EZrFcKiTArEKnOhE203w9vG8A7a91pCh3YSZOoieyBM6WEkSVQ949eyM2fsavXTFqZ6SygHPLwPiRMEn/vLV2xUgjUQnT8wRjOvSU92oBg8oxZ8LEdm9hOh1CBFQ==',key_name='tempest-key-1058594689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-dddrnnbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:20Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=32059599-6076-4efa-95e2-06e686824adf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.244 232432 DEBUG nova.network.os_vif_util [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.245 232432 DEBUG nova.network.os_vif_util [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.246 232432 DEBUG os_vif [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.248 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.249 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.255 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0014539-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.256 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0014539-72, col_values=(('external_ids', {'iface-id': 'a0014539-7251-4e83-847d-c77a1f32859f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:22:9a', 'vm-uuid': '32059599-6076-4efa-95e2-06e686824adf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.258 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:25 compute-2 NetworkManager[48993]: <info>  [1764404425.2603] manager: (tapa0014539-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.263 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.267 232432 INFO os_vif [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72')
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.334 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.335 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.335 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:14:22:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.335 232432 INFO nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Using config drive
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.377 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:25 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 08:20:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.738 232432 INFO nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Creating config drive at /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config
Nov 29 08:20:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.746 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod8dyw_b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.892 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod8dyw_b" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.936 232432 DEBUG nova.storage.rbd_utils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 32059599-6076-4efa-95e2-06e686824adf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:25 compute-2 nova_compute[232428]: 2025-11-29 08:20:25.941 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config 32059599-6076-4efa-95e2-06e686824adf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.135 232432 DEBUG oslo_concurrency.processutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config 32059599-6076-4efa-95e2-06e686824adf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.136 232432 INFO nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Deleting local config drive /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf/disk.config because it was imported into RBD.
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.159 232432 DEBUG nova.network.neutron [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Updated VIF entry in instance network info cache for port a0014539-7251-4e83-847d-c77a1f32859f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.160 232432 DEBUG nova.network.neutron [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Updating instance_info_cache with network_info: [{"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.178 232432 DEBUG oslo_concurrency.lockutils [req-12ea65c2-eae4-4098-a748-ba4ffa3f5038 req-304d6de0-1e2c-422b-a4b9-7e0cf7aa5d5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-32059599-6076-4efa-95e2-06e686824adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:26 compute-2 kernel: tapa0014539-72: entered promiscuous mode
Nov 29 08:20:26 compute-2 ceph-mon[77138]: pgmap v2452: 305 pgs: 305 active+clean; 902 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 200 op/s
Nov 29 08:20:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3571242896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.2100] manager: (tapa0014539-72): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 08:20:26 compute-2 ovn_controller[134375]: 2025-11-29T08:20:26Z|00646|binding|INFO|Claiming lport a0014539-7251-4e83-847d-c77a1f32859f for this chassis.
Nov 29 08:20:26 compute-2 ovn_controller[134375]: 2025-11-29T08:20:26Z|00647|binding|INFO|a0014539-7251-4e83-847d-c77a1f32859f: Claiming fa:16:3e:14:22:9a 10.100.0.4
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.221 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:22:9a 10.100.0.4'], port_security=['fa:16:3e:14:22:9a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '32059599-6076-4efa-95e2-06e686824adf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=a0014539-7251-4e83-847d-c77a1f32859f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.222 143801 INFO neutron.agent.ovn.metadata.agent [-] Port a0014539-7251-4e83-847d-c77a1f32859f in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.223 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.240 232432 DEBUG nova.compute.manager [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-changed-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.241 232432 DEBUG nova.compute.manager [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Refreshing instance network info cache due to event network-changed-d2132df3-9087-42d6-83a1-0e253e16ba1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.242 232432 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.242 232432 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.243 232432 DEBUG nova.network.neutron [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Refreshing network info cache for port d2132df3-9087-42d6-83a1-0e253e16ba1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.243 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[39fc6953-8dcf-46ee-953f-3c7510c8e3fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.244 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97e6ef02-61 in ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.246 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97e6ef02-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.247 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7138f4db-0376-4349-93ad-5cc2996c969b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.247 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc546b2-e013-437c-b37f-af910bded0d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_controller[134375]: 2025-11-29T08:20:26Z|00648|binding|INFO|Setting lport a0014539-7251-4e83-847d-c77a1f32859f ovn-installed in OVS
Nov 29 08:20:26 compute-2 ovn_controller[134375]: 2025-11-29T08:20:26Z|00649|binding|INFO|Setting lport a0014539-7251-4e83-847d-c77a1f32859f up in Southbound
Nov 29 08:20:26 compute-2 systemd-machined[194747]: New machine qemu-66-instance-0000008f.
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 systemd[1]: Started Virtual Machine qemu-66-instance-0000008f.
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.273 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[379411fe-ccda-4fca-aabc-a263f6c53c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:26 compute-2 systemd-udevd[291982]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.302 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb9dfc7-42f7-4dde-b4c1-1bf372f822a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.3194] device (tapa0014539-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.3217] device (tapa0014539-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.339 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2e17fd6f-98af-483e-8c78-99d4a601cf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.345 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aac5470b-6981-4a62-bffd-a0d1189a8451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.3477] manager: (tap97e6ef02-60): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.392 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d8640de3-9e54-48b8-a164-fcfaacea072b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.396 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e242b836-5bd1-4618-b3ca-92cab9022d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.4251] device (tap97e6ef02-60): carrier: link connected
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.433 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b21afd15-f610-4349-9984-1b2be8292734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.458 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7897bf9e-9af1-4f09-acbc-cadf55834e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739269, 'reachable_time': 36828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292012, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.477 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[268a3f67-a612-4681-8802-b7359ce8dd5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:de28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739269, 'tstamp': 739269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292013, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.499 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9e68c0-e335-42c3-8022-1182f942c279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739269, 'reachable_time': 36828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292014, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.538 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c44027-0241-493c-beac-7a69cac92a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.615 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2817a5c1-0d6f-41b2-b805-c3fddda80c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.617 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.617 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.618 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:26 compute-2 kernel: tap97e6ef02-60: entered promiscuous mode
Nov 29 08:20:26 compute-2 NetworkManager[48993]: <info>  [1764404426.6204] manager: (tap97e6ef02-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.629 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:26 compute-2 ovn_controller[134375]: 2025-11-29T08:20:26Z|00650|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.644 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.645 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.645 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe0d1a4-0def-4d1f-aa6d-dbd99e769bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.646 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:20:26 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:26.647 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'env', 'PROCESS_TAG=haproxy-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97e6ef02-6896-45a2-9eb9-28926c1a7400.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.876 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404426.8755453, 32059599-6076-4efa-95e2-06e686824adf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.876 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] VM Started (Lifecycle Event)
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.922 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.927 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404426.8757205, 32059599-6076-4efa-95e2-06e686824adf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.927 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] VM Paused (Lifecycle Event)
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.943 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.945 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:26 compute-2 nova_compute[232428]: 2025-11-29 08:20:26.960 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:27 compute-2 podman[292085]: 2025-11-29 08:20:27.058066769 +0000 UTC m=+0.063853010 container create 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:20:27 compute-2 systemd[1]: Started libpod-conmon-28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069.scope.
Nov 29 08:20:27 compute-2 podman[292085]: 2025-11-29 08:20:27.019761369 +0000 UTC m=+0.025547660 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:20:27 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:20:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368e1ec1a2cf7f0d33608be5fe84a8c88eea5cc92d26cdcab8db22bd01752349/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:27 compute-2 podman[292085]: 2025-11-29 08:20:27.162361183 +0000 UTC m=+0.168147444 container init 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:20:27 compute-2 podman[292085]: 2025-11-29 08:20:27.167714472 +0000 UTC m=+0.173500703 container start 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:20:27 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [NOTICE]   (292104) : New worker (292106) forked
Nov 29 08:20:27 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [NOTICE]   (292104) : Loading success.
Nov 29 08:20:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:20:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3903903278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:20:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:20:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3903903278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:28 compute-2 ceph-mon[77138]: pgmap v2453: 305 pgs: 305 active+clean; 955 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 10 MiB/s wr, 339 op/s
Nov 29 08:20:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3903903278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:20:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3903903278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.340 232432 DEBUG nova.compute.manager [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.341 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.341 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.342 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.342 232432 DEBUG nova.compute.manager [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Processing event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.342 232432 DEBUG nova.compute.manager [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.343 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.343 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.343 232432 DEBUG oslo_concurrency.lockutils [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.344 232432 DEBUG nova.compute.manager [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] No waiting events found dispatching network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.344 232432 WARNING nova.compute.manager [req-3ecfd19b-8cc8-4c64-bfab-7e09ec705c12 req-0330f699-79db-4363-8b83-e654f3d85e70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received unexpected event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f for instance with vm_state building and task_state spawning.
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.345 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.364 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404428.3629627, 32059599-6076-4efa-95e2-06e686824adf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.365 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] VM Resumed (Lifecycle Event)
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.367 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.374 232432 INFO nova.virt.libvirt.driver [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] Instance spawned successfully.
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.374 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.413 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.418 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.430 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.430 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.431 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.431 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.432 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.432 232432 DEBUG nova.virt.libvirt.driver [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.590 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.610 232432 DEBUG nova.network.neutron [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updated VIF entry in instance network info cache for port d2132df3-9087-42d6-83a1-0e253e16ba1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.611 232432 DEBUG nova.network.neutron [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updating instance_info_cache with network_info: [{"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.696 232432 INFO nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Took 8.53 seconds to spawn the instance on the hypervisor.
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.696 232432 DEBUG nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.703 232432 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d8142255-87a5-4d36-9908-a5456701e3c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.818 232432 INFO nova.compute.manager [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Took 9.77 seconds to build instance.
Nov 29 08:20:28 compute-2 nova_compute[232428]: 2025-11-29 08:20:28.879 232432 DEBUG oslo_concurrency.lockutils [None req-d61c1dc0-d995-46e9-abd6-108c9f3b36bd 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:29 compute-2 ceph-mon[77138]: pgmap v2454: 305 pgs: 305 active+clean; 963 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.0 MiB/s wr, 300 op/s
Nov 29 08:20:29 compute-2 podman[292116]: 2025-11-29 08:20:29.675023708 +0000 UTC m=+0.079289524 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:20:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:29.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.890 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.891 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.891 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.891 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.892 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.893 232432 INFO nova.compute.manager [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Terminating instance
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.895 232432 DEBUG nova.compute.manager [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:20:29 compute-2 kernel: tapa0014539-72 (unregistering): left promiscuous mode
Nov 29 08:20:29 compute-2 NetworkManager[48993]: <info>  [1764404429.9676] device (tapa0014539-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:20:29 compute-2 ovn_controller[134375]: 2025-11-29T08:20:29Z|00651|binding|INFO|Releasing lport a0014539-7251-4e83-847d-c77a1f32859f from this chassis (sb_readonly=0)
Nov 29 08:20:29 compute-2 ovn_controller[134375]: 2025-11-29T08:20:29Z|00652|binding|INFO|Setting lport a0014539-7251-4e83-847d-c77a1f32859f down in Southbound
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:29 compute-2 ovn_controller[134375]: 2025-11-29T08:20:29Z|00653|binding|INFO|Removing iface tapa0014539-72 ovn-installed in OVS
Nov 29 08:20:29 compute-2 nova_compute[232428]: 2025-11-29 08:20:29.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:29.984 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:22:9a 10.100.0.4'], port_security=['fa:16:3e:14:22:9a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '32059599-6076-4efa-95e2-06e686824adf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=a0014539-7251-4e83-847d-c77a1f32859f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:29.985 143801 INFO neutron.agent.ovn.metadata.agent [-] Port a0014539-7251-4e83-847d-c77a1f32859f in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis
Nov 29 08:20:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:29.987 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97e6ef02-6896-45a2-9eb9-28926c1a7400, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:20:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:29.988 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b83b33d-dc06-451a-8561-565d61a79934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:29.988 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace which is not needed anymore
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.008 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:30 compute-2 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 29 08:20:30 compute-2 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008f.scope: Consumed 2.253s CPU time.
Nov 29 08:20:30 compute-2 systemd-machined[194747]: Machine qemu-66-instance-0000008f terminated.
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.143 232432 INFO nova.virt.libvirt.driver [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] Instance destroyed successfully.
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.145 232432 DEBUG nova.objects.instance [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid 32059599-6076-4efa-95e2-06e686824adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.161 232432 DEBUG nova.virt.libvirt.vif [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-683916015',display_name='tempest-ServersTestJSON-server-683916015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-683916015',id=143,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXDn9EZrFcKiTArEKnOhE203w9vG8A7a91pCh3YSZOoieyBM6WEkSVQ949eyM2fsavXTFqZ6SygHPLwPiRMEn/vLV2xUgjUQnT8wRjOvSU92oBg8oxZ8LEdm9hOh1CBFQ==',key_name='tempest-key-1058594689',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-dddrnnbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:28Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=32059599-6076-4efa-95e2-06e686824adf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.162 232432 DEBUG nova.network.os_vif_util [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "a0014539-7251-4e83-847d-c77a1f32859f", "address": "fa:16:3e:14:22:9a", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0014539-72", "ovs_interfaceid": "a0014539-7251-4e83-847d-c77a1f32859f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.164 232432 DEBUG nova.network.os_vif_util [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.165 232432 DEBUG os_vif [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.168 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.169 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0014539-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.173 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.177 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.181 232432 INFO os_vif [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:22:9a,bridge_name='br-int',has_traffic_filtering=True,id=a0014539-7251-4e83-847d-c77a1f32859f,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0014539-72')
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [NOTICE]   (292104) : haproxy version is 2.8.14-c23fe91
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [NOTICE]   (292104) : path to executable is /usr/sbin/haproxy
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [WARNING]  (292104) : Exiting Master process...
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [WARNING]  (292104) : Exiting Master process...
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [ALERT]    (292104) : Current worker (292106) exited with code 143 (Terminated)
Nov 29 08:20:30 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292100]: [WARNING]  (292104) : All workers exited. Exiting... (0)
Nov 29 08:20:30 compute-2 systemd[1]: libpod-28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069.scope: Deactivated successfully.
Nov 29 08:20:30 compute-2 podman[292167]: 2025-11-29 08:20:30.224610242 +0000 UTC m=+0.059746192 container died 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:20:30 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069-userdata-shm.mount: Deactivated successfully.
Nov 29 08:20:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-368e1ec1a2cf7f0d33608be5fe84a8c88eea5cc92d26cdcab8db22bd01752349-merged.mount: Deactivated successfully.
Nov 29 08:20:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:30 compute-2 podman[292167]: 2025-11-29 08:20:30.29324295 +0000 UTC m=+0.128378880 container cleanup 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:20:30 compute-2 systemd[1]: libpod-conmon-28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069.scope: Deactivated successfully.
Nov 29 08:20:30 compute-2 podman[292220]: 2025-11-29 08:20:30.377093155 +0000 UTC m=+0.055288152 container remove 28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[57501382-09b5-4ff5-aa19-088945b99a6b]: (4, ('Sat Nov 29 08:20:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069)\n28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069\nSat Nov 29 08:20:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069)\n28f64a8ee4b2456ac83575de969dca649f7efd9120cae4b76e7de44ae984b069\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.386 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a0380d-37d8-48f7-8e9f-1283c855cdb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.388 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:30 compute-2 kernel: tap97e6ef02-60: left promiscuous mode
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.391 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.409 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.411 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b44d79-cde8-45fa-83c8-49fdbfc1ba2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.434 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[af237e23-bb64-4c3d-a9d0-7e021ad6791e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.436 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f0973146-f5de-4fbc-ae37-dbe8c7e5b503]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.454 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f35a184-7577-41b8-bcab-1ad96f18a289]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739260, 'reachable_time': 44889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292237, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 systemd[1]: run-netns-ovnmeta\x2d97e6ef02\x2d6896\x2d45a2\x2d9eb9\x2d28926c1a7400.mount: Deactivated successfully.
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.459 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:20:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:30.459 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[46694a01-a013-4cdc-b20c-4043bfad46f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.483 232432 DEBUG nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-unplugged-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.485 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.486 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.487 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.488 232432 DEBUG nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] No waiting events found dispatching network-vif-unplugged-a0014539-7251-4e83-847d-c77a1f32859f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.489 232432 DEBUG nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-unplugged-a0014539-7251-4e83-847d-c77a1f32859f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.490 232432 DEBUG nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.490 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32059599-6076-4efa-95e2-06e686824adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.490 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.491 232432 DEBUG oslo_concurrency.lockutils [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.491 232432 DEBUG nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] No waiting events found dispatching network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.492 232432 WARNING nova.compute.manager [req-40bb8b4c-a5a0-4448-92ed-75a5fd8c1fc2 req-a2a1b9b4-7a22-4784-a877-41defeec2fa5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received unexpected event network-vif-plugged-a0014539-7251-4e83-847d-c77a1f32859f for instance with vm_state active and task_state deleting.
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.661 232432 INFO nova.virt.libvirt.driver [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Deleting instance files /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf_del
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.663 232432 INFO nova.virt.libvirt.driver [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Deletion of /var/lib/nova/instances/32059599-6076-4efa-95e2-06e686824adf_del complete
Nov 29 08:20:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.741 232432 INFO nova.compute.manager [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.743 232432 DEBUG oslo.service.loopingcall [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.743 232432 DEBUG nova.compute.manager [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:20:30 compute-2 nova_compute[232428]: 2025-11-29 08:20:30.744 232432 DEBUG nova.network.neutron [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.319 232432 DEBUG nova.network.neutron [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.337 232432 INFO nova.compute.manager [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] Took 0.59 seconds to deallocate network for instance.
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.417 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.418 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:31 compute-2 sudo[292238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:31 compute-2 sudo[292238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:31 compute-2 sudo[292238]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:31 compute-2 nova_compute[232428]: 2025-11-29 08:20:31.527 232432 DEBUG oslo_concurrency.processutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:31 compute-2 ceph-mon[77138]: pgmap v2455: 305 pgs: 305 active+clean; 971 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 340 op/s
Nov 29 08:20:31 compute-2 sudo[292263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:20:31 compute-2 sudo[292263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:31 compute-2 sudo[292263]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:31 compute-2 sudo[292289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:31 compute-2 sudo[292289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:31 compute-2 sudo[292289]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:31 compute-2 sudo[292334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:20:31 compute-2 sudo[292334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2453119071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.027 232432 DEBUG oslo_concurrency.processutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.041 232432 DEBUG nova.compute.provider_tree [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.070 232432 DEBUG nova.scheduler.client.report [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.105 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.137 232432 INFO nova.scheduler.client.report [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance 32059599-6076-4efa-95e2-06e686824adf
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.240 232432 DEBUG oslo_concurrency.lockutils [None req-5e1300ab-fe65-4ce1-b9d9-34c079326f07 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "32059599-6076-4efa-95e2-06e686824adf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:32.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:32 compute-2 sudo[292334]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:32 compute-2 nova_compute[232428]: 2025-11-29 08:20:32.552 232432 DEBUG nova.compute.manager [req-5900c63f-a6bb-4eda-8ba4-faf12d1b0d5b req-c94f25b6-f7d9-436b-b050-d6bc12c6383a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32059599-6076-4efa-95e2-06e686824adf] Received event network-vif-deleted-a0014539-7251-4e83-847d-c77a1f32859f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2453119071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:33.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:34 compute-2 ceph-mon[77138]: pgmap v2456: 305 pgs: 305 active+clean; 971 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 340 op/s
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:20:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:34 compute-2 podman[292390]: 2025-11-29 08:20:34.736835939 +0000 UTC m=+0.122462364 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.174 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:35 compute-2 sudo[292412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:35 compute-2 sudo[292412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:35 compute-2 sudo[292412]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:35 compute-2 sudo[292437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:35 compute-2 sudo[292437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:35 compute-2 sudo[292437]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.860 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.861 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.889 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.973 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.973 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.981 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:20:35 compute-2 nova_compute[232428]: 2025-11-29 08:20:35.981 232432 INFO nova.compute.claims [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:20:36 compute-2 ovn_controller[134375]: 2025-11-29T08:20:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:39:64 10.100.0.13
Nov 29 08:20:36 compute-2 ovn_controller[134375]: 2025-11-29T08:20:36Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:39:64 10.100.0.13
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.183 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:36 compute-2 ceph-mon[77138]: pgmap v2457: 305 pgs: 305 active+clean; 960 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.7 MiB/s wr, 311 op/s
Nov 29 08:20:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1851070294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.713 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.723 232432 DEBUG nova.compute.provider_tree [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.739 232432 DEBUG nova.scheduler.client.report [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.758 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.758 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.796 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.796 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.818 232432 INFO nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.840 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.932 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.934 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.935 232432 INFO nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Creating image(s)
Nov 29 08:20:36 compute-2 nova_compute[232428]: 2025-11-29 08:20:36.976 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.015 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.048 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.052 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.094 232432 DEBUG nova.policy [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.133 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.134 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.135 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.135 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.170 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.175 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 69aa1e78-8728-455d-9d19-eaa720f597b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1851070294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.524 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 69aa1e78-8728-455d-9d19-eaa720f597b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.622 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:20:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:37.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.921 232432 DEBUG nova.objects.instance [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid 69aa1e78-8728-455d-9d19-eaa720f597b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.934 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Successfully created port: 7e9e5bb3-536d-4044-ba89-87caa7779a1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.947 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.947 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Ensure instance console log exists: /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.947 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.948 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:37 compute-2 nova_compute[232428]: 2025-11-29 08:20:37.948 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:38 compute-2 ceph-mon[77138]: pgmap v2458: 305 pgs: 305 active+clean; 933 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.0 MiB/s wr, 320 op/s
Nov 29 08:20:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3670188589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/328847071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.739 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Successfully updated port: 7e9e5bb3-536d-4044-ba89-87caa7779a1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.756 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.757 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.757 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.835 232432 DEBUG nova.compute.manager [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-changed-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.836 232432 DEBUG nova.compute.manager [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Refreshing instance network info cache due to event network-changed-7e9e5bb3-536d-4044-ba89-87caa7779a1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.836 232432 DEBUG oslo_concurrency.lockutils [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:38 compute-2 nova_compute[232428]: 2025-11-29 08:20:38.963 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:20:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1105626513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:39.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.117 232432 DEBUG nova.network.neutron [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Updating instance_info_cache with network_info: [{"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.153 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.154 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Instance network_info: |[{"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.155 232432 DEBUG oslo_concurrency.lockutils [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.156 232432 DEBUG nova.network.neutron [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Refreshing network info cache for port 7e9e5bb3-536d-4044-ba89-87caa7779a1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.162 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Start _get_guest_xml network_info=[{"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.171 232432 WARNING nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.187 232432 DEBUG nova.virt.libvirt.host [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.188 232432 DEBUG nova.virt.libvirt.host [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.192 232432 DEBUG nova.virt.libvirt.host [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.193 232432 DEBUG nova.virt.libvirt.host [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.195 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.196 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.197 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.197 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.198 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.198 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.199 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.199 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.200 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.200 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.201 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.202 232432 DEBUG nova.virt.hardware [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.208 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:40 compute-2 sudo[292653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:40 compute-2 sudo[292653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:40 compute-2 sudo[292653]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:40 compute-2 ceph-mon[77138]: pgmap v2459: 305 pgs: 305 active+clean; 965 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 224 op/s
Nov 29 08:20:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:20:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:20:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:40 compute-2 sudo[292679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:20:40 compute-2 sudo[292679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:40 compute-2 sudo[292679]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/579870382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.657 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.691 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:40 compute-2 nova_compute[232428]: 2025-11-29 08:20:40.697 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4016446381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.226 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.228 232432 DEBUG nova.virt.libvirt.vif [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=144,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-rwn56saz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:36Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=69aa1e78-8728-455d-9d19-eaa720f597b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.229 232432 DEBUG nova.network.os_vif_util [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.230 232432 DEBUG nova.network.os_vif_util [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.231 232432 DEBUG nova.objects.instance [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69aa1e78-8728-455d-9d19-eaa720f597b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.249 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <uuid>69aa1e78-8728-455d-9d19-eaa720f597b1</uuid>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <name>instance-00000090</name>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestJSON-server-1402160904</nova:name>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:20:40</nova:creationTime>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <nova:port uuid="7e9e5bb3-536d-4044-ba89-87caa7779a1a">
Nov 29 08:20:41 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <system>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="serial">69aa1e78-8728-455d-9d19-eaa720f597b1</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="uuid">69aa1e78-8728-455d-9d19-eaa720f597b1</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </system>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <os>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </os>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <features>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </features>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/69aa1e78-8728-455d-9d19-eaa720f597b1_disk">
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config">
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </source>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:20:41 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:ea:c8:af"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <target dev="tap7e9e5bb3-53"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/console.log" append="off"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <video>
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </video>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:20:41 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:20:41 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:20:41 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:20:41 compute-2 nova_compute[232428]: </domain>
Nov 29 08:20:41 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.250 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Preparing to wait for external event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.251 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.252 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.252 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.253 232432 DEBUG nova.virt.libvirt.vif [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=144,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-rwn56saz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:36Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=69aa1e78-8728-455d-9d19-eaa720f597b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.254 232432 DEBUG nova.network.os_vif_util [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.255 232432 DEBUG nova.network.os_vif_util [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.256 232432 DEBUG os_vif [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.257 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.258 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.259 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.265 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e9e5bb3-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.266 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e9e5bb3-53, col_values=(('external_ids', {'iface-id': '7e9e5bb3-536d-4044-ba89-87caa7779a1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:c8:af', 'vm-uuid': '69aa1e78-8728-455d-9d19-eaa720f597b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-2 NetworkManager[48993]: <info>  [1764404441.2704] manager: (tap7e9e5bb3-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 08:20:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/579870382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4016446381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.273 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.283 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.286 232432 INFO os_vif [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53')
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.359 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.359 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.360 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:ea:c8:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.360 232432 INFO nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Using config drive
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.393 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.508 232432 DEBUG nova.network.neutron [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Updated VIF entry in instance network info cache for port 7e9e5bb3-536d-4044-ba89-87caa7779a1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.509 232432 DEBUG nova.network.neutron [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Updating instance_info_cache with network_info: [{"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.513 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.525 232432 DEBUG oslo_concurrency.lockutils [req-9953f8dd-14b7-4129-b2cd-7ddc885f92c3 req-9934f1c1-d7ab-46be-a0cc-4bb627371ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-69aa1e78-8728-455d-9d19-eaa720f597b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.947 232432 INFO nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Creating config drive at /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config
Nov 29 08:20:41 compute-2 nova_compute[232428]: 2025-11-29 08:20:41.958 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl94jfnwj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.111 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl94jfnwj" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.168 232432 DEBUG nova.storage.rbd_utils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.173 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config 69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:42 compute-2 ceph-mon[77138]: pgmap v2460: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 298 op/s
Nov 29 08:20:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.397 232432 DEBUG oslo_concurrency.processutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config 69aa1e78-8728-455d-9d19-eaa720f597b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.398 232432 INFO nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Deleting local config drive /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1/disk.config because it was imported into RBD.
Nov 29 08:20:42 compute-2 kernel: tap7e9e5bb3-53: entered promiscuous mode
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.4604] manager: (tap7e9e5bb3-53): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 08:20:42 compute-2 ovn_controller[134375]: 2025-11-29T08:20:42Z|00654|binding|INFO|Claiming lport 7e9e5bb3-536d-4044-ba89-87caa7779a1a for this chassis.
Nov 29 08:20:42 compute-2 ovn_controller[134375]: 2025-11-29T08:20:42Z|00655|binding|INFO|7e9e5bb3-536d-4044-ba89-87caa7779a1a: Claiming fa:16:3e:ea:c8:af 10.100.0.9
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.462 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.472 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:c8:af 10.100.0.9'], port_security=['fa:16:3e:ea:c8:af 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '69aa1e78-8728-455d-9d19-eaa720f597b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7e9e5bb3-536d-4044-ba89-87caa7779a1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.476 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7e9e5bb3-536d-4044-ba89-87caa7779a1a in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.480 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:42 compute-2 ovn_controller[134375]: 2025-11-29T08:20:42Z|00656|binding|INFO|Setting lport 7e9e5bb3-536d-4044-ba89-87caa7779a1a ovn-installed in OVS
Nov 29 08:20:42 compute-2 ovn_controller[134375]: 2025-11-29T08:20:42Z|00657|binding|INFO|Setting lport 7e9e5bb3-536d-4044-ba89-87caa7779a1a up in Southbound
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.484 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.489 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 systemd-udevd[292840]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.495 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[15bec2fa-4395-4d0d-a2e7-6428b4acc1e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.496 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97e6ef02-61 in ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.498 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97e6ef02-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.498 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e5558d-e668-440d-8f04-c835be791629]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.499 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5127f24e-5a0e-4233-8036-025ffe166f84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 systemd-machined[194747]: New machine qemu-67-instance-00000090.
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.5104] device (tap7e9e5bb3-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.5119] device (tap7e9e5bb3-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:20:42 compute-2 systemd[1]: Started Virtual Machine qemu-67-instance-00000090.
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.513 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c8977395-c978-4936-aed2-3309c4685259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.542 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b009b37e-15d3-4042-b68a-cc0aa83c3660]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.592 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1a972d09-6ba9-48b0-afa8-d502eaa9810f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.597 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[61b71243-b8a8-4675-970f-c87f3c7e28de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.5992] manager: (tap97e6ef02-60): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.639 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9702d4ab-65a7-40b9-99b1-9c64a31cfe3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.643 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0b99fd20-cc09-46d1-9f41-7ab128d5e55b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.6652] device (tap97e6ef02-60): carrier: link connected
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.672 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2a4e75-27f8-4abb-9c6f-7ce1fe776b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.689 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d795b3b3-eef5-408d-a048-e17b7d7d0923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740893, 'reachable_time': 37306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292873, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.712 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6c63c89e-d2c7-4226-87a4-55dbf0f544b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:de28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 740893, 'tstamp': 740893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292874, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.729 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80a38234-1422-4089-aa3a-24395bcd2263]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740893, 'reachable_time': 37306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292875, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.775 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4d08e4-5305-4918-9f96-c91b70c0b91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.868 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[74c1fb62-94cf-4df7-8eac-30d21d89cf22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.870 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.870 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.870 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 NetworkManager[48993]: <info>  [1764404442.8736] manager: (tap97e6ef02-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 08:20:42 compute-2 kernel: tap97e6ef02-60: entered promiscuous mode
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.881 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.882 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 ovn_controller[134375]: 2025-11-29T08:20:42Z|00658|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 nova_compute[232428]: 2025-11-29 08:20:42.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.900 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.901 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9842585d-9274-4ac5-9e9e-b2ec373a9f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.901 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:20:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:20:42.902 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'env', 'PROCESS_TAG=haproxy-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97e6ef02-6896-45a2-9eb9-28926c1a7400.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.004 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404443.0040665, 69aa1e78-8728-455d-9d19-eaa720f597b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.005 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] VM Started (Lifecycle Event)
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.080 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.086 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404443.0044582, 69aa1e78-8728-455d-9d19-eaa720f597b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.086 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] VM Paused (Lifecycle Event)
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.104 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.108 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:43 compute-2 nova_compute[232428]: 2025-11-29 08:20:43.126 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:43 compute-2 ceph-mon[77138]: pgmap v2461: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.1 MiB/s wr, 215 op/s
Nov 29 08:20:43 compute-2 podman[292949]: 2025-11-29 08:20:43.500861755 +0000 UTC m=+0.096198272 container create 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:20:43 compute-2 podman[292949]: 2025-11-29 08:20:43.451236852 +0000 UTC m=+0.046573379 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:20:43 compute-2 systemd[1]: Started libpod-conmon-36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655.scope.
Nov 29 08:20:43 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:20:43 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2e32acddab416c1d9d3698866abd32a8b0e041a2cd1657428747233da8699f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:43 compute-2 podman[292949]: 2025-11-29 08:20:43.650279003 +0000 UTC m=+0.245615620 container init 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:20:43 compute-2 podman[292949]: 2025-11-29 08:20:43.663020641 +0000 UTC m=+0.258357178 container start 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:20:43 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [NOTICE]   (292969) : New worker (292971) forked
Nov 29 08:20:43 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [NOTICE]   (292969) : Loading success.
Nov 29 08:20:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:44.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/798225702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.877 232432 DEBUG nova.compute.manager [req-33e96cba-d580-428c-a16f-54db3e16e874 req-10188ef9-2342-47ea-ad27-e60ddab70510 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.877 232432 DEBUG oslo_concurrency.lockutils [req-33e96cba-d580-428c-a16f-54db3e16e874 req-10188ef9-2342-47ea-ad27-e60ddab70510 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.878 232432 DEBUG oslo_concurrency.lockutils [req-33e96cba-d580-428c-a16f-54db3e16e874 req-10188ef9-2342-47ea-ad27-e60ddab70510 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.878 232432 DEBUG oslo_concurrency.lockutils [req-33e96cba-d580-428c-a16f-54db3e16e874 req-10188ef9-2342-47ea-ad27-e60ddab70510 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.879 232432 DEBUG nova.compute.manager [req-33e96cba-d580-428c-a16f-54db3e16e874 req-10188ef9-2342-47ea-ad27-e60ddab70510 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Processing event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.880 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.886 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404444.8857465, 69aa1e78-8728-455d-9d19-eaa720f597b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.886 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] VM Resumed (Lifecycle Event)
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.930 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.931 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.936 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.938 232432 INFO nova.virt.libvirt.driver [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Instance spawned successfully.
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.939 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.959 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.964 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.965 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.965 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.966 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.966 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:44 compute-2 nova_compute[232428]: 2025-11-29 08:20:44.967 232432 DEBUG nova.virt.libvirt.driver [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.142 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404430.1406665, 32059599-6076-4efa-95e2-06e686824adf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.142 232432 INFO nova.compute.manager [-] [instance: 32059599-6076-4efa-95e2-06e686824adf] VM Stopped (Lifecycle Event)
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.154 232432 INFO nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Took 8.22 seconds to spawn the instance on the hypervisor.
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.155 232432 DEBUG nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.177 232432 DEBUG nova.compute.manager [None req-7f0b9250-b5e1-4f91-921d-e468b2e46a26 - - - - - -] [instance: 32059599-6076-4efa-95e2-06e686824adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.222 232432 INFO nova.compute.manager [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Took 9.28 seconds to build instance.
Nov 29 08:20:45 compute-2 nova_compute[232428]: 2025-11-29 08:20:45.237 232432 DEBUG oslo_concurrency.lockutils [None req-663afe35-7d35-446a-9a37-1737013b3299 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:45 compute-2 ceph-mon[77138]: pgmap v2462: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 29 08:20:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2194724653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:46.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.960 232432 DEBUG nova.compute.manager [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.961 232432 DEBUG oslo_concurrency.lockutils [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.961 232432 DEBUG oslo_concurrency.lockutils [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.961 232432 DEBUG oslo_concurrency.lockutils [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.961 232432 DEBUG nova.compute.manager [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] No waiting events found dispatching network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:46 compute-2 nova_compute[232428]: 2025-11-29 08:20:46.962 232432 WARNING nova.compute.manager [req-fc3da2fe-f800-4777-b4ea-49e82d065cc8 req-d154a269-d9d0-4c9c-a952-6454d633d9c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received unexpected event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a for instance with vm_state active and task_state None.
Nov 29 08:20:47 compute-2 ceph-mon[77138]: pgmap v2463: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 29 08:20:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:47 compute-2 podman[292981]: 2025-11-29 08:20:47.760833028 +0000 UTC m=+0.140331084 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:20:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:49 compute-2 ceph-mon[77138]: pgmap v2464: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.4 MiB/s wr, 345 op/s
Nov 29 08:20:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:50.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:50 compute-2 nova_compute[232428]: 2025-11-29 08:20:50.668 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:51 compute-2 nova_compute[232428]: 2025-11-29 08:20:51.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:51 compute-2 nova_compute[232428]: 2025-11-29 08:20:51.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:51 compute-2 ceph-mon[77138]: pgmap v2465: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.7 MiB/s wr, 457 op/s
Nov 29 08:20:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2471549231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:52 compute-2 nova_compute[232428]: 2025-11-29 08:20:52.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:52.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:53 compute-2 nova_compute[232428]: 2025-11-29 08:20:53.224 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:53 compute-2 ceph-mon[77138]: pgmap v2466: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 97 KiB/s wr, 333 op/s
Nov 29 08:20:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:55 compute-2 nova_compute[232428]: 2025-11-29 08:20:55.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:55 compute-2 ceph-mon[77138]: pgmap v2467: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 829 KiB/s wr, 399 op/s
Nov 29 08:20:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/181383023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:55 compute-2 sudo[293012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:55 compute-2 sudo[293012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:55 compute-2 sudo[293012]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:20:55 compute-2 sudo[293038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:55 compute-2 sudo[293038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:55 compute-2 sudo[293038]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:56 compute-2 nova_compute[232428]: 2025-11-29 08:20:56.274 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:56 compute-2 nova_compute[232428]: 2025-11-29 08:20:56.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4066490339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2242993027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:57 compute-2 nova_compute[232428]: 2025-11-29 08:20:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:57 compute-2 nova_compute[232428]: 2025-11-29 08:20:57.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:20:57 compute-2 nova_compute[232428]: 2025-11-29 08:20:57.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:20:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:20:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:20:57 compute-2 ceph-mon[77138]: pgmap v2468: 305 pgs: 305 active+clean; 1.1 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.2 MiB/s rd, 1.8 MiB/s wr, 453 op/s
Nov 29 08:20:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2316166036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:20:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:20:58 compute-2 nova_compute[232428]: 2025-11-29 08:20:58.767 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:58 compute-2 nova_compute[232428]: 2025-11-29 08:20:58.768 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:58 compute-2 nova_compute[232428]: 2025-11-29 08:20:58.769 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:20:58 compute-2 nova_compute[232428]: 2025-11-29 08:20:58.770 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:20:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:20:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:59.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:21:00 compute-2 ceph-mon[77138]: pgmap v2469: 305 pgs: 305 active+clean; 1.1 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.2 MiB/s wr, 385 op/s
Nov 29 08:21:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:00.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:00 compute-2 podman[293066]: 2025-11-29 08:21:00.712465149 +0000 UTC m=+0.101324962 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:21:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:00 compute-2 ovn_controller[134375]: 2025-11-29T08:21:00Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:c8:af 10.100.0.9
Nov 29 08:21:00 compute-2 ovn_controller[134375]: 2025-11-29T08:21:00Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:c8:af 10.100.0.9
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.942 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.943 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.944 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.944 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.945 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.947 232432 INFO nova.compute.manager [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Terminating instance
Nov 29 08:21:00 compute-2 nova_compute[232428]: 2025-11-29 08:21:00.949 232432 DEBUG nova.compute.manager [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.278 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.525 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 kernel: tapd2132df3-90 (unregistering): left promiscuous mode
Nov 29 08:21:01 compute-2 NetworkManager[48993]: <info>  [1764404461.6821] device (tapd2132df3-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.694 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00659|binding|INFO|Releasing lport d2132df3-9087-42d6-83a1-0e253e16ba1f from this chassis (sb_readonly=0)
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00660|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f down in Southbound
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00661|binding|INFO|Removing iface tapd2132df3-90 ovn-installed in OVS
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 ceph-mon[77138]: pgmap v2470: 305 pgs: 305 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.5 MiB/s wr, 396 op/s
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.713 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.732 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:39:64 10.100.0.13'], port_security=['fa:16:3e:5c:39:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd8142255-87a5-4d36-9908-a5456701e3c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9ebed7a-d169-4791-b080-c494336a271e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d2132df3-9087-42d6-83a1-0e253e16ba1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.735 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d2132df3-9087-42d6-83a1-0e253e16ba1f in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.739 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:21:01 compute-2 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 29 08:21:01 compute-2 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008e.scope: Consumed 15.900s CPU time.
Nov 29 08:21:01 compute-2 systemd-machined[194747]: Machine qemu-65-instance-0000008e terminated.
Nov 29 08:21:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.761 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a4950-2f9e-4f22-a65e-73235f59f163]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:01 compute-2 kernel: tapd2132df3-90: entered promiscuous mode
Nov 29 08:21:01 compute-2 kernel: tapd2132df3-90 (unregistering): left promiscuous mode
Nov 29 08:21:01 compute-2 NetworkManager[48993]: <info>  [1764404461.8835] manager: (tapd2132df3-90): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00662|binding|INFO|Claiming lport d2132df3-9087-42d6-83a1-0e253e16ba1f for this chassis.
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00663|binding|INFO|d2132df3-9087-42d6-83a1-0e253e16ba1f: Claiming fa:16:3e:5c:39:64 10.100.0.13
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.896 232432 INFO nova.virt.libvirt.driver [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Instance destroyed successfully.
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.897 232432 DEBUG nova.objects.instance [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'resources' on Instance uuid d8142255-87a5-4d36-9908-a5456701e3c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.902 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:39:64 10.100.0.13'], port_security=['fa:16:3e:5c:39:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd8142255-87a5-4d36-9908-a5456701e3c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9ebed7a-d169-4791-b080-c494336a271e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d2132df3-9087-42d6-83a1-0e253e16ba1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.905 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f74011f2-38ca-411c-bbc2-ec34d65db1d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.909 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3534ac73-51b3-445c-9815-28eeb8acedc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00664|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f ovn-installed in OVS
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00665|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f up in Southbound
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00666|binding|INFO|Releasing lport d2132df3-9087-42d6-83a1-0e253e16ba1f from this chassis (sb_readonly=1)
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00667|if_status|INFO|Dropped 2 log messages in last 981 seconds (most recently, 981 seconds ago) due to excessive rate
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00668|if_status|INFO|Not setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f down as sb is readonly
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00669|binding|INFO|Removing iface tapd2132df3-90 ovn-installed in OVS
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00670|binding|INFO|Releasing lport d2132df3-9087-42d6-83a1-0e253e16ba1f from this chassis (sb_readonly=0)
Nov 29 08:21:01 compute-2 ovn_controller[134375]: 2025-11-29T08:21:01Z|00671|binding|INFO|Setting lport d2132df3-9087-42d6-83a1-0e253e16ba1f down in Southbound
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.923 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:39:64 10.100.0.13'], port_security=['fa:16:3e:5c:39:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd8142255-87a5-4d36-9908-a5456701e3c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9ebed7a-d169-4791-b080-c494336a271e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d2132df3-9087-42d6-83a1-0e253e16ba1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.943 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bc335a-d60b-4ba1-8780-25b893b53ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.945 232432 DEBUG nova.virt.libvirt.vif [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-198631012',display_name='tempest-AttachVolumeNegativeTest-server-198631012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-198631012',id=142,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwHl78errYrJFkmma0i5ieoie7I7fwaj44zdFp/3Fn3Jwg2kBg2Hoebwi84sZnLnrseRly93c2dO4W6/57XakSStgW+oCTcJZfyq33Ol3rDeFf1hNRp3bLOlP0aiSfzew==',key_name='tempest-keypair-616488267',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-u6167f73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=d8142255-87a5-4d36-9908-a5456701e3c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.946 232432 DEBUG nova.network.os_vif_util [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "address": "fa:16:3e:5c:39:64", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2132df3-90", "ovs_interfaceid": "d2132df3-9087-42d6-83a1-0e253e16ba1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.947 232432 DEBUG nova.network.os_vif_util [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.947 232432 DEBUG os_vif [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.949 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.950 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2132df3-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.951 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-2 nova_compute[232428]: 2025-11-29 08:21:01.955 232432 INFO os_vif [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:39:64,bridge_name='br-int',has_traffic_filtering=True,id=d2132df3-9087-42d6-83a1-0e253e16ba1f,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2132df3-90')
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.969 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4783f14c-0a2e-490d-ba8e-06ea7d9d1dd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293106, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:01.998 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[022e3f5a-48a8-4381-9c60-aa95d0cb427d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734791, 'tstamp': 734791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293122, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734797, 'tstamp': 734797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293122, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.000 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.003 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.003 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.003 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.004 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.005 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d2132df3-9087-42d6-83a1-0e253e16ba1f in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.007 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.029 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7c766921-de5f-4ade-bdea-4852e5264ba1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.066 232432 DEBUG nova.compute.manager [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-unplugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.067 232432 DEBUG oslo_concurrency.lockutils [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.067 232432 DEBUG oslo_concurrency.lockutils [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.067 232432 DEBUG oslo_concurrency.lockutils [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.067 232432 DEBUG nova.compute.manager [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-unplugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.067 232432 DEBUG nova.compute.manager [req-cca67329-530b-4726-aa21-e124dfff523b req-e844762a-599d-4778-91a0-9e7fca43ba4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-unplugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.071 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b82bdea7-e43c-4256-9d71-1101f14832fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.074 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a9580bf1-e186-4f0a-890b-e231ac9d0d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.114 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[620d73e4-ef1e-4312-afb1-da51a83f3dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.137 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09d50063-782b-4c7e-b360-34735be802b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293131, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.162 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9f4d5a-16c7-4fa8-adb8-8dbf0fff3f60]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734791, 'tstamp': 734791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293132, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734797, 'tstamp': 734797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293132, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.164 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.168 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.168 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.169 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.170 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.170 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.172 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d2132df3-9087-42d6-83a1-0e253e16ba1f in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.174 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.196 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[51508c6b-a24f-4d77-bcd1-4f46b7dc9d0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.241 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[058839a1-6eae-4674-9850-355de1cff620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.244 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bf2b6b-5666-48a0-b682-1cb176879466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.284 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb2f138-4704-4893-b643-e703bc5ae1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.305 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[faba0cfa-a5fd-4808-98ca-a43833db47c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734773, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293138, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.329 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[426f67ef-7ea1-4db7-b223-7c7de84c1907]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734791, 'tstamp': 734791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293139, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d6ff1b5-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734797, 'tstamp': 734797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293139, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.330 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.332 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.334 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.334 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.334 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:02.334 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.805 232432 INFO nova.virt.libvirt.driver [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Deleting instance files /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3_del
Nov 29 08:21:02 compute-2 nova_compute[232428]: 2025-11-29 08:21:02.806 232432 INFO nova.virt.libvirt.driver [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Deletion of /var/lib/nova/instances/d8142255-87a5-4d36-9908-a5456701e3c3_del complete
Nov 29 08:21:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 08:21:03 compute-2 nova_compute[232428]: 2025-11-29 08:21:03.063 232432 INFO nova.compute.manager [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Took 2.11 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:03 compute-2 nova_compute[232428]: 2025-11-29 08:21:03.064 232432 DEBUG oslo.service.loopingcall [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:03 compute-2 nova_compute[232428]: 2025-11-29 08:21:03.064 232432 DEBUG nova.compute.manager [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:03 compute-2 nova_compute[232428]: 2025-11-29 08:21:03.064 232432 DEBUG nova.network.neutron [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:03.326 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:03.326 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:03.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:03 compute-2 ceph-mon[77138]: pgmap v2471: 305 pgs: 305 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.5 MiB/s wr, 250 op/s
Nov 29 08:21:03 compute-2 ceph-mon[77138]: osdmap e325: 3 total, 3 up, 3 in
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.191 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.192 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.193 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.193 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.194 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.194 232432 WARNING nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received unexpected event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with vm_state active and task_state deleting.
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.195 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.195 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.196 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.197 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.197 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.197 232432 WARNING nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received unexpected event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with vm_state active and task_state deleting.
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.198 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.198 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.199 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.199 232432 DEBUG oslo_concurrency.lockutils [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.200 232432 DEBUG nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.200 232432 WARNING nova.compute.manager [req-953873c0-d6bd-433c-aff8-9adedde622b4 req-0bdf7303-dc59-4fc5-a303-5a8ee582b092 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received unexpected event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with vm_state active and task_state deleting.
Nov 29 08:21:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:04.218 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:04.219 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.280 232432 DEBUG nova.network.neutron [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.318 232432 INFO nova.compute.manager [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Took 1.25 seconds to deallocate network for instance.
Nov 29 08:21:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:21:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.481 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.482 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.547 232432 DEBUG nova.compute.manager [req-b8d4234f-1a88-4799-b12e-0c3c023e5cf7 req-6fe96697-44a3-4047-a653-136c1adce148 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-deleted-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:04 compute-2 nova_compute[232428]: 2025-11-29 08:21:04.789 232432 DEBUG oslo_concurrency.processutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:05 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2198768794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.282 232432 DEBUG oslo_concurrency.processutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.289 232432 DEBUG nova.compute.provider_tree [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.430 232432 DEBUG nova.scheduler.client.report [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.590 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.640 232432 INFO nova.scheduler.client.report [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Deleted allocations for instance d8142255-87a5-4d36-9908-a5456701e3c3
Nov 29 08:21:05 compute-2 podman[293164]: 2025-11-29 08:21:05.683145366 +0000 UTC m=+0.081678638 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 08:21:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.754 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [{"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.789 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-6c463a92-8698-4035-b4d0-b1d3db01a43b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.790 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.791 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.791 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.792 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.792 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:21:05 compute-2 nova_compute[232428]: 2025-11-29 08:21:05.812 232432 DEBUG oslo_concurrency.lockutils [None req-e97ff50e-bd88-4860-bfad-66d9382a086f 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:05 compute-2 ceph-mon[77138]: pgmap v2473: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.9 MiB/s wr, 252 op/s
Nov 29 08:21:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2440225623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2198768794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:06.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.363 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.364 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.364 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.364 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.365 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.403 232432 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.404 232432 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.404 232432 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.405 232432 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d8142255-87a5-4d36-9908-a5456701e3c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.405 232432 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] No waiting events found dispatching network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.405 232432 WARNING nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Received unexpected event network-vif-plugged-d2132df3-9087-42d6-83a1-0e253e16ba1f for instance with vm_state deleted and task_state None.
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1956332203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.813 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2457919789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1345593698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1956332203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.925 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.926 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.931 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.932 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.932 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.935 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.936 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:21:06 compute-2 nova_compute[232428]: 2025-11-29 08:21:06.951 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.142 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.143 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3847MB free_disk=20.551525115966797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.143 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.144 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.222 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 6c463a92-8698-4035-b4d0-b1d3db01a43b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.222 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance e1e8c40f-128e-4265-b740-9f793af39b26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.222 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 69aa1e78-8728-455d-9d19-eaa720f597b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.223 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.223 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.293 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1721992972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.755 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.764 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.786 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.811 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:21:07 compute-2 nova_compute[232428]: 2025-11-29 08:21:07.812 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:07 compute-2 ceph-mon[77138]: pgmap v2474: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 989 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 308 op/s
Nov 29 08:21:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1042416663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4212738034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1721992972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.202 232432 DEBUG oslo_concurrency.lockutils [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.203 232432 DEBUG oslo_concurrency.lockutils [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.223 232432 INFO nova.compute.manager [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Detaching volume ca617c97-cbed-42b4-9b83-01f6fe990ce2
Nov 29 08:21:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:08.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.538 232432 INFO nova.virt.block_device [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Attempting to driver detach volume ca617c97-cbed-42b4-9b83-01f6fe990ce2 from mountpoint /dev/vdb
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.548 232432 DEBUG nova.virt.libvirt.driver [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Attempting to detach device vdb from instance e1e8c40f-128e-4265-b740-9f793af39b26 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.549 232432 DEBUG nova.virt.libvirt.guest [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-ca617c97-cbed-42b4-9b83-01f6fe990ce2">
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   </source>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <serial>ca617c97-cbed-42b4-9b83-01f6fe990ce2</serial>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]: </disk>
Nov 29 08:21:08 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.557 232432 INFO nova.virt.libvirt.driver [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance e1e8c40f-128e-4265-b740-9f793af39b26 from the persistent domain config.
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.557 232432 DEBUG nova.virt.libvirt.driver [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e1e8c40f-128e-4265-b740-9f793af39b26 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.558 232432 DEBUG nova.virt.libvirt.guest [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-ca617c97-cbed-42b4-9b83-01f6fe990ce2">
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   </source>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <serial>ca617c97-cbed-42b4-9b83-01f6fe990ce2</serial>
Nov 29 08:21:08 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:21:08 compute-2 nova_compute[232428]: </disk>
Nov 29 08:21:08 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.636 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.637 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.637 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.637 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.637 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.639 232432 INFO nova.compute.manager [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Terminating instance
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.642 232432 DEBUG nova.compute.manager [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:08 compute-2 kernel: tapbe8a2e4d-8e (unregistering): left promiscuous mode
Nov 29 08:21:08 compute-2 NetworkManager[48993]: <info>  [1764404468.6910] device (tapbe8a2e4d-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.693 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764404468.6931925, e1e8c40f-128e-4265-b740-9f793af39b26 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.697 232432 DEBUG nova.virt.libvirt.driver [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e1e8c40f-128e-4265-b740-9f793af39b26 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.700 232432 INFO nova.virt.libvirt.driver [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance e1e8c40f-128e-4265-b740-9f793af39b26 from the live domain config.
Nov 29 08:21:08 compute-2 ovn_controller[134375]: 2025-11-29T08:21:08Z|00672|binding|INFO|Releasing lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 from this chassis (sb_readonly=0)
Nov 29 08:21:08 compute-2 ovn_controller[134375]: 2025-11-29T08:21:08Z|00673|binding|INFO|Setting lport be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 down in Southbound
Nov 29 08:21:08 compute-2 ovn_controller[134375]: 2025-11-29T08:21:08Z|00674|binding|INFO|Removing iface tapbe8a2e4d-8e ovn-installed in OVS
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.707 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:08.714 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:42:23 10.100.0.13'], port_security=['fa:16:3e:75:42:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6c463a92-8698-4035-b4d0-b1d3db01a43b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:08.716 143801 INFO neutron.agent.ovn.metadata.agent [-] Port be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 08:21:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:08.720 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c25940b-e63b-4443-a94b-0216a35e8dc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:21:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:08.722 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4e6098-25f3-4d33-a82a-3f126a4c0752]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:08.723 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace which is not needed anymore
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.742 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:08 compute-2 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 29 08:21:08 compute-2 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Consumed 1.506s CPU time.
Nov 29 08:21:08 compute-2 systemd-machined[194747]: Machine qemu-63-instance-00000082 terminated.
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [NOTICE]   (290719) : haproxy version is 2.8.14-c23fe91
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [NOTICE]   (290719) : path to executable is /usr/sbin/haproxy
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [WARNING]  (290719) : Exiting Master process...
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [WARNING]  (290719) : Exiting Master process...
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [ALERT]    (290719) : Current worker (290737) exited with code 143 (Terminated)
Nov 29 08:21:08 compute-2 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[290713]: [WARNING]  (290719) : All workers exited. Exiting... (0)
Nov 29 08:21:08 compute-2 systemd[1]: libpod-36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a.scope: Deactivated successfully.
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.879 232432 INFO nova.virt.libvirt.driver [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Instance destroyed successfully.
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.880 232432 DEBUG nova.objects.instance [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 6c463a92-8698-4035-b4d0-b1d3db01a43b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:08 compute-2 podman[293257]: 2025-11-29 08:21:08.88039216 +0000 UTC m=+0.051344948 container died 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.883 232432 DEBUG nova.objects.instance [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.903 232432 DEBUG nova.virt.libvirt.vif [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-285202970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-285202970',id=130,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-ri5l0c3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_
min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:39Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=6c463a92-8698-4035-b4d0-b1d3db01a43b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.904 232432 DEBUG nova.network.os_vif_util [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "address": "fa:16:3e:75:42:23", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe8a2e4d-8e", "ovs_interfaceid": "be8a2e4d-8e9b-4eff-a873-d7c8ad350b96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.905 232432 DEBUG nova.network.os_vif_util [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.905 232432 DEBUG os_vif [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:08 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.909 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe8a2e4d-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.913 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:08 compute-2 systemd[1]: var-lib-containers-storage-overlay-1be443fcb271304d36310db07bf41d805f8e4e8a414949879926fb27b3b1f899-merged.mount: Deactivated successfully.
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.917 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.919 232432 INFO os_vif [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:42:23,bridge_name='br-int',has_traffic_filtering=True,id=be8a2e4d-8e9b-4eff-a873-d7c8ad350b96,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe8a2e4d-8e')
Nov 29 08:21:08 compute-2 podman[293257]: 2025-11-29 08:21:08.927244407 +0000 UTC m=+0.098197185 container cleanup 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:21:08 compute-2 nova_compute[232428]: 2025-11-29 08:21:08.946 232432 DEBUG oslo_concurrency.lockutils [None req-9c07d3f3-788a-4bb9-bfb7-8f02f2531661 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:08 compute-2 systemd[1]: libpod-conmon-36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a.scope: Deactivated successfully.
Nov 29 08:21:08 compute-2 podman[293309]: 2025-11-29 08:21:08.993905893 +0000 UTC m=+0.040914751 container remove 36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.000 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9cba95d5-79f0-48a9-bbd9-8a00bc9e4f4e]: (4, ('Sat Nov 29 08:21:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a)\n36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a\nSat Nov 29 08:21:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a)\n36ca1e82f4fd92ad45e9412c0e81bdd42ce4599c4ad9c4d5efa58325bf044d8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.001 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e4874b5b-9fa9-411f-801a-2f63f5679b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.002 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:09 compute-2 kernel: tap3c25940b-e0: left promiscuous mode
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.019 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.022 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cb701270-ab90-4736-a159-4f3895f5f0c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.049 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c491180c-8f83-4b78-9a4b-304096d39555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.050 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12b42385-ea1c-48cc-af8d-e5782b0b1b50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.069 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[066b066c-5b88-46cc-af68-8389ec796f59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734494, 'reachable_time': 18694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293331, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.072 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.073 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5bd6db-a388-4e97-97c7-1962b76aed8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 systemd[1]: run-netns-ovnmeta\x2d3c25940b\x2de63b\x2d4443\x2da94b\x2d0216a35e8dc6.mount: Deactivated successfully.
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.144 232432 INFO nova.virt.libvirt.driver [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Deleting instance files /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b_del
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.145 232432 INFO nova.virt.libvirt.driver [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Deletion of /var/lib/nova/instances/6c463a92-8698-4035-b4d0-b1d3db01a43b_del complete
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.368 232432 INFO nova.compute.manager [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.369 232432 DEBUG oslo.service.loopingcall [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.370 232432 DEBUG nova.compute.manager [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.371 232432 DEBUG nova.network.neutron [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.706 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.707 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.707 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.708 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.708 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.709 232432 INFO nova.compute.manager [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Terminating instance
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.711 232432 DEBUG nova.compute.manager [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:09 compute-2 kernel: tapdc54aa54-23 (unregistering): left promiscuous mode
Nov 29 08:21:09 compute-2 NetworkManager[48993]: <info>  [1764404469.7823] device (tapdc54aa54-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:09 compute-2 ovn_controller[134375]: 2025-11-29T08:21:09Z|00675|binding|INFO|Releasing lport dc54aa54-233d-4026-a2c5-883cfda7d4f2 from this chassis (sb_readonly=0)
Nov 29 08:21:09 compute-2 ovn_controller[134375]: 2025-11-29T08:21:09Z|00676|binding|INFO|Setting lport dc54aa54-233d-4026-a2c5-883cfda7d4f2 down in Southbound
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:09 compute-2 ovn_controller[134375]: 2025-11-29T08:21:09Z|00677|binding|INFO|Removing iface tapdc54aa54-23 ovn-installed in OVS
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.801 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:cd:f5 10.100.0.9'], port_security=['fa:16:3e:2a:cd:f5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1e8c40f-128e-4265-b740-9f793af39b26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1599c60a-4274-48ca-aa11-779b4d5de698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc54aa54-233d-4026-a2c5-883cfda7d4f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.803 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc54aa54-233d-4026-a2c5-883cfda7d4f2 in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.806 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.807 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[98a503cd-792b-4497-b2a8-aa02a64c2916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:09.808 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace which is not needed anymore
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.811 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:09 compute-2 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 29 08:21:09 compute-2 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Consumed 18.820s CPU time.
Nov 29 08:21:09 compute-2 systemd-machined[194747]: Machine qemu-64-instance-00000088 terminated.
Nov 29 08:21:09 compute-2 ceph-mon[77138]: pgmap v2475: 305 pgs: 305 active+clean; 959 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.2 MiB/s wr, 333 op/s
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.953 232432 INFO nova.virt.libvirt.driver [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Instance destroyed successfully.
Nov 29 08:21:09 compute-2 nova_compute[232428]: 2025-11-29 08:21:09.955 232432 DEBUG nova.objects.instance [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'resources' on Instance uuid e1e8c40f-128e-4265-b740-9f793af39b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.033 232432 DEBUG nova.virt.libvirt.vif [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1938739361',display_name='tempest-AttachVolumeNegativeTest-server-1938739361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1938739361',id=136,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMIsWR6SsoLJ2ABS7KFOaCG42ihmfzH2foyr0TWRY5eiVTGmcsBZ3D3XsZFXLAc7zssT4SRJpC0TgFOf+org2lBQ3V5jmwHujOdbfp9WbvsZK55zCOcLHee2v91OcfA2sg==',key_name='tempest-keypair-10635770',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-gyy4ct4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=e1e8c40f-128e-4265-b740-9f793af39b26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.035 232432 DEBUG nova.network.os_vif_util [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "address": "fa:16:3e:2a:cd:f5", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc54aa54-23", "ovs_interfaceid": "dc54aa54-233d-4026-a2c5-883cfda7d4f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.036 232432 DEBUG nova.network.os_vif_util [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.036 232432 DEBUG os_vif [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.038 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.038 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc54aa54-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.040 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.046 232432 INFO os_vif [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:cd:f5,bridge_name='br-int',has_traffic_filtering=True,id=dc54aa54-233d-4026-a2c5-883cfda7d4f2,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc54aa54-23')
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [NOTICE]   (290973) : haproxy version is 2.8.14-c23fe91
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [NOTICE]   (290973) : path to executable is /usr/sbin/haproxy
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [WARNING]  (290973) : Exiting Master process...
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [WARNING]  (290973) : Exiting Master process...
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [ALERT]    (290973) : Current worker (290975) exited with code 143 (Terminated)
Nov 29 08:21:10 compute-2 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[290969]: [WARNING]  (290973) : All workers exited. Exiting... (0)
Nov 29 08:21:10 compute-2 systemd[1]: libpod-03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31.scope: Deactivated successfully.
Nov 29 08:21:10 compute-2 podman[293355]: 2025-11-29 08:21:10.162519715 +0000 UTC m=+0.207084433 container died 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:21:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:10.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:10 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31-userdata-shm.mount: Deactivated successfully.
Nov 29 08:21:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-0ac062a0491c7d064d3c0719f615c57a1988d50f1b42fa3672bb165992cc02ae-merged.mount: Deactivated successfully.
Nov 29 08:21:10 compute-2 podman[293355]: 2025-11-29 08:21:10.379151417 +0000 UTC m=+0.423716105 container cleanup 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:21:10 compute-2 systemd[1]: libpod-conmon-03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31.scope: Deactivated successfully.
Nov 29 08:21:10 compute-2 podman[293414]: 2025-11-29 08:21:10.543572794 +0000 UTC m=+0.138358903 container remove 03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.550 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[29e9c9ea-787d-4c02-9296-39c00b1ed21a]: (4, ('Sat Nov 29 08:21:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31)\n03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31\nSat Nov 29 08:21:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31)\n03461ede096e99234737461a2cbe35c1b1dffea306fe20c3b1488f452b71cf31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.551 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5af277-6862-432a-adda-35115642383d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.553 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-2 kernel: tap3d6ff1b5-e0: left promiscuous mode
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.557 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.560 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[96894e8c-fe37-4136-a677-8d0057167993]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.575 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.577 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[76daa04b-d115-4c8a-b70e-2672aa02491e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.578 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4fab6810-42bf-4f04-a00f-d80c6d57a0c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.595 232432 DEBUG nova.network.neutron [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.595 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6dae0fb3-33d7-4b6f-8ad4-005bfdf73809]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734764, 'reachable_time': 42004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293429, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.598 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:21:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:10.598 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[cf34d8e3-437b-4970-92c3-46f62bd7f72b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:10 compute-2 systemd[1]: run-netns-ovnmeta\x2d3d6ff1b5\x2de67b\x2d4a23\x2d9145\x2d8139b35e63e8.mount: Deactivated successfully.
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.621 232432 INFO nova.compute.manager [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Took 1.25 seconds to deallocate network for instance.
Nov 29 08:21:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.810 232432 INFO nova.compute.manager [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Took 0.19 seconds to detach 1 volumes for instance.
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.893 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.894 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.940 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.941 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.941 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.942 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.942 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.942 232432 WARNING nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-unplugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state deleted and task_state None.
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.942 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.943 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.943 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.943 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.943 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] No waiting events found dispatching network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.944 232432 WARNING nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received unexpected event network-vif-plugged-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 for instance with vm_state deleted and task_state None.
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.944 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-unplugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.944 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.944 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.944 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.945 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] No waiting events found dispatching network-vif-unplugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.945 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-unplugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.945 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Received event network-vif-deleted-be8a2e4d-8e9b-4eff-a873-d7c8ad350b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.945 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.946 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.946 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.946 232432 DEBUG oslo_concurrency.lockutils [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.946 232432 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] No waiting events found dispatching network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.947 232432 WARNING nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received unexpected event network-vif-plugged-dc54aa54-233d-4026-a2c5-883cfda7d4f2 for instance with vm_state active and task_state deleting.
Nov 29 08:21:10 compute-2 nova_compute[232428]: 2025-11-29 08:21:10.998 232432 DEBUG oslo_concurrency.processutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/343410961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.425 232432 DEBUG oslo_concurrency.processutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.435 232432 DEBUG nova.compute.provider_tree [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.461 232432 DEBUG nova.scheduler.client.report [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.493 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.521 232432 INFO nova.scheduler.client.report [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Deleted allocations for instance 6c463a92-8698-4035-b4d0-b1d3db01a43b
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.596 232432 DEBUG oslo_concurrency.lockutils [None req-7a45d7df-f966-4d77-b00e-6fb910bce05b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "6c463a92-8698-4035-b4d0-b1d3db01a43b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.608 232432 INFO nova.virt.libvirt.driver [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Deleting instance files /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26_del
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.610 232432 INFO nova.virt.libvirt.driver [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Deletion of /var/lib/nova/instances/e1e8c40f-128e-4265-b740-9f793af39b26_del complete
Nov 29 08:21:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2189757330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3916480110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2937818125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.676 232432 INFO nova.compute.manager [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Took 1.96 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.677 232432 DEBUG oslo.service.loopingcall [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.677 232432 DEBUG nova.compute.manager [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.677 232432 DEBUG nova.network.neutron [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.784 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.785 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.785 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.785 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.786 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.788 232432 INFO nova.compute.manager [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Terminating instance
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.790 232432 DEBUG nova.compute.manager [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:11 compute-2 kernel: tap7e9e5bb3-53 (unregistering): left promiscuous mode
Nov 29 08:21:11 compute-2 NetworkManager[48993]: <info>  [1764404471.9117] device (tap7e9e5bb3-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:11 compute-2 ovn_controller[134375]: 2025-11-29T08:21:11Z|00678|binding|INFO|Releasing lport 7e9e5bb3-536d-4044-ba89-87caa7779a1a from this chassis (sb_readonly=0)
Nov 29 08:21:11 compute-2 ovn_controller[134375]: 2025-11-29T08:21:11Z|00679|binding|INFO|Setting lport 7e9e5bb3-536d-4044-ba89-87caa7779a1a down in Southbound
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.926 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:11 compute-2 ovn_controller[134375]: 2025-11-29T08:21:11Z|00680|binding|INFO|Removing iface tap7e9e5bb3-53 ovn-installed in OVS
Nov 29 08:21:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:11.932 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:c8:af 10.100.0.9'], port_security=['fa:16:3e:ea:c8:af 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '69aa1e78-8728-455d-9d19-eaa720f597b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=7e9e5bb3-536d-4044-ba89-87caa7779a1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:11.933 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 7e9e5bb3-536d-4044-ba89-87caa7779a1a in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis
Nov 29 08:21:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:11.935 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97e6ef02-6896-45a2-9eb9-28926c1a7400, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:21:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:11.937 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[59f7a5c2-2cf4-47c4-a1f1-6d1f11cfc818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:11.938 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace which is not needed anymore
Nov 29 08:21:11 compute-2 nova_compute[232428]: 2025-11-29 08:21:11.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:11 compute-2 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 29 08:21:11 compute-2 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000090.scope: Consumed 15.419s CPU time.
Nov 29 08:21:11 compute-2 systemd-machined[194747]: Machine qemu-67-instance-00000090 terminated.
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.038 232432 INFO nova.virt.libvirt.driver [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Instance destroyed successfully.
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.039 232432 DEBUG nova.objects.instance [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid 69aa1e78-8728-455d-9d19-eaa720f597b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.057 232432 DEBUG nova.virt.libvirt.vif [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=144,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-rwn56saz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:45Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=69aa1e78-8728-455d-9d19-eaa720f597b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.058 232432 DEBUG nova.network.os_vif_util [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "address": "fa:16:3e:ea:c8:af", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e9e5bb3-53", "ovs_interfaceid": "7e9e5bb3-536d-4044-ba89-87caa7779a1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.058 232432 DEBUG nova.network.os_vif_util [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.059 232432 DEBUG os_vif [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.061 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.061 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e9e5bb3-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.066 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.069 232432 INFO os_vif [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:c8:af,bridge_name='br-int',has_traffic_filtering=True,id=7e9e5bb3-536d-4044-ba89-87caa7779a1a,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e9e5bb3-53')
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [NOTICE]   (292969) : haproxy version is 2.8.14-c23fe91
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [NOTICE]   (292969) : path to executable is /usr/sbin/haproxy
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [WARNING]  (292969) : Exiting Master process...
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [WARNING]  (292969) : Exiting Master process...
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [ALERT]    (292969) : Current worker (292971) exited with code 143 (Terminated)
Nov 29 08:21:12 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[292964]: [WARNING]  (292969) : All workers exited. Exiting... (0)
Nov 29 08:21:12 compute-2 systemd[1]: libpod-36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655.scope: Deactivated successfully.
Nov 29 08:21:12 compute-2 podman[293486]: 2025-11-29 08:21:12.124704329 +0000 UTC m=+0.048173930 container died 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:21:12 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655-userdata-shm.mount: Deactivated successfully.
Nov 29 08:21:12 compute-2 systemd[1]: var-lib-containers-storage-overlay-8c2e32acddab416c1d9d3698866abd32a8b0e041a2cd1657428747233da8699f-merged.mount: Deactivated successfully.
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.160 232432 DEBUG nova.compute.manager [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-unplugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.161 232432 DEBUG oslo_concurrency.lockutils [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.161 232432 DEBUG oslo_concurrency.lockutils [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.162 232432 DEBUG oslo_concurrency.lockutils [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.162 232432 DEBUG nova.compute.manager [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] No waiting events found dispatching network-vif-unplugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.162 232432 DEBUG nova.compute.manager [req-81225901-88a5-4ccb-9fe9-b5bda2aee76a req-06d8e832-6879-4884-bdcb-1e1084d0405e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-unplugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:21:12 compute-2 podman[293486]: 2025-11-29 08:21:12.16690143 +0000 UTC m=+0.090371041 container cleanup 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:21:12 compute-2 systemd[1]: libpod-conmon-36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655.scope: Deactivated successfully.
Nov 29 08:21:12 compute-2 podman[293536]: 2025-11-29 08:21:12.232586635 +0000 UTC m=+0.038635200 container remove 36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.242 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[074ae080-27b9-4cbc-ad1d-21eead400020]: (4, ('Sat Nov 29 08:21:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655)\n36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655\nSat Nov 29 08:21:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655)\n36d4786c7ecd06aec70be93f36d891b9dac2aa483c5103670db15d3f865fe655\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.243 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1daa3f9e-c58f-4fba-afac-54c9ad102105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.244 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:12 compute-2 kernel: tap97e6ef02-60: left promiscuous mode
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.263 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce462af-f4f4-4af7-a75a-9911cfddd550]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.279 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b84b22ae-1581-48fe-bac0-8e327ff7caeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.281 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a00b51cb-92f5-40bf-a771-26829b529e76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.301 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09553dc3-4fac-4169-a161-f79687dfacc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740884, 'reachable_time': 20380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293551, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.303 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:21:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:12.303 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c45181c1-a5fc-487a-b9eb-1e9be64c1389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:12 compute-2 systemd[1]: run-netns-ovnmeta\x2d97e6ef02\x2d6896\x2d45a2\x2d9eb9\x2d28926c1a7400.mount: Deactivated successfully.
Nov 29 08:21:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.524 232432 DEBUG nova.network.neutron [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.542 232432 INFO nova.compute.manager [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Took 0.87 seconds to deallocate network for instance.
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.591 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.591 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:12 compute-2 nova_compute[232428]: 2025-11-29 08:21:12.664 232432 DEBUG oslo_concurrency.processutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:12 compute-2 ceph-mon[77138]: pgmap v2476: 305 pgs: 305 active+clean; 928 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 274 KiB/s wr, 275 op/s
Nov 29 08:21:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/343410961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.042 232432 DEBUG nova.compute.manager [req-4f750059-3d93-4ef5-af3d-b845f3419e3c req-a190402d-ef46-49c6-87f3-a99afaffcf7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Received event network-vif-deleted-dc54aa54-233d-4026-a2c5-883cfda7d4f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.059 232432 INFO nova.virt.libvirt.driver [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Deleting instance files /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1_del
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.060 232432 INFO nova.virt.libvirt.driver [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Deletion of /var/lib/nova/instances/69aa1e78-8728-455d-9d19-eaa720f597b1_del complete
Nov 29 08:21:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/209034067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.139 232432 DEBUG oslo_concurrency.processutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.147 232432 DEBUG nova.compute.provider_tree [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.149 232432 INFO nova.compute.manager [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Took 1.36 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.150 232432 DEBUG oslo.service.loopingcall [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.150 232432 DEBUG nova.compute.manager [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.150 232432 DEBUG nova.network.neutron [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.228 232432 DEBUG nova.scheduler.client.report [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.578 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.677 232432 INFO nova.scheduler.client.report [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Deleted allocations for instance e1e8c40f-128e-4265-b740-9f793af39b26
Nov 29 08:21:13 compute-2 ceph-mon[77138]: pgmap v2477: 305 pgs: 305 active+clean; 928 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 274 KiB/s wr, 275 op/s
Nov 29 08:21:13 compute-2 ceph-mon[77138]: osdmap e326: 3 total, 3 up, 3 in
Nov 29 08:21:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/209034067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 08:21:13 compute-2 nova_compute[232428]: 2025-11-29 08:21:13.745 232432 DEBUG oslo_concurrency.lockutils [None req-f1c73eee-09d4-4d2f-8fa2-f6c4e64863e1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "e1e8c40f-128e-4265-b740-9f793af39b26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:13.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.115 232432 DEBUG nova.network.neutron [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.133 232432 INFO nova.compute.manager [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Took 0.98 seconds to deallocate network for instance.
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.198 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.199 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:14.221 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.292 232432 DEBUG oslo_concurrency.processutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.342 232432 DEBUG nova.compute.manager [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.343 232432 DEBUG oslo_concurrency.lockutils [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.344 232432 DEBUG oslo_concurrency.lockutils [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.344 232432 DEBUG oslo_concurrency.lockutils [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.345 232432 DEBUG nova.compute.manager [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] No waiting events found dispatching network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.346 232432 WARNING nova.compute.manager [req-3bd3e103-120e-4df8-940b-ae10a2786736 req-c9b42942-1e7b-4e7a-95e1-aec4ec92d141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received unexpected event network-vif-plugged-7e9e5bb3-536d-4044-ba89-87caa7779a1a for instance with vm_state deleted and task_state None.
Nov 29 08:21:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:14.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3693221066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.758 232432 DEBUG oslo_concurrency.processutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.765 232432 DEBUG nova.compute.provider_tree [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.785 232432 DEBUG nova.scheduler.client.report [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.811 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.843 232432 INFO nova.scheduler.client.report [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance 69aa1e78-8728-455d-9d19-eaa720f597b1
Nov 29 08:21:14 compute-2 nova_compute[232428]: 2025-11-29 08:21:14.943 232432 DEBUG oslo_concurrency.lockutils [None req-9affec10-8228-419b-8701-32256ba5e518 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "69aa1e78-8728-455d-9d19-eaa720f597b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:15 compute-2 nova_compute[232428]: 2025-11-29 08:21:15.186 232432 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Received event network-vif-deleted-7e9e5bb3-536d-4044-ba89-87caa7779a1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:15 compute-2 ceph-mon[77138]: osdmap e327: 3 total, 3 up, 3 in
Nov 29 08:21:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:15.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:15 compute-2 sudo[293599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:15 compute-2 sudo[293599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:15 compute-2 sudo[293599]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:15 compute-2 sudo[293624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:15 compute-2 sudo[293624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:15 compute-2 sudo[293624]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:16 compute-2 nova_compute[232428]: 2025-11-29 08:21:16.534 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:16 compute-2 ceph-mon[77138]: pgmap v2480: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 878 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 31 KiB/s wr, 137 op/s
Nov 29 08:21:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3693221066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2091616258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:16 compute-2 nova_compute[232428]: 2025-11-29 08:21:16.895 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404461.8938713, d8142255-87a5-4d36-9908-a5456701e3c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:16 compute-2 nova_compute[232428]: 2025-11-29 08:21:16.896 232432 INFO nova.compute.manager [-] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] VM Stopped (Lifecycle Event)
Nov 29 08:21:17 compute-2 nova_compute[232428]: 2025-11-29 08:21:17.064 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:17 compute-2 nova_compute[232428]: 2025-11-29 08:21:17.175 232432 DEBUG nova.compute.manager [None req-bed1f92b-7857-4f26-b699-0230127e61c3 - - - - - -] [instance: d8142255-87a5-4d36-9908-a5456701e3c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:17.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:17 compute-2 ceph-mon[77138]: pgmap v2481: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 60 KiB/s wr, 293 op/s
Nov 29 08:21:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:18.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:18 compute-2 podman[293650]: 2025-11-29 08:21:18.721368576 +0000 UTC m=+0.121117162 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.211 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.213 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.332 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.592 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.593 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.601 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.601 232432 INFO nova.compute.claims [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:21:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:19 compute-2 nova_compute[232428]: 2025-11-29 08:21:19.877 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2420035978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.348 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.355 232432 DEBUG nova.compute.provider_tree [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:20.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.389 232432 DEBUG nova.scheduler.client.report [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:20 compute-2 ceph-mon[77138]: pgmap v2482: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 33 KiB/s wr, 274 op/s
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.548 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.549 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.647 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.648 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:21:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.757 232432 INFO nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.789 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.877 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.878 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.879 232432 INFO nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Creating image(s)
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.908 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.940 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.972 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:20 compute-2 nova_compute[232428]: 2025-11-29 08:21:20.975 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.051 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.052 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.053 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.053 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.081 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.084 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 79ee67d4-e2fb-43f9-8489-337e78260613_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:21 compute-2 nova_compute[232428]: 2025-11-29 08:21:21.696 232432 DEBUG nova.policy [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:21:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:21.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:22 compute-2 nova_compute[232428]: 2025-11-29 08:21:22.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:22.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2420035978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:23 compute-2 ceph-mon[77138]: pgmap v2483: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 35 KiB/s wr, 281 op/s
Nov 29 08:21:23 compute-2 nova_compute[232428]: 2025-11-29 08:21:23.748 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 79ee67d4-e2fb-43f9-8489-337e78260613_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:23.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 08:21:23 compute-2 nova_compute[232428]: 2025-11-29 08:21:23.887 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404468.8746774, 6c463a92-8698-4035-b4d0-b1d3db01a43b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:23 compute-2 nova_compute[232428]: 2025-11-29 08:21:23.888 232432 INFO nova.compute.manager [-] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] VM Stopped (Lifecycle Event)
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.001 232432 DEBUG nova.compute.manager [None req-5ca55ee2-af45-4021-97a6-e3d981b14532 - - - - - -] [instance: 6c463a92-8698-4035-b4d0-b1d3db01a43b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.014 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.296 232432 DEBUG nova.objects.instance [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid 79ee67d4-e2fb-43f9-8489-337e78260613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.334 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.335 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Ensure instance console log exists: /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.335 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.336 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.336 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.649 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Successfully created port: c66c1fa7-438d-47f4-bbad-2e22ef5153af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:21:24 compute-2 ceph-mon[77138]: pgmap v2484: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 29 KiB/s wr, 230 op/s
Nov 29 08:21:24 compute-2 ceph-mon[77138]: osdmap e328: 3 total, 3 up, 3 in
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.953 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404469.9514277, e1e8c40f-128e-4265-b740-9f793af39b26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.954 232432 INFO nova.compute.manager [-] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] VM Stopped (Lifecycle Event)
Nov 29 08:21:24 compute-2 nova_compute[232428]: 2025-11-29 08:21:24.979 232432 DEBUG nova.compute.manager [None req-65d54788-377a-4ac4-87b5-ef867ab6d374 - - - - - -] [instance: e1e8c40f-128e-4265-b740-9f793af39b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.387 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:25 compute-2 ceph-mon[77138]: pgmap v2486: 305 pgs: 305 active+clean; 632 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 60 KiB/s wr, 194 op/s
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.980 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Successfully updated port: c66c1fa7-438d-47f4-bbad-2e22ef5153af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.994 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.994 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:25 compute-2 nova_compute[232428]: 2025-11-29 08:21:25.995 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:21:26 compute-2 nova_compute[232428]: 2025-11-29 08:21:26.139 232432 DEBUG nova.compute.manager [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-changed-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:26 compute-2 nova_compute[232428]: 2025-11-29 08:21:26.139 232432 DEBUG nova.compute.manager [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Refreshing instance network info cache due to event network-changed-c66c1fa7-438d-47f4-bbad-2e22ef5153af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:21:26 compute-2 nova_compute[232428]: 2025-11-29 08:21:26.139 232432 DEBUG oslo_concurrency.lockutils [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:26 compute-2 nova_compute[232428]: 2025-11-29 08:21:26.254 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:21:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:26 compute-2 nova_compute[232428]: 2025-11-29 08:21:26.540 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/808761927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.035 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404472.0341537, 69aa1e78-8728-455d-9d19-eaa720f597b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.036 232432 INFO nova.compute.manager [-] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] VM Stopped (Lifecycle Event)
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.060 232432 DEBUG nova.compute.manager [None req-bfe11d52-adf6-424f-9ea9-83626c260386 - - - - - -] [instance: 69aa1e78-8728-455d-9d19-eaa720f597b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.068 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.615 232432 DEBUG nova.network.neutron [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Updating instance_info_cache with network_info: [{"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.638 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.638 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Instance network_info: |[{"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.638 232432 DEBUG oslo_concurrency.lockutils [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.639 232432 DEBUG nova.network.neutron [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Refreshing network info cache for port c66c1fa7-438d-47f4-bbad-2e22ef5153af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.643 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Start _get_guest_xml network_info=[{"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.647 232432 WARNING nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.654 232432 DEBUG nova.virt.libvirt.host [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.654 232432 DEBUG nova.virt.libvirt.host [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.661 232432 DEBUG nova.virt.libvirt.host [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.661 232432 DEBUG nova.virt.libvirt.host [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.663 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.663 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.663 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.664 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.664 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.664 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.664 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.664 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.665 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.665 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.665 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.665 232432 DEBUG nova.virt.hardware [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:21:27 compute-2 nova_compute[232428]: 2025-11-29 08:21:27.671 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:21:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560883826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:21:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560883826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1296045122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:28 compute-2 ceph-mon[77138]: pgmap v2487: 305 pgs: 305 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 29 08:21:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/560883826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/560883826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.152 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.187 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.191 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:28.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2132491287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.614 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.618 232432 DEBUG nova.virt.libvirt.vif [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-589754215',display_name='tempest-ServersTestJSON-server-589754215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-589754215',id=147,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-7ef3g7v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:20Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=79ee67d4-e2fb-43f9-8489-337e78260613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.619 232432 DEBUG nova.network.os_vif_util [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.620 232432 DEBUG nova.network.os_vif_util [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.623 232432 DEBUG nova.objects.instance [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79ee67d4-e2fb-43f9-8489-337e78260613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.642 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <uuid>79ee67d4-e2fb-43f9-8489-337e78260613</uuid>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <name>instance-00000093</name>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestJSON-server-589754215</nova:name>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:21:27</nova:creationTime>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <nova:port uuid="c66c1fa7-438d-47f4-bbad-2e22ef5153af">
Nov 29 08:21:28 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <system>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="serial">79ee67d4-e2fb-43f9-8489-337e78260613</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="uuid">79ee67d4-e2fb-43f9-8489-337e78260613</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </system>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <os>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </os>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <features>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </features>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/79ee67d4-e2fb-43f9-8489-337e78260613_disk">
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/79ee67d4-e2fb-43f9-8489-337e78260613_disk.config">
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:28 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:73:0b:d4"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <target dev="tapc66c1fa7-43"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/console.log" append="off"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <video>
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </video>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:21:28 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:21:28 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:21:28 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:21:28 compute-2 nova_compute[232428]: </domain>
Nov 29 08:21:28 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.644 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Preparing to wait for external event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.644 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.644 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.645 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.646 232432 DEBUG nova.virt.libvirt.vif [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-589754215',display_name='tempest-ServersTestJSON-server-589754215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-589754215',id=147,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-7ef3g7v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:20Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=79ee67d4-e2fb-43f9-8489-337e78260613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.646 232432 DEBUG nova.network.os_vif_util [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.647 232432 DEBUG nova.network.os_vif_util [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.647 232432 DEBUG os_vif [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.648 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.649 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.650 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.654 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.654 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66c1fa7-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.655 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66c1fa7-43, col_values=(('external_ids', {'iface-id': 'c66c1fa7-438d-47f4-bbad-2e22ef5153af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:0b:d4', 'vm-uuid': '79ee67d4-e2fb-43f9-8489-337e78260613'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:28 compute-2 NetworkManager[48993]: <info>  [1764404488.6580] manager: (tapc66c1fa7-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.663 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.665 232432 INFO os_vif [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43')
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.728 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.728 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.729 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:73:0b:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.729 232432 INFO nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Using config drive
Nov 29 08:21:28 compute-2 nova_compute[232428]: 2025-11-29 08:21:28.761 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.328 232432 INFO nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Creating config drive at /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.335 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparb7rdz3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.479 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparb7rdz3" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.516 232432 DEBUG nova.storage.rbd_utils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 79ee67d4-e2fb-43f9-8489-337e78260613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.521 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config 79ee67d4-e2fb-43f9-8489-337e78260613_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1296045122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2132491287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:21:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:29.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.923 232432 DEBUG oslo_concurrency.processutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config 79ee67d4-e2fb-43f9-8489-337e78260613_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:29 compute-2 nova_compute[232428]: 2025-11-29 08:21:29.924 232432 INFO nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Deleting local config drive /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613/disk.config because it was imported into RBD.
Nov 29 08:21:29 compute-2 virtqemud[231977]: End of file while reading data: Input/output error
Nov 29 08:21:29 compute-2 kernel: tapc66c1fa7-43: entered promiscuous mode
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.0033] manager: (tapc66c1fa7-43): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 08:21:30 compute-2 ovn_controller[134375]: 2025-11-29T08:21:30Z|00681|binding|INFO|Claiming lport c66c1fa7-438d-47f4-bbad-2e22ef5153af for this chassis.
Nov 29 08:21:30 compute-2 ovn_controller[134375]: 2025-11-29T08:21:30Z|00682|binding|INFO|c66c1fa7-438d-47f4-bbad-2e22ef5153af: Claiming fa:16:3e:73:0b:d4 10.100.0.11
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.011 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.020 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:0b:d4 10.100.0.11'], port_security=['fa:16:3e:73:0b:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '79ee67d4-e2fb-43f9-8489-337e78260613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c66c1fa7-438d-47f4-bbad-2e22ef5153af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.022 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c66c1fa7-438d-47f4-bbad-2e22ef5153af in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.025 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.044 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ede3061-33f5-48e0-9118-192ad7c9f526]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.046 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97e6ef02-61 in ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.049 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97e6ef02-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.049 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[17fbd8a2-48be-40ff-acdc-8be8c8cfffa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.051 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[81718b0f-7319-42ab-adff-ff57c6bfccc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 systemd-machined[194747]: New machine qemu-68-instance-00000093.
Nov 29 08:21:30 compute-2 systemd-udevd[294007]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.076 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d16b9fbe-0d5e-4ec2-983b-96ef421513a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.0870] device (tapc66c1fa7-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.085 232432 DEBUG nova.network.neutron [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Updated VIF entry in instance network info cache for port c66c1fa7-438d-47f4-bbad-2e22ef5153af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.086 232432 DEBUG nova.network.neutron [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Updating instance_info_cache with network_info: [{"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.0891] device (tapc66c1fa7-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:21:30 compute-2 systemd[1]: Started Virtual Machine qemu-68-instance-00000093.
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.103 232432 DEBUG oslo_concurrency.lockutils [req-6fc4a890-7857-4005-8321-33bf7906a3ac req-9fe05cba-5558-423e-b687-b8a3301d3a5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-79ee67d4-e2fb-43f9-8489-337e78260613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.110 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.115 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[afae2635-615c-40f1-aebe-8a2b7362f7d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_controller[134375]: 2025-11-29T08:21:30Z|00683|binding|INFO|Setting lport c66c1fa7-438d-47f4-bbad-2e22ef5153af ovn-installed in OVS
Nov 29 08:21:30 compute-2 ovn_controller[134375]: 2025-11-29T08:21:30Z|00684|binding|INFO|Setting lport c66c1fa7-438d-47f4-bbad-2e22ef5153af up in Southbound
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.169 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3054e9-150c-416c-825b-883af096ce58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.177 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[232b426c-f38f-48e8-abc1-428b5b92c325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.1794] manager: (tap97e6ef02-60): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.232 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf4af9d-4f69-47c2-b031-26edf5d3288d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.236 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c9797560-b796-446e-8286-4ca87c31f8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.2648] device (tap97e6ef02-60): carrier: link connected
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.273 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0f78cc7d-5bdb-4b90-8c5b-2627cfb3e5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.304 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[17569040-02ce-46f0-a2ac-923f3ca95311]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745653, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294039, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.331 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9da21768-1ac6-4095-b17b-47cdab319fa3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:de28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745653, 'tstamp': 745653}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294040, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.364 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee28861f-433f-42f9-ad6c-7cadebcc2c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745653, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294041, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.379 232432 DEBUG nova.compute.manager [req-cd4fa952-fd51-4498-a8e4-cab796d2aabb req-ee962adb-f086-4308-ad04-e18bcd7fff0d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.379 232432 DEBUG oslo_concurrency.lockutils [req-cd4fa952-fd51-4498-a8e4-cab796d2aabb req-ee962adb-f086-4308-ad04-e18bcd7fff0d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.379 232432 DEBUG oslo_concurrency.lockutils [req-cd4fa952-fd51-4498-a8e4-cab796d2aabb req-ee962adb-f086-4308-ad04-e18bcd7fff0d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.380 232432 DEBUG oslo_concurrency.lockutils [req-cd4fa952-fd51-4498-a8e4-cab796d2aabb req-ee962adb-f086-4308-ad04-e18bcd7fff0d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.380 232432 DEBUG nova.compute.manager [req-cd4fa952-fd51-4498-a8e4-cab796d2aabb req-ee962adb-f086-4308-ad04-e18bcd7fff0d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Processing event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:21:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:30.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.422 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9a3131-ba79-43b0-954b-a14f66be123d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.530 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36846f48-f4f2-4794-a4e0-b5ea4cc074d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.532 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.533 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.533 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:30 compute-2 NetworkManager[48993]: <info>  [1764404490.5369] manager: (tap97e6ef02-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 kernel: tap97e6ef02-60: entered promiscuous mode
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.546 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.547 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_controller[134375]: 2025-11-29T08:21:30Z|00685|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.567 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.570 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.572 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[31276921-1167-4b9f-8759-25ce15e8fdad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.574 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:21:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:30.575 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'env', 'PROCESS_TAG=haproxy-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97e6ef02-6896-45a2-9eb9-28926c1a7400.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:21:30 compute-2 ceph-mon[77138]: pgmap v2488: 305 pgs: 305 active+clean; 690 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Nov 29 08:21:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.874 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.875 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404490.8748796, 79ee67d4-e2fb-43f9-8489-337e78260613 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.875 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] VM Started (Lifecycle Event)
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.880 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.886 232432 INFO nova.virt.libvirt.driver [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Instance spawned successfully.
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.887 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.900 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:30 compute-2 nova_compute[232428]: 2025-11-29 08:21:30.907 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:31 compute-2 podman[294114]: 2025-11-29 08:21:30.954762653 +0000 UTC m=+0.025727106 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.092 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.093 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.093 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.094 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.095 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.095 232432 DEBUG nova.virt.libvirt.driver [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.104 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.105 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404490.8791175, 79ee67d4-e2fb-43f9-8489-337e78260613 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.105 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] VM Paused (Lifecycle Event)
Nov 29 08:21:31 compute-2 podman[294114]: 2025-11-29 08:21:31.107161754 +0000 UTC m=+0.178126157 container create b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.135 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:31 compute-2 systemd[1]: Started libpod-conmon-b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d.scope.
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.151 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404490.8794255, 79ee67d4-e2fb-43f9-8489-337e78260613 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.151 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] VM Resumed (Lifecycle Event)
Nov 29 08:21:31 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:21:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c737b3f7718e75ddc34c0dfb7d6d5e9f818b7e69b99366ccccc68cf78675922/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.166 232432 INFO nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Took 10.29 seconds to spawn the instance on the hypervisor.
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.167 232432 DEBUG nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.172 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:31 compute-2 podman[294114]: 2025-11-29 08:21:31.178528738 +0000 UTC m=+0.249493131 container init b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.182 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:31 compute-2 podman[294114]: 2025-11-29 08:21:31.186829037 +0000 UTC m=+0.257793440 container start b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:21:31 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [NOTICE]   (294143) : New worker (294149) forked
Nov 29 08:21:31 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [NOTICE]   (294143) : Loading success.
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.225 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:31 compute-2 podman[294127]: 2025-11-29 08:21:31.236554614 +0000 UTC m=+0.084375243 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.245 232432 INFO nova.compute.manager [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Took 11.67 seconds to build instance.
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.261 232432 DEBUG oslo_concurrency.lockutils [None req-2cf624b9-d808-4ee4-93ff-e19a3f9404d7 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:31 compute-2 nova_compute[232428]: 2025-11-29 08:21:31.542 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:31.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:31 compute-2 ceph-mon[77138]: pgmap v2489: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 6.8 MiB/s wr, 145 op/s
Nov 29 08:21:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3401190389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1516825404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.3 total, 600.0 interval
                                           Cumulative writes: 10K writes, 52K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1773 writes, 8633 keys, 1773 commit groups, 1.0 writes per commit group, ingest: 16.81 MB, 0.03 MB/s
                                           Interval WAL: 1773 writes, 1773 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     81.1      0.76              0.27        30    0.025       0      0       0.0       0.0
                                             L6      1/0    9.35 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    120.0    100.8      2.71              1.00        29    0.093    185K    16K       0.0       0.0
                                            Sum      1/0    9.35 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     93.8     96.5      3.47              1.27        59    0.059    185K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7    105.7    107.1      0.68              0.32        12    0.056     49K   3092       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    120.0    100.8      2.71              1.00        29    0.093    185K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     91.4      0.67              0.27        29    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.060, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 3.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 39.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000327 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2137,37.73 MB,12.4117%) FilterBlock(59,524.55 KB,0.168504%) IndexBlock(59,867.66 KB,0.278724%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:21:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:32.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.505 232432 DEBUG nova.compute.manager [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.506 232432 DEBUG oslo_concurrency.lockutils [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.506 232432 DEBUG oslo_concurrency.lockutils [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.507 232432 DEBUG oslo_concurrency.lockutils [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.507 232432 DEBUG nova.compute.manager [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] No waiting events found dispatching network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:32 compute-2 nova_compute[232428]: 2025-11-29 08:21:32.508 232432 WARNING nova.compute.manager [req-4fa9123b-2bfa-4a11-85a1-ade55be469c3 req-ad832555-7334-4bd4-a2f9-1b206f0658c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received unexpected event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af for instance with vm_state active and task_state None.
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.632 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.632 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.632 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.633 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.633 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.634 232432 INFO nova.compute.manager [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Terminating instance
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.635 232432 DEBUG nova.compute.manager [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 kernel: tapc66c1fa7-43 (unregistering): left promiscuous mode
Nov 29 08:21:33 compute-2 NetworkManager[48993]: <info>  [1764404493.6751] device (tapc66c1fa7-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.686 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 ovn_controller[134375]: 2025-11-29T08:21:33Z|00686|binding|INFO|Releasing lport c66c1fa7-438d-47f4-bbad-2e22ef5153af from this chassis (sb_readonly=0)
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.687 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 ovn_controller[134375]: 2025-11-29T08:21:33Z|00687|binding|INFO|Setting lport c66c1fa7-438d-47f4-bbad-2e22ef5153af down in Southbound
Nov 29 08:21:33 compute-2 ovn_controller[134375]: 2025-11-29T08:21:33Z|00688|binding|INFO|Removing iface tapc66c1fa7-43 ovn-installed in OVS
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.696 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:0b:d4 10.100.0.11'], port_security=['fa:16:3e:73:0b:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '79ee67d4-e2fb-43f9-8489-337e78260613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c66c1fa7-438d-47f4-bbad-2e22ef5153af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.698 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c66c1fa7-438d-47f4-bbad-2e22ef5153af in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.700 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97e6ef02-6896-45a2-9eb9-28926c1a7400, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.701 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[315cf37d-922d-4ea0-a16e-3de6a87f6d87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.701 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace which is not needed anymore
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 29 08:21:33 compute-2 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000093.scope: Consumed 3.558s CPU time.
Nov 29 08:21:33 compute-2 systemd-machined[194747]: Machine qemu-68-instance-00000093 terminated.
Nov 29 08:21:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [NOTICE]   (294143) : haproxy version is 2.8.14-c23fe91
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [NOTICE]   (294143) : path to executable is /usr/sbin/haproxy
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [WARNING]  (294143) : Exiting Master process...
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [WARNING]  (294143) : Exiting Master process...
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [ALERT]    (294143) : Current worker (294149) exited with code 143 (Terminated)
Nov 29 08:21:33 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[294131]: [WARNING]  (294143) : All workers exited. Exiting... (0)
Nov 29 08:21:33 compute-2 systemd[1]: libpod-b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d.scope: Deactivated successfully.
Nov 29 08:21:33 compute-2 podman[294190]: 2025-11-29 08:21:33.847116863 +0000 UTC m=+0.050074649 container died b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.875 232432 INFO nova.virt.libvirt.driver [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Instance destroyed successfully.
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.875 232432 DEBUG nova.objects.instance [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid 79ee67d4-e2fb-43f9-8489-337e78260613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:21:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-2c737b3f7718e75ddc34c0dfb7d6d5e9f818b7e69b99366ccccc68cf78675922-merged.mount: Deactivated successfully.
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.889 232432 DEBUG nova.virt.libvirt.vif [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-589754215',display_name='tempest-ServersTestJSON-server-589754215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-589754215',id=147,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-7ef3g7v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:31Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=79ee67d4-e2fb-43f9-8489-337e78260613,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.890 232432 DEBUG nova.network.os_vif_util [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "address": "fa:16:3e:73:0b:d4", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c1fa7-43", "ovs_interfaceid": "c66c1fa7-438d-47f4-bbad-2e22ef5153af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.891 232432 DEBUG nova.network.os_vif_util [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.892 232432 DEBUG os_vif [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.894 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.894 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66c1fa7-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:33 compute-2 podman[294190]: 2025-11-29 08:21:33.898006766 +0000 UTC m=+0.100964542 container cleanup b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.900 232432 INFO os_vif [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:d4,bridge_name='br-int',has_traffic_filtering=True,id=c66c1fa7-438d-47f4-bbad-2e22ef5153af,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c1fa7-43')
Nov 29 08:21:33 compute-2 systemd[1]: libpod-conmon-b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d.scope: Deactivated successfully.
Nov 29 08:21:33 compute-2 podman[294244]: 2025-11-29 08:21:33.983564354 +0000 UTC m=+0.052033020 container remove b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.991 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[40db80f2-defe-462c-a9cd-89e715651314]: (4, ('Sat Nov 29 08:21:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d)\nb463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d\nSat Nov 29 08:21:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (b463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d)\nb463817276b0cfbbc93844948974c0c4d2318d3d6bbd39d4635227d6c369527d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.994 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc75bae3-d987-4be8-ba33-f4611380ee03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:33.995 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:33 compute-2 kernel: tap97e6ef02-60: left promiscuous mode
Nov 29 08:21:33 compute-2 nova_compute[232428]: 2025-11-29 08:21:33.997 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:34 compute-2 ceph-mon[77138]: pgmap v2490: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 6.8 MiB/s wr, 145 op/s
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.019 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c27015-b13d-45d4-83c9-0334ea4b5ea2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.031 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0f5898-6b77-4215-9eca-fa9ec2a4edff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.032 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad56490-77e6-4064-9547-5025257f90fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.055 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6323dd-98b1-4f72-b39f-1e3a1b33d290]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745642, 'reachable_time': 29759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294264, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:34 compute-2 systemd[1]: run-netns-ovnmeta\x2d97e6ef02\x2d6896\x2d45a2\x2d9eb9\x2d28926c1a7400.mount: Deactivated successfully.
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.059 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:21:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:34.060 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3264d0a6-8f8f-4515-a886-e8aac166a928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.355 232432 INFO nova.virt.libvirt.driver [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Deleting instance files /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613_del
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.356 232432 INFO nova.virt.libvirt.driver [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Deletion of /var/lib/nova/instances/79ee67d4-e2fb-43f9-8489-337e78260613_del complete
Nov 29 08:21:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:34.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.447 232432 INFO nova.compute.manager [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.448 232432 DEBUG oslo.service.loopingcall [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.448 232432 DEBUG nova.compute.manager [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.448 232432 DEBUG nova.network.neutron [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.610 232432 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-unplugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.611 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.611 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.611 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.611 232432 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] No waiting events found dispatching network-vif-unplugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.612 232432 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-unplugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.612 232432 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.612 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.612 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.612 232432 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.613 232432 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] No waiting events found dispatching network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:34 compute-2 nova_compute[232428]: 2025-11-29 08:21:34.613 232432 WARNING nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received unexpected event network-vif-plugged-c66c1fa7-438d-47f4-bbad-2e22ef5153af for instance with vm_state active and task_state deleting.
Nov 29 08:21:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.103 232432 DEBUG nova.network.neutron [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.119 232432 INFO nova.compute.manager [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Took 0.67 seconds to deallocate network for instance.
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.166 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.167 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.235 232432 DEBUG oslo_concurrency.processutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/541896551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.748 232432 DEBUG oslo_concurrency.processutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.757 232432 DEBUG nova.compute.provider_tree [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.774 232432 DEBUG nova.scheduler.client.report [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.797 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:21:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.837 232432 INFO nova.scheduler.client.report [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance 79ee67d4-e2fb-43f9-8489-337e78260613
Nov 29 08:21:35 compute-2 nova_compute[232428]: 2025-11-29 08:21:35.911 232432 DEBUG oslo_concurrency.lockutils [None req-26e63263-aa4b-4f7a-a265-40b7019cd460 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "79ee67d4-e2fb-43f9-8489-337e78260613" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 08:21:36 compute-2 ceph-mon[77138]: pgmap v2491: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.4 MiB/s wr, 170 op/s
Nov 29 08:21:36 compute-2 ceph-mon[77138]: osdmap e329: 3 total, 3 up, 3 in
Nov 29 08:21:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/541896551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:36 compute-2 sudo[294289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:36 compute-2 sudo[294289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:36 compute-2 sudo[294289]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:36 compute-2 podman[294313]: 2025-11-29 08:21:36.179198724 +0000 UTC m=+0.074603317 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:21:36 compute-2 sudo[294320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:36 compute-2 sudo[294320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:36 compute-2 sudo[294320]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:36.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:36 compute-2 nova_compute[232428]: 2025-11-29 08:21:36.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:36 compute-2 nova_compute[232428]: 2025-11-29 08:21:36.740 232432 DEBUG nova.compute.manager [req-0246d0e3-d262-4676-8d9e-1ae5ca5dd635 req-cad1e87e-3540-4d2d-bf7f-56cbf65eb6da 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Received event network-vif-deleted-c66c1fa7-438d-47f4-bbad-2e22ef5153af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:37 compute-2 ceph-mon[77138]: osdmap e330: 3 total, 3 up, 3 in
Nov 29 08:21:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 08:21:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:38 compute-2 ceph-mon[77138]: pgmap v2494: 305 pgs: 305 active+clean; 721 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 269 op/s
Nov 29 08:21:38 compute-2 ceph-mon[77138]: osdmap e331: 3 total, 3 up, 3 in
Nov 29 08:21:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:38.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:38 compute-2 nova_compute[232428]: 2025-11-29 08:21:38.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.476 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.476 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.497 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.570 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.571 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.581 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.581 232432 INFO nova.compute.claims [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.690 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:39.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.919 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.919 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:39 compute-2 nova_compute[232428]: 2025-11-29 08:21:39.942 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.017 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/145518366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:40 compute-2 ceph-mon[77138]: pgmap v2496: 305 pgs: 305 active+clean; 720 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Nov 29 08:21:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/145518366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.118 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.125 232432 DEBUG nova.compute.provider_tree [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.138 232432 DEBUG nova.scheduler.client.report [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.160 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.161 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.164 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.170 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.171 232432 INFO nova.compute.claims [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.230 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.231 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.253 232432 INFO nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.273 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.319 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:40 compute-2 sudo[294383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:40 compute-2 sudo[294383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:40 compute-2 sudo[294383]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.450 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.454 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.455 232432 INFO nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Creating image(s)
Nov 29 08:21:40 compute-2 sudo[294408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:40 compute-2 sudo[294408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:40 compute-2 sudo[294408]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.494 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.532 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:40 compute-2 sudo[294468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:40 compute-2 sudo[294468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:40 compute-2 sudo[294468]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.573 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.579 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:40 compute-2 sudo[294528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:21:40 compute-2 sudo[294528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.666 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.667 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.668 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.668 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.695 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.699 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.733 232432 DEBUG nova.policy [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:21:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3205484860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.792 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.798 232432 DEBUG nova.compute.provider_tree [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.819 232432 DEBUG nova.scheduler.client.report [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.857 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.858 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.926 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.927 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.948 232432 INFO nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:21:40 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.967 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:40.998 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.086 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:21:41 compute-2 sudo[294528]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.123 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.125 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.126 232432 INFO nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Creating image(s)
Nov 29 08:21:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3205484860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.159 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.190 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.218 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.222 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.261 232432 DEBUG nova.policy [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27fbef868fd944adb0787ac691f465f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4574185f65454582b56aa1dfb65251ba', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.323 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.324 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.325 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.325 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.356 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.360 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.402 232432 DEBUG nova.objects.instance [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.442 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.443 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Ensure instance console log exists: /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.443 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.444 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.444 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.547 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.621 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Successfully created port: 11a5e08c-f08a-467a-9562-ffb888b8adef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.693 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.756 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] resizing rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:21:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:41.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.863 232432 DEBUG nova.objects.instance [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'migration_context' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.885 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.885 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Ensure instance console log exists: /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.886 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.886 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:41 compute-2 nova_compute[232428]: 2025-11-29 08:21:41.886 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.017 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Successfully created port: d3f7613c-9105-484e-ae74-77f76d32b85b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:21:42 compute-2 ceph-mon[77138]: pgmap v2497: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.8 MiB/s wr, 459 op/s
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.361 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Successfully updated port: 11a5e08c-f08a-467a-9562-ffb888b8adef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.378 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.378 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.378 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:21:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.472 232432 DEBUG nova.compute.manager [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-changed-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.472 232432 DEBUG nova.compute.manager [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Refreshing instance network info cache due to event network-changed-11a5e08c-f08a-467a-9562-ffb888b8adef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.472 232432 DEBUG oslo_concurrency.lockutils [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.658 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:21:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835654211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.955 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Successfully updated port: d3f7613c-9105-484e-ae74-77f76d32b85b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.972 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.973 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:42 compute-2 nova_compute[232428]: 2025-11-29 08:21:42.973 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.060 232432 DEBUG nova.compute.manager [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.060 232432 DEBUG nova.compute.manager [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing instance network info cache due to event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.060 232432 DEBUG oslo_concurrency.lockutils [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.138 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:21:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:21:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:21:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2835654211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.456 232432 DEBUG nova.network.neutron [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Updating instance_info_cache with network_info: [{"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.476 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.476 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance network_info: |[{"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.476 232432 DEBUG oslo_concurrency.lockutils [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.477 232432 DEBUG nova.network.neutron [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Refreshing network info cache for port 11a5e08c-f08a-467a-9562-ffb888b8adef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.480 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Start _get_guest_xml network_info=[{"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.485 232432 WARNING nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.489 232432 DEBUG nova.virt.libvirt.host [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.490 232432 DEBUG nova.virt.libvirt.host [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.496 232432 DEBUG nova.virt.libvirt.host [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.496 232432 DEBUG nova.virt.libvirt.host [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.497 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.498 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.498 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.499 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.499 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.499 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.499 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.500 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.500 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.500 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.501 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.501 232432 DEBUG nova.virt.hardware [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.504 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 08:21:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.900 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.945 232432 DEBUG nova.network.neutron [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/677636791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.970 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.970 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance network_info: |[{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.971 232432 DEBUG oslo_concurrency.lockutils [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.971 232432 DEBUG nova.network.neutron [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.974 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start _get_guest_xml network_info=[{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.979 232432 WARNING nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.984 232432 DEBUG nova.virt.libvirt.host [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.985 232432 DEBUG nova.virt.libvirt.host [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.987 232432 DEBUG nova.virt.libvirt.host [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.988 232432 DEBUG nova.virt.libvirt.host [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.989 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.989 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.989 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.990 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.990 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.990 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.990 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.990 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.991 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.991 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.991 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.991 232432 DEBUG nova.virt.hardware [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:21:43 compute-2 nova_compute[232428]: 2025-11-29 08:21:43.995 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.032 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.070 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.076 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:44 compute-2 ceph-mon[77138]: pgmap v2498: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.3 MiB/s wr, 299 op/s
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:44 compute-2 ceph-mon[77138]: osdmap e332: 3 total, 3 up, 3 in
Nov 29 08:21:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/677636791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3349693915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.470 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2425033151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.508 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.513 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.550 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.553 232432 DEBUG nova.virt.libvirt.vif [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-555245383',display_name='tempest-ServersTestJSON-server-555245383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-555245383',id=149,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-6esrfovb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:40Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.553 232432 DEBUG nova.network.os_vif_util [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.554 232432 DEBUG nova.network.os_vif_util [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.556 232432 DEBUG nova.objects.instance [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.576 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <uuid>dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9</uuid>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <name>instance-00000095</name>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersTestJSON-server-555245383</nova:name>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:21:43</nova:creationTime>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <nova:port uuid="11a5e08c-f08a-467a-9562-ffb888b8adef">
Nov 29 08:21:44 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <system>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="serial">dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="uuid">dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </system>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <os>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </os>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <features>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </features>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk">
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config">
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:44 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:90:95:a1"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <target dev="tap11a5e08c-f0"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/console.log" append="off"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <video>
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </video>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:21:44 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:21:44 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:21:44 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:21:44 compute-2 nova_compute[232428]: </domain>
Nov 29 08:21:44 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.577 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Preparing to wait for external event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.578 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.578 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.579 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.580 232432 DEBUG nova.virt.libvirt.vif [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-555245383',display_name='tempest-ServersTestJSON-server-555245383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-555245383',id=149,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-6esrfovb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:40Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.580 232432 DEBUG nova.network.os_vif_util [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.581 232432 DEBUG nova.network.os_vif_util [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.581 232432 DEBUG os_vif [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.582 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.582 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.583 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.587 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11a5e08c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.587 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11a5e08c-f0, col_values=(('external_ids', {'iface-id': '11a5e08c-f08a-467a-9562-ffb888b8adef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:95:a1', 'vm-uuid': 'dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:44 compute-2 NetworkManager[48993]: <info>  [1764404504.5901] manager: (tap11a5e08c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.596 232432 INFO os_vif [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0')
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.663 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.664 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.665 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:90:95:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.666 232432 INFO nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Using config drive
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.711 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.844 232432 DEBUG nova.network.neutron [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Updated VIF entry in instance network info cache for port 11a5e08c-f08a-467a-9562-ffb888b8adef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.844 232432 DEBUG nova.network.neutron [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Updating instance_info_cache with network_info: [{"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:44 compute-2 nova_compute[232428]: 2025-11-29 08:21:44.881 232432 DEBUG oslo_concurrency.lockutils [req-d4959e33-6cdc-4531-9989-e019d70e1a9b req-621f2284-fd3d-47c3-81a1-ffc23d904e33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:21:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/634989070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.055 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.057 232432 DEBUG nova.virt.libvirt.vif [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1815513079',display_name='tempest-ServerRescueTestJSONUnderV235-server-1815513079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1815513079',id=150,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4574185f65454582b56aa1dfb65251ba',ramdisk_id='',reservation_id='r-yk8v0jhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1838033223',owner_user_nam
e='tempest-ServerRescueTestJSONUnderV235-1838033223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:41Z,user_data=None,user_id='27fbef868fd944adb0787ac691f465f5',uuid=6211bb9c-7da9-4ff8-8b1b-b47c237cf720,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.058 232432 DEBUG nova.network.os_vif_util [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converting VIF {"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.059 232432 DEBUG nova.network.os_vif_util [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.061 232432 DEBUG nova.objects.instance [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.153 232432 INFO nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Creating config drive at /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.158 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4mb_mrqt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3349693915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2425033151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/634989070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.259 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <uuid>6211bb9c-7da9-4ff8-8b1b-b47c237cf720</uuid>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <name>instance-00000096</name>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1815513079</nova:name>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:21:43</nova:creationTime>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:user uuid="27fbef868fd944adb0787ac691f465f5">tempest-ServerRescueTestJSONUnderV235-1838033223-project-member</nova:user>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:project uuid="4574185f65454582b56aa1dfb65251ba">tempest-ServerRescueTestJSONUnderV235-1838033223</nova:project>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <nova:port uuid="d3f7613c-9105-484e-ae74-77f76d32b85b">
Nov 29 08:21:45 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <system>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="serial">6211bb9c-7da9-4ff8-8b1b-b47c237cf720</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="uuid">6211bb9c-7da9-4ff8-8b1b-b47c237cf720</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </system>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <os>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </os>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <features>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </features>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk">
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config">
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:21:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:cb:56:33"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <target dev="tapd3f7613c-91"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/console.log" append="off"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <video>
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </video>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:21:45 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:21:45 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:21:45 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:21:45 compute-2 nova_compute[232428]: </domain>
Nov 29 08:21:45 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.261 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Preparing to wait for external event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.262 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.262 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.262 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.263 232432 DEBUG nova.virt.libvirt.vif [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1815513079',display_name='tempest-ServerRescueTestJSONUnderV235-server-1815513079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1815513079',id=150,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4574185f65454582b56aa1dfb65251ba',ramdisk_id='',reservation_id='r-yk8v0jhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1838033223',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1838033223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:41Z,user_data=None,user_id='27fbef868fd944adb0787ac691f465f5',uuid=6211bb9c-7da9-4ff8-8b1b-b47c237cf720,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.263 232432 DEBUG nova.network.os_vif_util [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converting VIF {"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.264 232432 DEBUG nova.network.os_vif_util [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.265 232432 DEBUG os_vif [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.266 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.266 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.269 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.269 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3f7613c-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.270 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3f7613c-91, col_values=(('external_ids', {'iface-id': 'd3f7613c-9105-484e-ae74-77f76d32b85b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:56:33', 'vm-uuid': '6211bb9c-7da9-4ff8-8b1b-b47c237cf720'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.271 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-2 NetworkManager[48993]: <info>  [1764404505.2722] manager: (tapd3f7613c-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.275 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.281 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.282 232432 INFO os_vif [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91')
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.284 232432 DEBUG nova.network.neutron [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updated VIF entry in instance network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.285 232432 DEBUG nova.network.neutron [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.300 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4mb_mrqt" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.484 232432 DEBUG nova.storage.rbd_utils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.490 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:45 compute-2 nova_compute[232428]: 2025-11-29 08:21:45.535 232432 DEBUG oslo_concurrency.lockutils [req-6b30de72-eff0-4058-929d-81b9e4db0e0d req-c0524558-c85d-4493-a8bb-f3c8db17b150 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:21:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:45.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.111 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.112 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.112 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No VIF found with MAC fa:16:3e:cb:56:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.114 232432 INFO nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Using config drive
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.160 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:46.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:46 compute-2 ceph-mon[77138]: pgmap v2500: 305 pgs: 305 active+clean; 814 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 216 op/s
Nov 29 08:21:46 compute-2 nova_compute[232428]: 2025-11-29 08:21:46.551 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:47 compute-2 ceph-mon[77138]: pgmap v2501: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.5 MiB/s wr, 257 op/s
Nov 29 08:21:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:47.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.313 232432 DEBUG oslo_concurrency.processutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.313 232432 INFO nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Deleting local config drive /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9/disk.config because it was imported into RBD.
Nov 29 08:21:48 compute-2 kernel: tap11a5e08c-f0: entered promiscuous mode
Nov 29 08:21:48 compute-2 NetworkManager[48993]: <info>  [1764404508.3809] manager: (tap11a5e08c-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 08:21:48 compute-2 ovn_controller[134375]: 2025-11-29T08:21:48Z|00689|binding|INFO|Claiming lport 11a5e08c-f08a-467a-9562-ffb888b8adef for this chassis.
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.387 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:48 compute-2 ovn_controller[134375]: 2025-11-29T08:21:48Z|00690|binding|INFO|11a5e08c-f08a-467a-9562-ffb888b8adef: Claiming fa:16:3e:90:95:a1 10.100.0.11
Nov 29 08:21:48 compute-2 ovn_controller[134375]: 2025-11-29T08:21:48Z|00691|binding|INFO|Setting lport 11a5e08c-f08a-467a-9562-ffb888b8adef ovn-installed in OVS
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.412 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.422 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:48 compute-2 systemd-udevd[295089]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:21:48 compute-2 NetworkManager[48993]: <info>  [1764404508.4482] device (tap11a5e08c-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:21:48 compute-2 systemd-machined[194747]: New machine qemu-69-instance-00000095.
Nov 29 08:21:48 compute-2 NetworkManager[48993]: <info>  [1764404508.4489] device (tap11a5e08c-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:21:48 compute-2 systemd[1]: Started Virtual Machine qemu-69-instance-00000095.
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.665 232432 INFO nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Creating config drive at /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.680 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl47s2j63 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:48 compute-2 ovn_controller[134375]: 2025-11-29T08:21:48Z|00692|binding|INFO|Setting lport 11a5e08c-f08a-467a-9562-ffb888b8adef up in Southbound
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.729 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:95:a1 10.100.0.11'], port_security=['fa:16:3e:90:95:a1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=11a5e08c-f08a-467a-9562-ffb888b8adef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.732 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 11a5e08c-f08a-467a-9562-ffb888b8adef in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.735 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.755 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[69576d8f-4d2d-472d-aef4-81ace8c5510f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.757 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97e6ef02-61 in ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.761 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97e6ef02-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.762 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9a6f42-cfaa-4603-b635-824275e17a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.763 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ce8ad-5509-4916-a920-5804d4321455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.789 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c09b0240-0886-4beb-abd0-2b5dd667a362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.817 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9230e88c-f54a-4be8-bd34-d977a360d9e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 nova_compute[232428]: 2025-11-29 08:21:48.844 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl47s2j63" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.881 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[92e51ea9-d5fc-49fb-95cc-d64446d58bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 NetworkManager[48993]: <info>  [1764404508.8927] manager: (tap97e6ef02-60): new Veth device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8074bf13-8d31-4376-b960-2a140939803e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 systemd-udevd[295095]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.946 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[07979aa5-2c5a-4218-940a-706923774a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:48.952 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[94e6c60b-a735-4005-b7a7-368f7ff0dd50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 NetworkManager[48993]: <info>  [1764404509.0000] device (tap97e6ef02-60): carrier: link connected
Nov 29 08:21:49 compute-2 podman[295125]: 2025-11-29 08:21:48.996982963 +0000 UTC m=+0.165525603 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.004 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[df070269-ead0-451c-9613-d095b2914b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.032 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b82dff6f-b050-4a03-8797-9b3f3f059641]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747526, 'reachable_time': 19809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295184, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.058 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[659481d1-0633-47d8-a21b-016d773c2c56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:de28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747526, 'tstamp': 747526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295186, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.088 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5b4eca-b422-4789-ad7b-ceab26c9a789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747526, 'reachable_time': 19809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295187, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.139 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7a457da4-7189-4972-941b-878c662d3a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.249 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f09483-a07a-48f9-bb92-58f73cb9949b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.252 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.252 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.253 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:49 compute-2 kernel: tap97e6ef02-60: entered promiscuous mode
Nov 29 08:21:49 compute-2 NetworkManager[48993]: <info>  [1764404509.2573] manager: (tap97e6ef02-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.265 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:49 compute-2 ovn_controller[134375]: 2025-11-29T08:21:49Z|00693|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=1)
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.310 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.312 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[30f9f204-46d5-4c6b-bc84-43ee528f999a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.313 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:21:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:49.314 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'env', 'PROCESS_TAG=haproxy-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97e6ef02-6896-45a2-9eb9-28926c1a7400.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:21:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:49.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:49 compute-2 podman[295230]: 2025-11-29 08:21:49.740047323 +0000 UTC m=+0.032303002 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.054 232432 DEBUG nova.storage.rbd_utils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.059 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.110 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.117 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404493.872247, 79ee67d4-e2fb-43f9-8489-337e78260613 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.118 232432 INFO nova.compute.manager [-] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] VM Stopped (Lifecycle Event)
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.123 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.273 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:50 compute-2 nova_compute[232428]: 2025-11-29 08:21:50.660 232432 DEBUG nova.compute.manager [None req-febc63e6-4a7d-4469-a6cb-19bba62d735a - - - - - -] [instance: 79ee67d4-e2fb-43f9-8489-337e78260613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:51 compute-2 podman[295230]: 2025-11-29 08:21:51.886759282 +0000 UTC m=+2.179014861 container create b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.933 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404511.932882, dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.933 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] VM Started (Lifecycle Event)
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.965 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.972 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404511.9345562, dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:51 compute-2 nova_compute[232428]: 2025-11-29 08:21:51.973 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] VM Paused (Lifecycle Event)
Nov 29 08:21:52 compute-2 ceph-mon[77138]: pgmap v2502: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.0 MiB/s wr, 227 op/s
Nov 29 08:21:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:52 compute-2 nova_compute[232428]: 2025-11-29 08:21:52.446 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:52 compute-2 nova_compute[232428]: 2025-11-29 08:21:52.451 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:52 compute-2 systemd[1]: Started libpod-conmon-b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44.scope.
Nov 29 08:21:52 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:21:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db09fc8d770a120e55e6024d881e5f96a7568f7730bc20351160074ad8be3350/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:53 compute-2 podman[295230]: 2025-11-29 08:21:53.203949875 +0000 UTC m=+3.496205524 container init b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:21:53 compute-2 podman[295230]: 2025-11-29 08:21:53.216644052 +0000 UTC m=+3.508899661 container start b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:21:53 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [NOTICE]   (295300) : New worker (295302) forked
Nov 29 08:21:53 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [NOTICE]   (295300) : Loading success.
Nov 29 08:21:53 compute-2 nova_compute[232428]: 2025-11-29 08:21:53.566 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:53 compute-2 ceph-mon[77138]: pgmap v2503: 305 pgs: 305 active+clean; 899 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 6.4 MiB/s wr, 106 op/s
Nov 29 08:21:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:21:53 compute-2 sudo[295311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:53 compute-2 sudo[295311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-2 sudo[295311]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-2 sudo[295337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:21:53 compute-2 sudo[295337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-2 sudo[295337]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.372 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.373 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.376 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.375 232432 DEBUG oslo_concurrency.processutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.376 232432 INFO nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Deleting local config drive /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config because it was imported into RBD.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.409 232432 DEBUG nova.compute.manager [req-9a548684-5ba6-4cc5-a5cd-4cc0c99cd09c req-84ab7454-53bf-4a6f-b690-3b5d467284d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.410 232432 DEBUG oslo_concurrency.lockutils [req-9a548684-5ba6-4cc5-a5cd-4cc0c99cd09c req-84ab7454-53bf-4a6f-b690-3b5d467284d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.410 232432 DEBUG oslo_concurrency.lockutils [req-9a548684-5ba6-4cc5-a5cd-4cc0c99cd09c req-84ab7454-53bf-4a6f-b690-3b5d467284d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.411 232432 DEBUG oslo_concurrency.lockutils [req-9a548684-5ba6-4cc5-a5cd-4cc0c99cd09c req-84ab7454-53bf-4a6f-b690-3b5d467284d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.411 232432 DEBUG nova.compute.manager [req-9a548684-5ba6-4cc5-a5cd-4cc0c99cd09c req-84ab7454-53bf-4a6f-b690-3b5d467284d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Processing event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.412 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.418 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404514.4187167, dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.419 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] VM Resumed (Lifecycle Event)
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.423 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:21:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:21:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.429 232432 INFO nova.virt.libvirt.driver [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance spawned successfully.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.429 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:21:54 compute-2 kernel: tapd3f7613c-91: entered promiscuous mode
Nov 29 08:21:54 compute-2 NetworkManager[48993]: <info>  [1764404514.4567] manager: (tapd3f7613c-91): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 08:21:54 compute-2 systemd-udevd[295290]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:21:54 compute-2 ovn_controller[134375]: 2025-11-29T08:21:54Z|00694|binding|INFO|Claiming lport d3f7613c-9105-484e-ae74-77f76d32b85b for this chassis.
Nov 29 08:21:54 compute-2 ovn_controller[134375]: 2025-11-29T08:21:54Z|00695|binding|INFO|d3f7613c-9105-484e-ae74-77f76d32b85b: Claiming fa:16:3e:cb:56:33 10.100.0.3
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:54 compute-2 NetworkManager[48993]: <info>  [1764404514.4690] device (tapd3f7613c-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:21:54 compute-2 NetworkManager[48993]: <info>  [1764404514.4719] device (tapd3f7613c-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.492 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:56:33 10.100.0.3'], port_security=['fa:16:3e:cb:56:33 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6211bb9c-7da9-4ff8-8b1b-b47c237cf720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a9406b-a63a-4191-b15b-28d172b27b82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4574185f65454582b56aa1dfb65251ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3509650f-8fc6-4e5a-a78c-3e75e9be7304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70f4cb21-f00c-43e5-959d-eb5b0d04bd3d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d3f7613c-9105-484e-ae74-77f76d32b85b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.493 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d3f7613c-9105-484e-ae74-77f76d32b85b in datapath a5a9406b-a63a-4191-b15b-28d172b27b82 bound to our chassis
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.494 143801 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5a9406b-a63a-4191-b15b-28d172b27b82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 08:21:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:21:54.495 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef06d158-3592-4d8e-b6be-14525249227e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:54 compute-2 systemd-machined[194747]: New machine qemu-70-instance-00000096.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.516 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.523 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.526 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.526 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.527 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.527 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.527 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.528 232432 DEBUG nova.virt.libvirt.driver [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:54 compute-2 systemd[1]: Started Virtual Machine qemu-70-instance-00000096.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:54 compute-2 ovn_controller[134375]: 2025-11-29T08:21:54Z|00696|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b ovn-installed in OVS
Nov 29 08:21:54 compute-2 ovn_controller[134375]: 2025-11-29T08:21:54Z|00697|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b up in Southbound
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.612 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:54 compute-2 ceph-mon[77138]: pgmap v2504: 305 pgs: 305 active+clean; 899 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 6.4 MiB/s wr, 106 op/s
Nov 29 08:21:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.686 232432 INFO nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Took 14.23 seconds to spawn the instance on the hypervisor.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.686 232432 DEBUG nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.805 232432 INFO nova.compute.manager [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Took 15.26 seconds to build instance.
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.852 232432 DEBUG oslo_concurrency.lockutils [None req-dad30bc3-b721-495a-b28b-c78425210e59 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.931 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404514.9312944, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.931 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Started (Lifecycle Event)
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.971 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.976 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404514.9320738, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:54 compute-2 nova_compute[232428]: 2025-11-29 08:21:54.976 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Paused (Lifecycle Event)
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.027 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.030 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.069 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.276 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.876 232432 DEBUG nova.compute.manager [req-dbe6dcb7-4629-449b-a839-2b47ab0e6c16 req-9d83d753-dfd5-4598-bb70-1ce430069035 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.877 232432 DEBUG oslo_concurrency.lockutils [req-dbe6dcb7-4629-449b-a839-2b47ab0e6c16 req-9d83d753-dfd5-4598-bb70-1ce430069035 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.877 232432 DEBUG oslo_concurrency.lockutils [req-dbe6dcb7-4629-449b-a839-2b47ab0e6c16 req-9d83d753-dfd5-4598-bb70-1ce430069035 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.878 232432 DEBUG oslo_concurrency.lockutils [req-dbe6dcb7-4629-449b-a839-2b47ab0e6c16 req-9d83d753-dfd5-4598-bb70-1ce430069035 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.878 232432 DEBUG nova.compute.manager [req-dbe6dcb7-4629-449b-a839-2b47ab0e6c16 req-9d83d753-dfd5-4598-bb70-1ce430069035 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Processing event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.879 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.884 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404515.883812, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.885 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Resumed (Lifecycle Event)
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.889 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.894 232432 INFO nova.virt.libvirt.driver [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance spawned successfully.
Nov 29 08:21:55 compute-2 nova_compute[232428]: 2025-11-29 08:21:55.895 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:21:56 compute-2 sudo[295431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:56 compute-2 sudo[295431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-2 sudo[295431]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:56 compute-2 sudo[295456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:56 compute-2 sudo[295456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-2 sudo[295456]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.419 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.420 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.420 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.421 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.421 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.422 232432 DEBUG nova.virt.libvirt.driver [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.427 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.430 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:21:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.557 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:56 compute-2 nova_compute[232428]: 2025-11-29 08:21:56.717 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.056 232432 DEBUG nova.compute.manager [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.056 232432 DEBUG oslo_concurrency.lockutils [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.056 232432 DEBUG oslo_concurrency.lockutils [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.056 232432 DEBUG oslo_concurrency.lockutils [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.056 232432 DEBUG nova.compute.manager [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] No waiting events found dispatching network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.057 232432 WARNING nova.compute.manager [req-9b1ed57b-0fae-4172-83a2-c229f737f70b req-b254ed63-13b4-4a3f-bb58-a03326a489b7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received unexpected event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef for instance with vm_state active and task_state None.
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.544 232432 INFO nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Took 16.42 seconds to spawn the instance on the hypervisor.
Nov 29 08:21:57 compute-2 nova_compute[232428]: 2025-11-29 08:21:57.544 232432 DEBUG nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:58 compute-2 ceph-mon[77138]: pgmap v2505: 305 pgs: 305 active+clean; 900 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.1 MiB/s wr, 99 op/s
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.211 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.213 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.238 232432 INFO nova.compute.manager [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Took 18.24 seconds to build instance.
Nov 29 08:21:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:21:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.507 232432 DEBUG nova.compute.manager [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.507 232432 DEBUG oslo_concurrency.lockutils [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.508 232432 DEBUG oslo_concurrency.lockutils [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.508 232432 DEBUG oslo_concurrency.lockutils [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.508 232432 DEBUG nova.compute.manager [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.508 232432 WARNING nova.compute.manager [req-59fd2bf9-6dcd-47c2-9f1a-f8755b7d9f2c req-df1469a2-45a4-4211-a274-1821868068e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state active and task_state None.
Nov 29 08:21:58 compute-2 nova_compute[232428]: 2025-11-29 08:21:58.735 232432 DEBUG oslo_concurrency.lockutils [None req-dabe0ec8-8312-4b27-85d4-b0a5bcf60d65 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:59 compute-2 nova_compute[232428]: 2025-11-29 08:21:59.634 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:21:59 compute-2 nova_compute[232428]: 2025-11-29 08:21:59.634 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:59 compute-2 nova_compute[232428]: 2025-11-29 08:21:59.634 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:59 compute-2 ceph-mon[77138]: pgmap v2506: 305 pgs: 305 active+clean; 901 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 4.8 MiB/s wr, 118 op/s
Nov 29 08:21:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:21:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:21:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.123 232432 DEBUG oslo_concurrency.lockutils [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.124 232432 DEBUG oslo_concurrency.lockutils [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.125 232432 DEBUG nova.compute.manager [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.128 232432 DEBUG nova.compute.manager [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.131 232432 DEBUG nova.objects.instance [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'flavor' on Instance uuid dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.173 232432 DEBUG nova.virt.libvirt.driver [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.278 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.327 232432 INFO nova.compute.manager [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Rescuing
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.328 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.328 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:00 compute-2 nova_compute[232428]: 2025-11-29 08:22:00.329 232432 DEBUG nova.network.neutron [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:22:00 compute-2 sshd-session[295484]: Invalid user sol from 45.148.10.240 port 47984
Nov 29 08:22:00 compute-2 sshd-session[295484]: Connection closed by invalid user sol 45.148.10.240 port 47984 [preauth]
Nov 29 08:22:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:00 compute-2 ceph-mon[77138]: pgmap v2507: 305 pgs: 305 active+clean; 905 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 29 08:22:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:01 compute-2 nova_compute[232428]: 2025-11-29 08:22:01.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:01 compute-2 podman[295486]: 2025-11-29 08:22:01.656980342 +0000 UTC m=+0.054309820 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 08:22:01 compute-2 ceph-mon[77138]: pgmap v2508: 305 pgs: 305 active+clean; 906 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 29 08:22:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:02.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/236404037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:03.326 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:03.378 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:03 compute-2 ceph-mon[77138]: pgmap v2509: 305 pgs: 305 active+clean; 906 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 381 KiB/s wr, 170 op/s
Nov 29 08:22:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3270619874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1421348518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:03.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:03 compute-2 nova_compute[232428]: 2025-11-29 08:22:03.895 232432 DEBUG nova.network.neutron [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:03 compute-2 nova_compute[232428]: 2025-11-29 08:22:03.975 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:04 compute-2 nova_compute[232428]: 2025-11-29 08:22:04.297 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:22:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:04.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3785667739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:05 compute-2 nova_compute[232428]: 2025-11-29 08:22:05.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:05 compute-2 ceph-mon[77138]: pgmap v2510: 305 pgs: 305 active+clean; 907 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 383 KiB/s wr, 176 op/s
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.239 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.240 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:06 compute-2 podman[295528]: 2025-11-29 08:22:06.663071209 +0000 UTC m=+0.067969329 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:22:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/670604029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.739 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.911 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.912 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.915 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:22:06 compute-2 nova_compute[232428]: 2025-11-29 08:22:06.915 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:22:07 compute-2 nova_compute[232428]: 2025-11-29 08:22:07.083 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:07 compute-2 nova_compute[232428]: 2025-11-29 08:22:07.084 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3945MB free_disk=20.673568725585938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:22:07 compute-2 nova_compute[232428]: 2025-11-29 08:22:07.085 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:07 compute-2 nova_compute[232428]: 2025-11-29 08:22:07.085 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/381954643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/454350748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/670604029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.120 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.120 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.121 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.121 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.198 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:08 compute-2 ceph-mon[77138]: pgmap v2511: 305 pgs: 305 active+clean; 913 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 390 KiB/s wr, 201 op/s
Nov 29 08:22:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2978950074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:08.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3898826114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:08 compute-2 ovn_controller[134375]: 2025-11-29T08:22:08Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:95:a1 10.100.0.11
Nov 29 08:22:08 compute-2 ovn_controller[134375]: 2025-11-29T08:22:08Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:95:a1 10.100.0.11
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.852 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.859 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.876 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.900 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:22:08 compute-2 nova_compute[232428]: 2025-11-29 08:22:08.901 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:09 compute-2 nova_compute[232428]: 2025-11-29 08:22:09.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:09 compute-2 nova_compute[232428]: 2025-11-29 08:22:09.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:22:09 compute-2 nova_compute[232428]: 2025-11-29 08:22:09.217 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:22:09 compute-2 ceph-mon[77138]: pgmap v2512: 305 pgs: 305 active+clean; 918 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 780 KiB/s wr, 185 op/s
Nov 29 08:22:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3898826114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:09.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:10 compute-2 nova_compute[232428]: 2025-11-29 08:22:10.262 232432 DEBUG nova.virt.libvirt.driver [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:22:10 compute-2 nova_compute[232428]: 2025-11-29 08:22:10.283 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:10.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:11 compute-2 ceph-mon[77138]: pgmap v2513: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 279 op/s
Nov 29 08:22:11 compute-2 nova_compute[232428]: 2025-11-29 08:22:11.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:11.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:13 compute-2 kernel: tap11a5e08c-f0 (unregistering): left promiscuous mode
Nov 29 08:22:13 compute-2 NetworkManager[48993]: <info>  [1764404533.1936] device (tap11a5e08c-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:22:13 compute-2 ovn_controller[134375]: 2025-11-29T08:22:13Z|00698|binding|INFO|Releasing lport 11a5e08c-f08a-467a-9562-ffb888b8adef from this chassis (sb_readonly=0)
Nov 29 08:22:13 compute-2 ovn_controller[134375]: 2025-11-29T08:22:13Z|00699|binding|INFO|Setting lport 11a5e08c-f08a-467a-9562-ffb888b8adef down in Southbound
Nov 29 08:22:13 compute-2 ovn_controller[134375]: 2025-11-29T08:22:13Z|00700|binding|INFO|Removing iface tap11a5e08c-f0 ovn-installed in OVS
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.212 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.221 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:95:a1 10.100.0.11'], port_security=['fa:16:3e:90:95:a1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=11a5e08c-f08a-467a-9562-ffb888b8adef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.227 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 11a5e08c-f08a-467a-9562-ffb888b8adef in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.229 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97e6ef02-6896-45a2-9eb9-28926c1a7400, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.231 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d1728319-0468-41fe-9fb3-5cf50d3587e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.232 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace which is not needed anymore
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.253 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 29 08:22:13 compute-2 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000095.scope: Consumed 14.862s CPU time.
Nov 29 08:22:13 compute-2 systemd-machined[194747]: Machine qemu-69-instance-00000095 terminated.
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [NOTICE]   (295300) : haproxy version is 2.8.14-c23fe91
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [NOTICE]   (295300) : path to executable is /usr/sbin/haproxy
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [WARNING]  (295300) : Exiting Master process...
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [WARNING]  (295300) : Exiting Master process...
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [ALERT]    (295300) : Current worker (295302) exited with code 143 (Terminated)
Nov 29 08:22:13 compute-2 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[295296]: [WARNING]  (295300) : All workers exited. Exiting... (0)
Nov 29 08:22:13 compute-2 systemd[1]: libpod-b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44.scope: Deactivated successfully.
Nov 29 08:22:13 compute-2 podman[295597]: 2025-11-29 08:22:13.397254301 +0000 UTC m=+0.052670970 container died b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.429 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44-userdata-shm.mount: Deactivated successfully.
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 systemd[1]: var-lib-containers-storage-overlay-db09fc8d770a120e55e6024d881e5f96a7568f7730bc20351160074ad8be3350-merged.mount: Deactivated successfully.
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.442 232432 INFO nova.virt.libvirt.driver [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance shutdown successfully after 13 seconds.
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.450 232432 INFO nova.virt.libvirt.driver [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance destroyed successfully.
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.451 232432 DEBUG nova.objects.instance [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'numa_topology' on Instance uuid dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:13 compute-2 podman[295597]: 2025-11-29 08:22:13.45378196 +0000 UTC m=+0.109198629 container cleanup b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:22:13 compute-2 systemd[1]: libpod-conmon-b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44.scope: Deactivated successfully.
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.479 232432 DEBUG nova.compute.manager [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:13 compute-2 podman[295638]: 2025-11-29 08:22:13.527780167 +0000 UTC m=+0.046031183 container remove b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.535 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cec3e2a-d2e6-4440-9e8b-c20d7c235244]: (4, ('Sat Nov 29 08:22:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44)\nb4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44\nSat Nov 29 08:22:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (b4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44)\nb4ee269e0ed34a19ad8d9ab2059a6069ff36620836c105f9e478006eec17fd44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.537 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8c12ad-8b84-48f9-8dc9-f8cc681761c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.539 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.541 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 kernel: tap97e6ef02-60: left promiscuous mode
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.571 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.575 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c454d4c0-b81b-4a40-9901-bf2db89cd74b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.591 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ff10d817-c88d-40ab-ace0-b0711be61ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.592 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[54d56d06-7f9d-46c7-b15b-46c3cb446c6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.622 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1d650c89-8aa3-4701-9762-fd25d6b4b681]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747513, 'reachable_time': 30007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295656, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 systemd[1]: run-netns-ovnmeta\x2d97e6ef02\x2d6896\x2d45a2\x2d9eb9\x2d28926c1a7400.mount: Deactivated successfully.
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.626 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:22:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:13.626 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd221fc-a0a1-442a-a506-86e18da16295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:13 compute-2 nova_compute[232428]: 2025-11-29 08:22:13.629 232432 DEBUG oslo_concurrency.lockutils [None req-6f5cb7b6-3f0b-4266-af1a-daae680efbfc 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:13 compute-2 ceph-mon[77138]: pgmap v2514: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Nov 29 08:22:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.000 232432 DEBUG nova.compute.manager [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-vif-unplugged-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.001 232432 DEBUG oslo_concurrency.lockutils [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.001 232432 DEBUG oslo_concurrency.lockutils [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.001 232432 DEBUG oslo_concurrency.lockutils [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.001 232432 DEBUG nova.compute.manager [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] No waiting events found dispatching network-vif-unplugged-11a5e08c-f08a-467a-9562-ffb888b8adef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.002 232432 WARNING nova.compute.manager [req-b8e45720-a889-4a4e-ae6c-7e20b834322b req-a3807ad3-3d56-45b5-b17c-f890a63bea8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received unexpected event network-vif-unplugged-11a5e08c-f08a-467a-9562-ffb888b8adef for instance with vm_state stopped and task_state None.
Nov 29 08:22:14 compute-2 nova_compute[232428]: 2025-11-29 08:22:14.342 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:22:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2480426081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.675823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534675957, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2521, "num_deletes": 257, "total_data_size": 5577045, "memory_usage": 5661376, "flush_reason": "Manual Compaction"}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534702073, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3640641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51167, "largest_seqno": 53683, "table_properties": {"data_size": 3630466, "index_size": 6413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22445, "raw_average_key_size": 21, "raw_value_size": 3609524, "raw_average_value_size": 3389, "num_data_blocks": 277, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404349, "oldest_key_time": 1764404349, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 26323 microseconds, and 11149 cpu microseconds.
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.702151) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3640641 bytes OK
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.702189) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.705564) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.705581) EVENT_LOG_v1 {"time_micros": 1764404534705576, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.705601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5565849, prev total WAL file size 5565849, number of live WAL files 2.
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.707163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3555KB)], [99(9578KB)]
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534707265, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13448857, "oldest_snapshot_seqno": -1}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8354 keys, 11592498 bytes, temperature: kUnknown
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534792640, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11592498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11537393, "index_size": 33105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 216897, "raw_average_key_size": 25, "raw_value_size": 11389174, "raw_average_value_size": 1363, "num_data_blocks": 1294, "num_entries": 8354, "num_filter_entries": 8354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.793844) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11592498 bytes
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.795626) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.5 rd, 134.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8886, records dropped: 532 output_compression: NoCompression
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.795683) EVENT_LOG_v1 {"time_micros": 1764404534795658, "job": 62, "event": "compaction_finished", "compaction_time_micros": 85949, "compaction_time_cpu_micros": 29146, "output_level": 6, "num_output_files": 1, "total_output_size": 11592498, "num_input_records": 8886, "num_output_records": 8354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534798171, "job": 62, "event": "table_file_deletion", "file_number": 101}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534801709, "job": 62, "event": "table_file_deletion", "file_number": 99}
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.706965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.801893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.801898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.801900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.801902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:14 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:22:14.801903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:22:15 compute-2 nova_compute[232428]: 2025-11-29 08:22:15.287 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:15.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:16 compute-2 ceph-mon[77138]: pgmap v2515: 305 pgs: 305 active+clean; 966 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 225 op/s
Nov 29 08:22:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:16 compute-2 sudo[295659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:16 compute-2 sudo[295659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:16 compute-2 sudo[295659]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.567 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-2 sudo[295684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:16 compute-2 sudo[295684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:16 compute-2 sudo[295684]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.724 232432 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.725 232432 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.725 232432 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.726 232432 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.726 232432 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] No waiting events found dispatching network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.726 232432 WARNING nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received unexpected event network-vif-plugged-11a5e08c-f08a-467a-9562-ffb888b8adef for instance with vm_state stopped and task_state None.
Nov 29 08:22:16 compute-2 kernel: tapd3f7613c-91 (unregistering): left promiscuous mode
Nov 29 08:22:16 compute-2 NetworkManager[48993]: <info>  [1764404536.7636] device (tapd3f7613c-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-2 ovn_controller[134375]: 2025-11-29T08:22:16Z|00701|binding|INFO|Releasing lport d3f7613c-9105-484e-ae74-77f76d32b85b from this chassis (sb_readonly=0)
Nov 29 08:22:16 compute-2 ovn_controller[134375]: 2025-11-29T08:22:16Z|00702|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b down in Southbound
Nov 29 08:22:16 compute-2 ovn_controller[134375]: 2025-11-29T08:22:16Z|00703|binding|INFO|Removing iface tapd3f7613c-91 ovn-installed in OVS
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.782 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:16.789 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:56:33 10.100.0.3'], port_security=['fa:16:3e:cb:56:33 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6211bb9c-7da9-4ff8-8b1b-b47c237cf720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a9406b-a63a-4191-b15b-28d172b27b82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4574185f65454582b56aa1dfb65251ba', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3509650f-8fc6-4e5a-a78c-3e75e9be7304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70f4cb21-f00c-43e5-959d-eb5b0d04bd3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d3f7613c-9105-484e-ae74-77f76d32b85b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:16.790 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d3f7613c-9105-484e-ae74-77f76d32b85b in datapath a5a9406b-a63a-4191-b15b-28d172b27b82 unbound from our chassis
Nov 29 08:22:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:16.791 143801 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5a9406b-a63a-4191-b15b-28d172b27b82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 08:22:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:16.793 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8281ff-4c55-4ed1-aac8-58e7dabaaee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-2 nova_compute[232428]: 2025-11-29 08:22:16.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-2 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 29 08:22:16 compute-2 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000096.scope: Consumed 14.738s CPU time.
Nov 29 08:22:16 compute-2 systemd-machined[194747]: Machine qemu-70-instance-00000096 terminated.
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.233 232432 DEBUG nova.compute.manager [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.234 232432 DEBUG oslo_concurrency.lockutils [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.234 232432 DEBUG oslo_concurrency.lockutils [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.234 232432 DEBUG oslo_concurrency.lockutils [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.234 232432 DEBUG nova.compute.manager [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.234 232432 WARNING nova.compute.manager [req-dbf0d9ad-3c92-4972-939f-83b2fce06992 req-10b6b48b-71ca-4dfc-9429-4f9d2546f4d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state active and task_state rescuing.
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.359 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance shutdown successfully after 13 seconds.
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.364 232432 INFO nova.virt.libvirt.driver [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance destroyed successfully.
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.364 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'numa_topology' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.384 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Attempting rescue
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.385 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.388 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.389 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Creating image(s)
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.412 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.415 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.499 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.524 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.528 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.603 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.604 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.605 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.605 232432 DEBUG oslo_concurrency.lockutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.707 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:17 compute-2 nova_compute[232428]: 2025-11-29 08:22:17.711 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:18 compute-2 ceph-mon[77138]: pgmap v2516: 305 pgs: 305 active+clean; 979 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 247 op/s
Nov 29 08:22:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:18.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.540 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.830s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.542 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'migration_context' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.557 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.558 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start _get_guest_xml network_info=[{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "vif_mac": "fa:16:3e:cb:56:33"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.558 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'resources' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.579 232432 WARNING nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.586 232432 DEBUG nova.virt.libvirt.host [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.587 232432 DEBUG nova.virt.libvirt.host [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.591 232432 DEBUG nova.virt.libvirt.host [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.591 232432 DEBUG nova.virt.libvirt.host [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.593 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.593 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.593 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.594 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.594 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.594 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.595 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.595 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.595 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.596 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.596 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.596 232432 DEBUG nova.virt.hardware [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.597 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:18 compute-2 nova_compute[232428]: 2025-11-29 08:22:18.625 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 52K writes, 207K keys, 52K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.05 MB/s
                                           Cumulative WAL: 52K writes, 19K syncs, 2.74 writes per sync, written: 0.21 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 55K keys, 13K commit groups, 1.0 writes per commit group, ingest: 60.38 MB, 0.10 MB/s
                                           Interval WAL: 13K writes, 5336 syncs, 2.59 writes per sync, written: 0.06 GB, 0.10 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:22:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3255902581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.049 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.052 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.383 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.384 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.384 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.384 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.385 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.386 232432 INFO nova.compute.manager [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Terminating instance
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.387 232432 DEBUG nova.compute.manager [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.393 232432 INFO nova.virt.libvirt.driver [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Instance destroyed successfully.
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.394 232432 DEBUG nova.objects.instance [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.433 232432 DEBUG nova.compute.manager [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.433 232432 DEBUG oslo_concurrency.lockutils [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.433 232432 DEBUG oslo_concurrency.lockutils [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.434 232432 DEBUG oslo_concurrency.lockutils [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.434 232432 DEBUG nova.compute.manager [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.434 232432 WARNING nova.compute.manager [req-a36f6b36-bc36-4418-8d3e-95d59726b28d req-15ab3345-ab1e-4772-a47e-b21201e988db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state active and task_state rescuing.
Nov 29 08:22:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2492491067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.454 232432 DEBUG nova.virt.libvirt.vif [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-555245383',display_name='tempest-Íñstáñcé-1030975085',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-555245383',id=149,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-6esrfovb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:16Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.454 232432 DEBUG nova.network.os_vif_util [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "11a5e08c-f08a-467a-9562-ffb888b8adef", "address": "fa:16:3e:90:95:a1", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11a5e08c-f0", "ovs_interfaceid": "11a5e08c-f08a-467a-9562-ffb888b8adef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.455 232432 DEBUG nova.network.os_vif_util [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.456 232432 DEBUG os_vif [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.457 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.458 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11a5e08c-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.460 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.461 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.462 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.497 232432 INFO os_vif [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:95:a1,bridge_name='br-int',has_traffic_filtering=True,id=11a5e08c-f08a-467a-9562-ffb888b8adef,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11a5e08c-f0')
Nov 29 08:22:19 compute-2 podman[295890]: 2025-11-29 08:22:19.684597625 +0000 UTC m=+0.082701340 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:22:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:19.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.850 232432 INFO nova.virt.libvirt.driver [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Deleting instance files /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_del
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.851 232432 INFO nova.virt.libvirt.driver [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Deletion of /var/lib/nova/instances/dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9_del complete
Nov 29 08:22:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3332573873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.910 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.911 232432 DEBUG nova.virt.libvirt.vif [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1815513079',display_name='tempest-ServerRescueTestJSONUnderV235-server-1815513079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1815513079',id=150,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4574185f65454582b56aa1dfb65251ba',ramdisk_id='',reservation_id='r-yk8v0jhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1838033223',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1838033223-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:57Z,user_data=None,user_id='27fbef868fd944adb0787ac691f465f5',uuid=6211bb9c-7da9-4ff8-8b1b-b47c237cf720,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "vif_mac": "fa:16:3e:cb:56:33"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.912 232432 DEBUG nova.network.os_vif_util [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converting VIF {"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "vif_mac": "fa:16:3e:cb:56:33"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.913 232432 DEBUG nova.network.os_vif_util [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.914 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.945 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <uuid>6211bb9c-7da9-4ff8-8b1b-b47c237cf720</uuid>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <name>instance-00000096</name>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1815513079</nova:name>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:22:18</nova:creationTime>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:user uuid="27fbef868fd944adb0787ac691f465f5">tempest-ServerRescueTestJSONUnderV235-1838033223-project-member</nova:user>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:project uuid="4574185f65454582b56aa1dfb65251ba">tempest-ServerRescueTestJSONUnderV235-1838033223</nova:project>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <nova:port uuid="d3f7613c-9105-484e-ae74-77f76d32b85b">
Nov 29 08:22:19 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <system>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="serial">6211bb9c-7da9-4ff8-8b1b-b47c237cf720</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="uuid">6211bb9c-7da9-4ff8-8b1b-b47c237cf720</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </system>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <os>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </os>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <features>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </features>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.rescue">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </source>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </source>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config.rescue">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </source>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:22:19 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:cb:56:33"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <target dev="tapd3f7613c-91"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/console.log" append="off"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <video>
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </video>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:22:19 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:22:19 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:22:19 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:22:19 compute-2 nova_compute[232428]: </domain>
Nov 29 08:22:19 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.948 232432 INFO nova.compute.manager [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Took 0.56 seconds to destroy the instance on the hypervisor.
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.949 232432 DEBUG oslo.service.loopingcall [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.949 232432 DEBUG nova.compute.manager [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.950 232432 DEBUG nova.network.neutron [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:22:19 compute-2 nova_compute[232428]: 2025-11-29 08:22:19.964 232432 INFO nova.virt.libvirt.driver [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance destroyed successfully.
Nov 29 08:22:20 compute-2 ceph-mon[77138]: pgmap v2517: 305 pgs: 305 active+clean; 979 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Nov 29 08:22:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3255902581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2492491067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3332573873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.041 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.041 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.042 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.042 232432 DEBUG nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] No VIF found with MAC fa:16:3e:cb:56:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.042 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Using config drive
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.066 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.092 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.117 232432 DEBUG nova.objects.instance [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'keypairs' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.692 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Creating config drive at /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.700 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpem2likpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.843 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpem2likpm" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.876 232432 DEBUG nova.storage.rbd_utils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] rbd image 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:20 compute-2 nova_compute[232428]: 2025-11-29 08:22:20.881 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.015 232432 DEBUG nova.network.neutron [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.049 232432 INFO nova.compute.manager [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Took 1.10 seconds to deallocate network for instance.
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.055 232432 DEBUG oslo_concurrency.processutils [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue 6211bb9c-7da9-4ff8-8b1b-b47c237cf720_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.056 232432 INFO nova.virt.libvirt.driver [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Deleting local config drive /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720/disk.config.rescue because it was imported into RBD.
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.106 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.108 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:21 compute-2 kernel: tapd3f7613c-91: entered promiscuous mode
Nov 29 08:22:21 compute-2 NetworkManager[48993]: <info>  [1764404541.1259] manager: (tapd3f7613c-91): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 08:22:21 compute-2 ovn_controller[134375]: 2025-11-29T08:22:21Z|00704|binding|INFO|Claiming lport d3f7613c-9105-484e-ae74-77f76d32b85b for this chassis.
Nov 29 08:22:21 compute-2 ovn_controller[134375]: 2025-11-29T08:22:21Z|00705|binding|INFO|d3f7613c-9105-484e-ae74-77f76d32b85b: Claiming fa:16:3e:cb:56:33 10.100.0.3
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:21.136 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:56:33 10.100.0.3'], port_security=['fa:16:3e:cb:56:33 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6211bb9c-7da9-4ff8-8b1b-b47c237cf720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a9406b-a63a-4191-b15b-28d172b27b82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4574185f65454582b56aa1dfb65251ba', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3509650f-8fc6-4e5a-a78c-3e75e9be7304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70f4cb21-f00c-43e5-959d-eb5b0d04bd3d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d3f7613c-9105-484e-ae74-77f76d32b85b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:21.137 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d3f7613c-9105-484e-ae74-77f76d32b85b in datapath a5a9406b-a63a-4191-b15b-28d172b27b82 bound to our chassis
Nov 29 08:22:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:21.139 143801 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5a9406b-a63a-4191-b15b-28d172b27b82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 08:22:21 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:21.140 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[433637f6-9d81-4be7-b23f-596660d7fb87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:21 compute-2 ovn_controller[134375]: 2025-11-29T08:22:21Z|00706|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b up in Southbound
Nov 29 08:22:21 compute-2 ovn_controller[134375]: 2025-11-29T08:22:21Z|00707|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b ovn-installed in OVS
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.150 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.157 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:21 compute-2 systemd-udevd[296008]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:22:21 compute-2 systemd-machined[194747]: New machine qemu-71-instance-00000096.
Nov 29 08:22:21 compute-2 NetworkManager[48993]: <info>  [1764404541.1777] device (tapd3f7613c-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:22:21 compute-2 NetworkManager[48993]: <info>  [1764404541.1792] device (tapd3f7613c-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:22:21 compute-2 systemd[1]: Started Virtual Machine qemu-71-instance-00000096.
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.294 232432 DEBUG oslo_concurrency.processutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.666 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.667 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404541.6658554, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.667 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Resumed (Lifecycle Event)
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.675 232432 DEBUG nova.compute.manager [None req-314d5f01-9181-464d-96d3-e7c98a50bbe0 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.701 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.705 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1778555291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.733 232432 DEBUG oslo_concurrency.processutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.737 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.737 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404541.6723633, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.738 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Started (Lifecycle Event)
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.741 232432 DEBUG nova.compute.provider_tree [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.770 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.774 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.779 232432 DEBUG nova.scheduler.client.report [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.814 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.843 232432 INFO nova.scheduler.client.report [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9
Nov 29 08:22:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.903 232432 DEBUG nova.compute.manager [req-9255e23b-e8d7-454a-b389-2472ee40215e req-8f67cf30-001e-4cd3-810f-3f6bb38bfbe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Received event network-vif-deleted-11a5e08c-f08a-467a-9562-ffb888b8adef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:21 compute-2 nova_compute[232428]: 2025-11-29 08:22:21.914 232432 DEBUG oslo_concurrency.lockutils [None req-4bca2edf-855b-495b-938d-4daab43cb781 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:22 compute-2 ceph-mon[77138]: pgmap v2518: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.3 MiB/s wr, 360 op/s
Nov 29 08:22:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1778555291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2893105580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:24 compute-2 ceph-mon[77138]: pgmap v2519: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.173 232432 DEBUG nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.174 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.174 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.174 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.175 232432 DEBUG nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.175 232432 WARNING nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state rescued and task_state None.
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.175 232432 DEBUG nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.176 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.176 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.176 232432 DEBUG oslo_concurrency.lockutils [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.177 232432 DEBUG nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.177 232432 WARNING nova.compute.manager [req-77dd5ca9-e7a4-47f5-a257-c5a8e1cdf577 req-056b07a4-a0d2-40ab-846b-2ca41652c010 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state rescued and task_state None.
Nov 29 08:22:24 compute-2 nova_compute[232428]: 2025-11-29 08:22:24.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:25 compute-2 nova_compute[232428]: 2025-11-29 08:22:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:25 compute-2 nova_compute[232428]: 2025-11-29 08:22:25.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:22:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:22:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:25.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:22:26 compute-2 ceph-mon[77138]: pgmap v2520: 305 pgs: 305 active+clean; 887 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 255 op/s
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.343 232432 DEBUG nova.compute.manager [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.343 232432 DEBUG nova.compute.manager [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing instance network info cache due to event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.343 232432 DEBUG oslo_concurrency.lockutils [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.343 232432 DEBUG oslo_concurrency.lockutils [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.344 232432 DEBUG nova.network.neutron [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:26.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:26 compute-2 nova_compute[232428]: 2025-11-29 08:22:26.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:28 compute-2 ceph-mon[77138]: pgmap v2521: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 271 op/s
Nov 29 08:22:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3688258654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2338995606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:22:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2338995606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:22:28 compute-2 nova_compute[232428]: 2025-11-29 08:22:28.442 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404533.4410574, dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:28 compute-2 nova_compute[232428]: 2025-11-29 08:22:28.443 232432 INFO nova.compute.manager [-] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] VM Stopped (Lifecycle Event)
Nov 29 08:22:28 compute-2 nova_compute[232428]: 2025-11-29 08:22:28.460 232432 DEBUG nova.compute.manager [None req-acc678d7-07dc-438a-9c6c-e61c1a63859f - - - - - -] [instance: dfd4c790-f1bd-4488-b3ae-94b8d39cfaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:28.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:29 compute-2 nova_compute[232428]: 2025-11-29 08:22:29.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:29 compute-2 nova_compute[232428]: 2025-11-29 08:22:29.740 232432 DEBUG nova.network.neutron [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updated VIF entry in instance network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:29 compute-2 nova_compute[232428]: 2025-11-29 08:22:29.741 232432 DEBUG nova.network.neutron [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:29 compute-2 nova_compute[232428]: 2025-11-29 08:22:29.768 232432 DEBUG oslo_concurrency.lockutils [req-9482a173-8926-45b8-80b3-f0974f77afea req-a247bd0d-b5b8-4be8-95a6-1de4628cb733 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:29.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:30 compute-2 ceph-mon[77138]: pgmap v2522: 305 pgs: 305 active+clean; 837 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 29 08:22:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:30.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:31 compute-2 nova_compute[232428]: 2025-11-29 08:22:31.576 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:22:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:22:32 compute-2 nova_compute[232428]: 2025-11-29 08:22:32.160 232432 DEBUG nova.compute.manager [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:32 compute-2 nova_compute[232428]: 2025-11-29 08:22:32.161 232432 DEBUG nova.compute.manager [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing instance network info cache due to event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:32 compute-2 nova_compute[232428]: 2025-11-29 08:22:32.161 232432 DEBUG oslo_concurrency.lockutils [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:32 compute-2 nova_compute[232428]: 2025-11-29 08:22:32.162 232432 DEBUG oslo_concurrency.lockutils [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:32 compute-2 nova_compute[232428]: 2025-11-29 08:22:32.162 232432 DEBUG nova.network.neutron [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:32 compute-2 ceph-mon[77138]: pgmap v2523: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 296 op/s
Nov 29 08:22:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:32.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:32 compute-2 podman[296106]: 2025-11-29 08:22:32.693774946 +0000 UTC m=+0.090183034 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:22:33 compute-2 ceph-mon[77138]: pgmap v2524: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 228 KiB/s wr, 155 op/s
Nov 29 08:22:33 compute-2 nova_compute[232428]: 2025-11-29 08:22:33.614 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:33 compute-2 NetworkManager[48993]: <info>  [1764404553.7710] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 08:22:33 compute-2 nova_compute[232428]: 2025-11-29 08:22:33.770 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:33 compute-2 NetworkManager[48993]: <info>  [1764404553.7726] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 08:22:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:33.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:33 compute-2 nova_compute[232428]: 2025-11-29 08:22:33.920 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:33 compute-2 nova_compute[232428]: 2025-11-29 08:22:33.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2114051630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:34 compute-2 nova_compute[232428]: 2025-11-29 08:22:34.466 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:34.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:34 compute-2 nova_compute[232428]: 2025-11-29 08:22:34.982 232432 DEBUG nova.network.neutron [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updated VIF entry in instance network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:34 compute-2 nova_compute[232428]: 2025-11-29 08:22:34.983 232432 DEBUG nova.network.neutron [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.010 232432 DEBUG oslo_concurrency.lockutils [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.295 232432 DEBUG nova.compute.manager [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.296 232432 DEBUG nova.compute.manager [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing instance network info cache due to event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.296 232432 DEBUG oslo_concurrency.lockutils [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.296 232432 DEBUG oslo_concurrency.lockutils [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:35 compute-2 nova_compute[232428]: 2025-11-29 08:22:35.296 232432 DEBUG nova.network.neutron [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:35 compute-2 ceph-mon[77138]: pgmap v2525: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 229 KiB/s wr, 158 op/s
Nov 29 08:22:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:36 compute-2 nova_compute[232428]: 2025-11-29 08:22:36.577 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:36 compute-2 sudo[296128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:36 compute-2 sudo[296128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:36 compute-2 sudo[296128]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:36 compute-2 sudo[296154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:36 compute-2 sudo[296154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:36 compute-2 sudo[296154]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:36 compute-2 podman[296152]: 2025-11-29 08:22:36.839041266 +0000 UTC m=+0.111026366 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 08:22:37 compute-2 ceph-mon[77138]: pgmap v2526: 305 pgs: 305 active+clean; 811 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 182 op/s
Nov 29 08:22:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:22:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:22:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:38 compute-2 nova_compute[232428]: 2025-11-29 08:22:38.736 232432 DEBUG nova.network.neutron [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updated VIF entry in instance network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:38 compute-2 nova_compute[232428]: 2025-11-29 08:22:38.737 232432 DEBUG nova.network.neutron [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:38 compute-2 nova_compute[232428]: 2025-11-29 08:22:38.777 232432 DEBUG oslo_concurrency.lockutils [req-aa3fb080-3600-4797-b00a-f1e2c0c7d4e5 req-161862fa-776f-4af0-aea8-c78160857d1c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:39 compute-2 nova_compute[232428]: 2025-11-29 08:22:39.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:39.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:40 compute-2 ceph-mon[77138]: pgmap v2527: 305 pgs: 305 active+clean; 814 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 1.3 MiB/s wr, 125 op/s
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.523 232432 DEBUG nova.compute.manager [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.524 232432 DEBUG nova.compute.manager [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing instance network info cache due to event network-changed-d3f7613c-9105-484e-ae74-77f76d32b85b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.524 232432 DEBUG oslo_concurrency.lockutils [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.525 232432 DEBUG oslo_concurrency.lockutils [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.525 232432 DEBUG nova.network.neutron [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Refreshing network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:41 compute-2 ceph-mon[77138]: pgmap v2528: 305 pgs: 305 active+clean; 835 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Nov 29 08:22:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1325680114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3125967921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:41 compute-2 nova_compute[232428]: 2025-11-29 08:22:41.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:22:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.299 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.299 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.313 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.412 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.413 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.422 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.422 232432 INFO nova.compute.claims [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:22:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:42 compute-2 nova_compute[232428]: 2025-11-29 08:22:42.615 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1523862967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.082 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.094 232432 DEBUG nova.compute.provider_tree [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.161 232432 DEBUG nova.scheduler.client.report [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.214 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.215 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.273 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.291 232432 INFO nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.315 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:22:43 compute-2 ceph-mon[77138]: pgmap v2529: 305 pgs: 305 active+clean; 835 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 29 08:22:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1523862967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.697 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.698 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.699 232432 INFO nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating image(s)
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.736 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.778 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.815 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.820 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:22:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.929 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.930 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.931 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.931 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.972 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:43 compute-2 nova_compute[232428]: 2025-11-29 08:22:43.979 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.325 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.434 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] resizing rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.488 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:44.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.628 232432 DEBUG nova.objects.instance [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'migration_context' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.668 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.669 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Ensure instance console log exists: /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.670 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.671 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.671 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.674 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.682 232432 WARNING nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.688 232432 DEBUG nova.virt.libvirt.host [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.689 232432 DEBUG nova.virt.libvirt.host [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.695 232432 DEBUG nova.virt.libvirt.host [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.696 232432 DEBUG nova.virt.libvirt.host [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.698 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.699 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.700 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.700 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.701 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.702 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.702 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.703 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.704 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.704 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.705 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.706 232432 DEBUG nova.virt.hardware [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.713 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.808 232432 DEBUG nova.network.neutron [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updated VIF entry in instance network info cache for port d3f7613c-9105-484e-ae74-77f76d32b85b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.810 232432 DEBUG nova.network.neutron [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [{"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:44 compute-2 nova_compute[232428]: 2025-11-29 08:22:44.842 232432 DEBUG oslo_concurrency.lockutils [req-8419a14c-d7c7-4004-98a5-3898bde6032f req-b68018e4-fa1d-4fc1-b50e-77e90f4a1650 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6211bb9c-7da9-4ff8-8b1b-b47c237cf720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/794816950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.188 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.233 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.238 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:45 compute-2 ceph-mon[77138]: pgmap v2530: 305 pgs: 305 active+clean; 837 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 29 08:22:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/794816950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/996442045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.722 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.724 232432 DEBUG nova.objects.instance [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'pci_devices' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.740 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <uuid>da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</uuid>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <name>instance-00000098</name>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerShowV257Test-server-281509839</nova:name>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:22:44</nova:creationTime>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:user uuid="0508bf8e51cf4e00992499288c702602">tempest-ServerShowV257Test-382407045-project-member</nova:user>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <nova:project uuid="42dc6c0cde654493984d9a0a65843d9a">tempest-ServerShowV257Test-382407045</nova:project>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <system>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="serial">da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="uuid">da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </system>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <os>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </os>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <features>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </features>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk">
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config">
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:22:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/console.log" append="off"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <video>
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </video>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:22:45 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:22:45 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:22:45 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:22:45 compute-2 nova_compute[232428]: </domain>
Nov 29 08:22:45 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.791 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.791 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.792 232432 INFO nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Using config drive
Nov 29 08:22:45 compute-2 nova_compute[232428]: 2025-11-29 08:22:45.815 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:45.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.582 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.684 232432 INFO nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating config drive at /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.689 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptnzhehrx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.847 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptnzhehrx" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.877 232432 DEBUG nova.storage.rbd_utils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:46 compute-2 nova_compute[232428]: 2025-11-29 08:22:46.881 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/996442045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.232 232432 DEBUG oslo_concurrency.processutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.234 232432 INFO nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deleting local config drive /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config because it was imported into RBD.
Nov 29 08:22:47 compute-2 systemd-machined[194747]: New machine qemu-72-instance-00000098.
Nov 29 08:22:47 compute-2 systemd[1]: Started Virtual Machine qemu-72-instance-00000098.
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.698 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.699 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.699 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.699 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.699 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.701 232432 INFO nova.compute.manager [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Terminating instance
Nov 29 08:22:47 compute-2 nova_compute[232428]: 2025-11-29 08:22:47.702 232432 DEBUG nova.compute.manager [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:22:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.076 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404569.0756648, da9e5ad2-575c-4af3-a5b1-4f7ed1513aca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.076 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] VM Resumed (Lifecycle Event)
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.078 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.078 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.082 232432 INFO nova.virt.libvirt.driver [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance spawned successfully.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.082 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.172 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.178 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.182 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.182 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.183 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.183 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.184 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.184 232432 DEBUG nova.virt.libvirt.driver [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.216 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.217 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404569.0766106, da9e5ad2-575c-4af3-a5b1-4f7ed1513aca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.217 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] VM Started (Lifecycle Event)
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.256 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.260 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.274 232432 INFO nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Took 5.58 seconds to spawn the instance on the hypervisor.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.274 232432 DEBUG nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.285 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.351 232432 INFO nova.compute.manager [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Took 6.99 seconds to build instance.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.370 232432 DEBUG oslo_concurrency.lockutils [None req-f6132f54-a41c-4fb7-b7e7-00e98d0c4d45 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:49 compute-2 ceph-mon[77138]: pgmap v2531: 305 pgs: 305 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Nov 29 08:22:49 compute-2 kernel: tapd3f7613c-91 (unregistering): left promiscuous mode
Nov 29 08:22:49 compute-2 NetworkManager[48993]: <info>  [1764404569.7599] device (tapd3f7613c-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:22:49 compute-2 ovn_controller[134375]: 2025-11-29T08:22:49Z|00708|binding|INFO|Releasing lport d3f7613c-9105-484e-ae74-77f76d32b85b from this chassis (sb_readonly=0)
Nov 29 08:22:49 compute-2 ovn_controller[134375]: 2025-11-29T08:22:49Z|00709|binding|INFO|Setting lport d3f7613c-9105-484e-ae74-77f76d32b85b down in Southbound
Nov 29 08:22:49 compute-2 ovn_controller[134375]: 2025-11-29T08:22:49Z|00710|binding|INFO|Removing iface tapd3f7613c-91 ovn-installed in OVS
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.775 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:49.783 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:56:33 10.100.0.3'], port_security=['fa:16:3e:cb:56:33 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6211bb9c-7da9-4ff8-8b1b-b47c237cf720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a9406b-a63a-4191-b15b-28d172b27b82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4574185f65454582b56aa1dfb65251ba', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3509650f-8fc6-4e5a-a78c-3e75e9be7304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70f4cb21-f00c-43e5-959d-eb5b0d04bd3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=d3f7613c-9105-484e-ae74-77f76d32b85b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:49.786 143801 INFO neutron.agent.ovn.metadata.agent [-] Port d3f7613c-9105-484e-ae74-77f76d32b85b in datapath a5a9406b-a63a-4191-b15b-28d172b27b82 unbound from our chassis
Nov 29 08:22:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:49.788 143801 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5a9406b-a63a-4191-b15b-28d172b27b82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 08:22:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:49.790 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3a021e-0a40-4566-aad8-03659890a1d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:49 compute-2 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 29 08:22:49 compute-2 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000096.scope: Consumed 15.606s CPU time.
Nov 29 08:22:49 compute-2 systemd-machined[194747]: Machine qemu-71-instance-00000096 terminated.
Nov 29 08:22:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:49 compute-2 podman[296573]: 2025-11-29 08:22:49.935402687 +0000 UTC m=+0.134925075 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.949 232432 INFO nova.virt.libvirt.driver [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Instance destroyed successfully.
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.950 232432 DEBUG nova.objects.instance [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lazy-loading 'resources' on Instance uuid 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.977 232432 DEBUG nova.virt.libvirt.vif [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1815513079',display_name='tempest-ServerRescueTestJSONUnderV235-server-1815513079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1815513079',id=150,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4574185f65454582b56aa1dfb65251ba',ramdisk_id='',reservation_id='r-yk8v0jhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1838033223',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1838033223-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:21Z,user_data=None,user_id='27fbef868fd944adb0787ac691f465f5',uuid=6211bb9c-7da9-4ff8-8b1b-b47c237cf720,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.978 232432 DEBUG nova.network.os_vif_util [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converting VIF {"id": "d3f7613c-9105-484e-ae74-77f76d32b85b", "address": "fa:16:3e:cb:56:33", "network": {"id": "a5a9406b-a63a-4191-b15b-28d172b27b82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181611711-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4574185f65454582b56aa1dfb65251ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3f7613c-91", "ovs_interfaceid": "d3f7613c-9105-484e-ae74-77f76d32b85b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.979 232432 DEBUG nova.network.os_vif_util [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.980 232432 DEBUG os_vif [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.984 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.984 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3f7613c-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.986 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.989 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:22:49 compute-2 nova_compute[232428]: 2025-11-29 08:22:49.993 232432 INFO os_vif [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:56:33,bridge_name='br-int',has_traffic_filtering=True,id=d3f7613c-9105-484e-ae74-77f76d32b85b,network=Network(a5a9406b-a63a-4191-b15b-28d172b27b82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3f7613c-91')
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.032 232432 DEBUG nova.compute.manager [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.032 232432 DEBUG oslo_concurrency.lockutils [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.033 232432 DEBUG oslo_concurrency.lockutils [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.033 232432 DEBUG oslo_concurrency.lockutils [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.033 232432 DEBUG nova.compute.manager [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:50 compute-2 nova_compute[232428]: 2025-11-29 08:22:50.033 232432 DEBUG nova.compute.manager [req-22ae7a00-ffa2-4028-9357-55b1ac5306bb req-b5fd86d7-03ae-46f7-8790-f98c426406fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-unplugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:22:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:51 compute-2 ceph-mon[77138]: pgmap v2532: 305 pgs: 305 active+clean; 884 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 29 08:22:51 compute-2 ceph-mon[77138]: osdmap e333: 3 total, 3 up, 3 in
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.274 232432 INFO nova.virt.libvirt.driver [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Deleting instance files /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_del
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.275 232432 INFO nova.virt.libvirt.driver [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Deletion of /var/lib/nova/instances/6211bb9c-7da9-4ff8-8b1b-b47c237cf720_del complete
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.324 232432 INFO nova.compute.manager [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Took 3.62 seconds to destroy the instance on the hypervisor.
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.325 232432 DEBUG oslo.service.loopingcall [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.325 232432 DEBUG nova.compute.manager [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.325 232432 DEBUG nova.network.neutron [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:22:51 compute-2 nova_compute[232428]: 2025-11-29 08:22:51.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:52 compute-2 ceph-mon[77138]: pgmap v2534: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.026 232432 INFO nova.compute.manager [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Rebuilding instance
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.144 232432 DEBUG nova.compute.manager [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.145 232432 DEBUG oslo_concurrency.lockutils [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.145 232432 DEBUG oslo_concurrency.lockutils [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.146 232432 DEBUG oslo_concurrency.lockutils [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.146 232432 DEBUG nova.compute.manager [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] No waiting events found dispatching network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.146 232432 WARNING nova.compute.manager [req-ddb6c158-1865-44b7-8867-928abe7f6247 req-623ae68a-c17e-4c11-980a-c0c0c7e57f79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received unexpected event network-vif-plugged-d3f7613c-9105-484e-ae74-77f76d32b85b for instance with vm_state rescued and task_state deleting.
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.244 232432 DEBUG nova.network.neutron [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.274 232432 INFO nova.compute.manager [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Took 0.95 seconds to deallocate network for instance.
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.331 232432 DEBUG nova.compute.manager [req-85b05177-5ac5-4153-8058-dcfcacec5dd8 req-22792b03-2d0f-494b-a5f3-db3abec80709 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Received event network-vif-deleted-d3f7613c-9105-484e-ae74-77f76d32b85b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.378 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.380 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.418 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.438 232432 DEBUG nova.compute.manager [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.484 232432 DEBUG oslo_concurrency.processutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.539 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'pci_requests' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.562 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'pci_devices' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.580 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'resources' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.596 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'migration_context' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.612 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 08:22:52 compute-2 nova_compute[232428]: 2025-11-29 08:22:52.619 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:22:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4152628012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.005 232432 DEBUG oslo_concurrency.processutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.013 232432 DEBUG nova.compute.provider_tree [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4152628012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.033 232432 DEBUG nova.scheduler.client.report [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.062 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.090 232432 INFO nova.scheduler.client.report [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Deleted allocations for instance 6211bb9c-7da9-4ff8-8b1b-b47c237cf720
Nov 29 08:22:53 compute-2 nova_compute[232428]: 2025-11-29 08:22:53.223 232432 DEBUG oslo_concurrency.lockutils [None req-c98a9e45-1320-4844-82a7-3df54632fa8e 27fbef868fd944adb0787ac691f465f5 4574185f65454582b56aa1dfb65251ba - - default default] Lock "6211bb9c-7da9-4ff8-8b1b-b47c237cf720" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:53 compute-2 sudo[296658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:53 compute-2 sudo[296658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:53 compute-2 sudo[296658]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:53 compute-2 sudo[296683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:22:54 compute-2 sudo[296683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:54 compute-2 sudo[296683]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:54 compute-2 ceph-mon[77138]: pgmap v2535: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Nov 29 08:22:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2494565161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:54 compute-2 sudo[296708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:54 compute-2 sudo[296708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:54 compute-2 sudo[296708]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:54 compute-2 sudo[296733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:22:54 compute-2 sudo[296733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:54 compute-2 sudo[296733]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:54 compute-2 nova_compute[232428]: 2025-11-29 08:22:54.987 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:22:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:22:55 compute-2 nova_compute[232428]: 2025-11-29 08:22:55.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:55 compute-2 nova_compute[232428]: 2025-11-29 08:22:55.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:22:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:22:56 compute-2 ceph-mon[77138]: pgmap v2536: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Nov 29 08:22:56 compute-2 nova_compute[232428]: 2025-11-29 08:22:56.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:56 compute-2 nova_compute[232428]: 2025-11-29 08:22:56.590 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:56 compute-2 sudo[296790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:56 compute-2 sudo[296790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:56 compute-2 sudo[296790]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:56 compute-2 nova_compute[232428]: 2025-11-29 08:22:56.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:56.935 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:22:56.936 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:22:56 compute-2 sudo[296815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:56 compute-2 sudo[296815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:56 compute-2 sudo[296815]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1170531116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:22:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1170531116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:22:57 compute-2 nova_compute[232428]: 2025-11-29 08:22:57.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:57 compute-2 nova_compute[232428]: 2025-11-29 08:22:57.293 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:57.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 08:22:58 compute-2 ceph-mon[77138]: pgmap v2537: 305 pgs: 305 active+clean; 596 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 437 KiB/s wr, 415 op/s
Nov 29 08:22:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 08:22:59 compute-2 ceph-mon[77138]: osdmap e334: 3 total, 3 up, 3 in
Nov 29 08:22:59 compute-2 ceph-mon[77138]: osdmap e335: 3 total, 3 up, 3 in
Nov 29 08:22:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:22:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:22:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:22:59 compute-2 nova_compute[232428]: 2025-11-29 08:22:59.990 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:00 compute-2 ceph-mon[77138]: pgmap v2539: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 549 KiB/s wr, 498 op/s
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.223 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.224 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.224 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:00 compute-2 nova_compute[232428]: 2025-11-29 08:23:00.663 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:23:01 compute-2 sudo[296844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:01 compute-2 sudo[296844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:01 compute-2 sudo[296844]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-2 nova_compute[232428]: 2025-11-29 08:23:01.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:01 compute-2 sudo[296869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:23:01 compute-2 sudo[296869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:01 compute-2 sudo[296869]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.181 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.215 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.215 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.216 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.216 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:02 compute-2 ceph-mon[77138]: pgmap v2541: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 616 op/s
Nov 29 08:23:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:23:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:23:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2081328841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:02 compute-2 nova_compute[232428]: 2025-11-29 08:23:02.677 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 08:23:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3916305998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:03.327 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:03 compute-2 podman[296895]: 2025-11-29 08:23:03.707047816 +0000 UTC m=+0.093889320 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 08:23:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 08:23:04 compute-2 ceph-mon[77138]: pgmap v2542: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 401 op/s
Nov 29 08:23:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3067972468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:04 compute-2 nova_compute[232428]: 2025-11-29 08:23:04.945 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404569.9440718, 6211bb9c-7da9-4ff8-8b1b-b47c237cf720 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:04 compute-2 nova_compute[232428]: 2025-11-29 08:23:04.945 232432 INFO nova.compute.manager [-] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] VM Stopped (Lifecycle Event)
Nov 29 08:23:04 compute-2 nova_compute[232428]: 2025-11-29 08:23:04.966 232432 DEBUG nova.compute.manager [None req-1e961d9d-0c8c-47c7-bf37-d76e90921b6c - - - - - -] [instance: 6211bb9c-7da9-4ff8-8b1b-b47c237cf720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:04 compute-2 nova_compute[232428]: 2025-11-29 08:23:04.993 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:05 compute-2 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 29 08:23:05 compute-2 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000098.scope: Consumed 13.809s CPU time.
Nov 29 08:23:05 compute-2 systemd-machined[194747]: Machine qemu-72-instance-00000098 terminated.
Nov 29 08:23:05 compute-2 ceph-mon[77138]: osdmap e336: 3 total, 3 up, 3 in
Nov 29 08:23:05 compute-2 nova_compute[232428]: 2025-11-29 08:23:05.695 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance shutdown successfully after 13 seconds.
Nov 29 08:23:05 compute-2 nova_compute[232428]: 2025-11-29 08:23:05.703 232432 INFO nova.virt.libvirt.driver [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance destroyed successfully.
Nov 29 08:23:05 compute-2 nova_compute[232428]: 2025-11-29 08:23:05.711 232432 INFO nova.virt.libvirt.driver [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance destroyed successfully.
Nov 29 08:23:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:05.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.252 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deleting instance files /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_del
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.253 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deletion of /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_del complete
Nov 29 08:23:06 compute-2 ceph-mon[77138]: pgmap v2544: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 525 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 876 KiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.463 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.464 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating image(s)
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.498 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.535 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.576 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.583 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.671 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.672 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.673 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.674 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.711 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:06 compute-2 nova_compute[232428]: 2025-11-29 08:23:06.715 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:06.938 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.015 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:23:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4228414801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.086 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] resizing rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.192 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.193 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Ensure instance console log exists: /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.194 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.194 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.194 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.196 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.201 232432 WARNING nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.207 232432 DEBUG nova.virt.libvirt.host [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.208 232432 DEBUG nova.virt.libvirt.host [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.212 232432 DEBUG nova.virt.libvirt.host [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.212 232432 DEBUG nova.virt.libvirt.host [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.214 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.214 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.214 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.215 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.215 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.215 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.215 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.215 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.216 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.216 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.216 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.216 232432 DEBUG nova.virt.hardware [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.217 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.241 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:07 compute-2 ceph-mon[77138]: pgmap v2545: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 521 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 5.8 MiB/s wr, 288 op/s
Nov 29 08:23:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4279436585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4228414801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:07 compute-2 podman[297125]: 2025-11-29 08:23:07.654889895 +0000 UTC m=+0.058164591 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:23:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2317320003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.697 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.725 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:07 compute-2 nova_compute[232428]: 2025-11-29 08:23:07.728 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:23:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4175836486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.255 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.256 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.256 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.256 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.257 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.300 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.307 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <uuid>da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</uuid>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <name>instance-00000098</name>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerShowV257Test-server-281509839</nova:name>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:23:07</nova:creationTime>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:user uuid="0508bf8e51cf4e00992499288c702602">tempest-ServerShowV257Test-382407045-project-member</nova:user>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <nova:project uuid="42dc6c0cde654493984d9a0a65843d9a">tempest-ServerShowV257Test-382407045</nova:project>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <nova:ports/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <system>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="serial">da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="uuid">da9e5ad2-575c-4af3-a5b1-4f7ed1513aca</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </system>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <os>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </os>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <features>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </features>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk">
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config">
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:23:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/console.log" append="off"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <video>
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </video>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:23:08 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:23:08 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:23:08 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:23:08 compute-2 nova_compute[232428]: </domain>
Nov 29 08:23:08 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:23:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2317320003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/318140726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4175836486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.485 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.486 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.487 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Using config drive
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.529 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.551 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'ec2_ids' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:08.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.727 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'keypairs' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2853704429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.773 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.843 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:23:08 compute-2 nova_compute[232428]: 2025-11-29 08:23:08.844 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.005 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Creating config drive at /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.019 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3cbo2kx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.136 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.137 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4293MB free_disk=20.806087493896484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.138 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.138 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.187 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3cbo2kx" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.229 232432 DEBUG nova.storage.rbd_utils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] rbd image da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.235 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.339 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance da9e5ad2-575c-4af3-a5b1-4f7ed1513aca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.340 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.340 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:23:09 compute-2 ceph-mon[77138]: pgmap v2546: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 825 KiB/s rd, 5.3 MiB/s wr, 275 op/s
Nov 29 08:23:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3428384830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2853704429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:09 compute-2 ceph-mon[77138]: osdmap e337: 3 total, 3 up, 3 in
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.454 232432 DEBUG oslo_concurrency.processutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.455 232432 INFO nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deleting local config drive /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca/disk.config because it was imported into RBD.
Nov 29 08:23:09 compute-2 systemd-machined[194747]: New machine qemu-73-instance-00000098.
Nov 29 08:23:09 compute-2 systemd[1]: Started Virtual Machine qemu-73-instance-00000098.
Nov 29 08:23:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:23:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.957 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for da9e5ad2-575c-4af3-a5b1-4f7ed1513aca due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.957 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404589.9564636, da9e5ad2-575c-4af3-a5b1-4f7ed1513aca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.957 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] VM Resumed (Lifecycle Event)
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.960 232432 DEBUG nova.compute.manager [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.960 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.965 232432 INFO nova.virt.libvirt.driver [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance spawned successfully.
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.966 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:23:09 compute-2 nova_compute[232428]: 2025-11-29 08:23:09.996 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.016 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.056 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.065 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.072 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.073 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.073 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.074 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.075 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.076 232432 DEBUG nova.virt.libvirt.driver [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.083 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.084 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404589.9576688, da9e5ad2-575c-4af3-a5b1-4f7ed1513aca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.084 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] VM Started (Lifecycle Event)
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.127 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.131 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.150 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.156 232432 DEBUG nova.compute.manager [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.231 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3847367406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.480 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.486 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.506 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:23:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3847367406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.533 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.533 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.534 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.534 232432 DEBUG nova.objects.instance [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 08:23:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:10 compute-2 nova_compute[232428]: 2025-11-29 08:23:10.586 232432 DEBUG oslo_concurrency.lockutils [None req-59d24faa-8aba-4344-a56e-aa49a9ab61af 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 08:23:11 compute-2 ceph-mon[77138]: pgmap v2548: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 5.9 MiB/s wr, 304 op/s
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.588 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.589 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.590 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.590 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.590 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.592 232432 INFO nova.compute.manager [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Terminating instance
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.594 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.594 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquired lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.594 232432 DEBUG nova.network.neutron [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:11 compute-2 nova_compute[232428]: 2025-11-29 08:23:11.773 232432 DEBUG nova.network.neutron [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:23:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:11.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.076 232432 DEBUG nova.network.neutron [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.098 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Releasing lock "refresh_cache-da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.099 232432 DEBUG nova.compute.manager [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:23:12 compute-2 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 29 08:23:12 compute-2 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000098.scope: Consumed 2.795s CPU time.
Nov 29 08:23:12 compute-2 systemd-machined[194747]: Machine qemu-73-instance-00000098 terminated.
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.319 232432 INFO nova.virt.libvirt.driver [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance destroyed successfully.
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.320 232432 DEBUG nova.objects.instance [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lazy-loading 'resources' on Instance uuid da9e5ad2-575c-4af3-a5b1-4f7ed1513aca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:12 compute-2 ceph-mon[77138]: osdmap e338: 3 total, 3 up, 3 in
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.776 232432 INFO nova.virt.libvirt.driver [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deleting instance files /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_del
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.777 232432 INFO nova.virt.libvirt.driver [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deletion of /var/lib/nova/instances/da9e5ad2-575c-4af3-a5b1-4f7ed1513aca_del complete
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.846 232432 INFO nova.compute.manager [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.847 232432 DEBUG oslo.service.loopingcall [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.847 232432 DEBUG nova.compute.manager [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:23:12 compute-2 nova_compute[232428]: 2025-11-29 08:23:12.847 232432 DEBUG nova.network.neutron [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.032 232432 DEBUG nova.network.neutron [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.221 232432 DEBUG nova.network.neutron [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.239 232432 INFO nova.compute.manager [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Took 0.39 seconds to deallocate network for instance.
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.283 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.284 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.337 232432 DEBUG oslo_concurrency.processutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:13 compute-2 ceph-mon[77138]: pgmap v2550: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 29 08:23:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/324055808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.789 232432 DEBUG oslo_concurrency.processutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.796 232432 DEBUG nova.compute.provider_tree [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.846 232432 DEBUG nova.scheduler.client.report [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.874 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:13 compute-2 nova_compute[232428]: 2025-11-29 08:23:13.906 232432 INFO nova.scheduler.client.report [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Deleted allocations for instance da9e5ad2-575c-4af3-a5b1-4f7ed1513aca
Nov 29 08:23:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:14 compute-2 nova_compute[232428]: 2025-11-29 08:23:14.118 232432 DEBUG oslo_concurrency.lockutils [None req-a68809ed-a421-4671-8f40-6dd96e1ed0e9 0508bf8e51cf4e00992499288c702602 42dc6c0cde654493984d9a0a65843d9a - - default default] Lock "da9e5ad2-575c-4af3-a5b1-4f7ed1513aca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 08:23:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/324055808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:14 compute-2 ceph-mon[77138]: osdmap e339: 3 total, 3 up, 3 in
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.000 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.316 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.316 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.332 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.431 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.432 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.437 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.438 232432 INFO nova.compute.claims [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:23:15 compute-2 nova_compute[232428]: 2025-11-29 08:23:15.564 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:15 compute-2 ceph-mon[77138]: pgmap v2552: 305 pgs: 305 active+clean; 216 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 366 op/s
Nov 29 08:23:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2233120286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/52536175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.033 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.039 232432 DEBUG nova.compute.provider_tree [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.090 232432 DEBUG nova.scheduler.client.report [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.288 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.289 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.359 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.360 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.402 232432 INFO nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.425 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.520 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.522 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.523 232432 INFO nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating image(s)
Nov 29 08:23:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.573 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4042379002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/52536175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.624 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.664 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.668 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.707 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.769 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.770 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.770 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.771 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.799 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.802 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:16 compute-2 nova_compute[232428]: 2025-11-29 08:23:16.843 232432 DEBUG nova.policy [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a37c720b9bb4273b66cd2dce30fbf48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:23:17 compute-2 sudo[297511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:17 compute-2 sudo[297511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:17 compute-2 sudo[297511]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.185 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:17 compute-2 sudo[297536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:17 compute-2 sudo[297536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:17 compute-2 sudo[297536]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.281 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] resizing rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.417 232432 DEBUG nova.objects.instance [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.446 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.447 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Ensure instance console log exists: /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.448 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.448 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.448 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:17 compute-2 ceph-mon[77138]: pgmap v2553: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 405 op/s
Nov 29 08:23:17 compute-2 nova_compute[232428]: 2025-11-29 08:23:17.765 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Successfully created port: 654e5561-248d-48f1-9b25-da86880e3041 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:23:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.274 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Successfully updated port: 654e5561-248d-48f1-9b25-da86880e3041 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.302 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.303 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.303 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.584 232432 DEBUG nova.compute.manager [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-changed-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.585 232432 DEBUG nova.compute.manager [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing instance network info cache due to event network-changed-654e5561-248d-48f1-9b25-da86880e3041. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.585 232432 DEBUG oslo_concurrency.lockutils [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:23:19 compute-2 nova_compute[232428]: 2025-11-29 08:23:19.642 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:23:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:20 compute-2 nova_compute[232428]: 2025-11-29 08:23:20.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:20 compute-2 ceph-mon[77138]: pgmap v2554: 305 pgs: 305 active+clean; 127 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 363 KiB/s wr, 268 op/s
Nov 29 08:23:20 compute-2 ceph-mon[77138]: osdmap e340: 3 total, 3 up, 3 in
Nov 29 08:23:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:23:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2374454669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:23:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2374454669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:20 compute-2 podman[297635]: 2025-11-29 08:23:20.744889465 +0000 UTC m=+0.130863768 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:23:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2374454669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2374454669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:21 compute-2 nova_compute[232428]: 2025-11-29 08:23:21.602 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:21 compute-2 nova_compute[232428]: 2025-11-29 08:23:21.635 232432 DEBUG nova.network.neutron [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:21.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.008 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.009 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance network_info: |[{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.010 232432 DEBUG oslo_concurrency.lockutils [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.010 232432 DEBUG nova.network.neutron [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing network info cache for port 654e5561-248d-48f1-9b25-da86880e3041 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.016 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start _get_guest_xml network_info=[{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.022 232432 WARNING nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.028 232432 DEBUG nova.virt.libvirt.host [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.029 232432 DEBUG nova.virt.libvirt.host [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.036 232432 DEBUG nova.virt.libvirt.host [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.037 232432 DEBUG nova.virt.libvirt.host [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.038 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.038 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.038 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.039 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.040 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.040 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.040 232432 DEBUG nova.virt.hardware [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.043 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:22 compute-2 ceph-mon[77138]: pgmap v2556: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 293 op/s
Nov 29 08:23:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:23:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4214727057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.540 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.584 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:22 compute-2 nova_compute[232428]: 2025-11-29 08:23:22.590 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:23:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/736695069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.101 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.102 232432 DEBUG nova.virt.libvirt.vif [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:16Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.103 232432 DEBUG nova.network.os_vif_util [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.104 232432 DEBUG nova.network.os_vif_util [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.105 232432 DEBUG nova.objects.instance [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.291 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <uuid>9c6c5334-4e97-46b8-9013-cc5269d8c1c1</uuid>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <name>instance-00000099</name>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:name>tempest-ServersNegativeTestJSON-server-1933381778</nova:name>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:23:22</nova:creationTime>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:user uuid="3a37c720b9bb4273b66cd2dce30fbf48">tempest-ServersNegativeTestJSON-213437080-project-member</nova:user>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:project uuid="d9406fbc6fef486fa5b0e79549e78d00">tempest-ServersNegativeTestJSON-213437080</nova:project>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <nova:port uuid="654e5561-248d-48f1-9b25-da86880e3041">
Nov 29 08:23:23 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <system>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="serial">9c6c5334-4e97-46b8-9013-cc5269d8c1c1</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="uuid">9c6c5334-4e97-46b8-9013-cc5269d8c1c1</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </system>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <os>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </os>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <features>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </features>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk">
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </source>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config">
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </source>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:23:23 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:65:8b:2b"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <target dev="tap654e5561-24"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/console.log" append="off"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <video>
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </video>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:23:23 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:23:23 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:23:23 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:23:23 compute-2 nova_compute[232428]: </domain>
Nov 29 08:23:23 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.292 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Preparing to wait for external event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.293 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.293 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.293 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.294 232432 DEBUG nova.virt.libvirt.vif [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:16Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.294 232432 DEBUG nova.network.os_vif_util [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.295 232432 DEBUG nova.network.os_vif_util [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.295 232432 DEBUG os_vif [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.297 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.297 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.301 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap654e5561-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.301 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap654e5561-24, col_values=(('external_ids', {'iface-id': '654e5561-248d-48f1-9b25-da86880e3041', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:8b:2b', 'vm-uuid': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:23 compute-2 NetworkManager[48993]: <info>  [1764404603.3045] manager: (tap654e5561-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.314 232432 INFO os_vif [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24')
Nov 29 08:23:23 compute-2 ceph-mon[77138]: pgmap v2557: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 233 op/s
Nov 29 08:23:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4214727057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/736695069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.602 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.603 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.603 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No VIF found with MAC fa:16:3e:65:8b:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.603 232432 INFO nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Using config drive
Nov 29 08:23:23 compute-2 nova_compute[232428]: 2025-11-29 08:23:23.637 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:25.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:26 compute-2 ceph-mon[77138]: pgmap v2558: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 29 08:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1117672145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1117672145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:26.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:26 compute-2 nova_compute[232428]: 2025-11-29 08:23:26.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:27 compute-2 nova_compute[232428]: 2025-11-29 08:23:27.318 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404592.317018, da9e5ad2-575c-4af3-a5b1-4f7ed1513aca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:27 compute-2 nova_compute[232428]: 2025-11-29 08:23:27.319 232432 INFO nova.compute.manager [-] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] VM Stopped (Lifecycle Event)
Nov 29 08:23:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:28 compute-2 ceph-mon[77138]: pgmap v2559: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2664543326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2664543326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.638 232432 DEBUG nova.compute.manager [None req-75b761f5-e3ea-45a7-ace7-f35ebe401d72 - - - - - -] [instance: da9e5ad2-575c-4af3-a5b1-4f7ed1513aca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.727 232432 INFO nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating config drive at /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.739 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofog_8za execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.881 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofog_8za" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.932 232432 DEBUG nova.storage.rbd_utils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:23:28 compute-2 nova_compute[232428]: 2025-11-29 08:23:28.936 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.119 232432 DEBUG oslo_concurrency.processutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.122 232432 INFO nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deleting local config drive /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config because it was imported into RBD.
Nov 29 08:23:29 compute-2 kernel: tap654e5561-24: entered promiscuous mode
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.2048] manager: (tap654e5561-24): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 08:23:29 compute-2 ovn_controller[134375]: 2025-11-29T08:23:29Z|00711|binding|INFO|Claiming lport 654e5561-248d-48f1-9b25-da86880e3041 for this chassis.
Nov 29 08:23:29 compute-2 ovn_controller[134375]: 2025-11-29T08:23:29Z|00712|binding|INFO|654e5561-248d-48f1-9b25-da86880e3041: Claiming fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.206 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.214 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.217 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.233 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.234 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 bound to our chassis
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.236 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 08:23:29 compute-2 systemd-udevd[297798]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.252 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc62bbb-17a8-49d8-920e-b69a65b888c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.254 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap258f6232-61 in ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.256 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap258f6232-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.256 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f1226383-d23c-4c10-8fb2-cfd002b617e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.2577] device (tap654e5561-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.258 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[518170d9-a6c5-4a3e-b205-6ad398c2e1f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.2601] device (tap654e5561-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:23:29 compute-2 systemd-machined[194747]: New machine qemu-74-instance-00000099.
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.278 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f7be01ab-9694-4e0d-b9c6-17c6f0375385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 systemd[1]: Started Virtual Machine qemu-74-instance-00000099.
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 ovn_controller[134375]: 2025-11-29T08:23:29Z|00713|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 ovn-installed in OVS
Nov 29 08:23:29 compute-2 ovn_controller[134375]: 2025-11-29T08:23:29Z|00714|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 up in Southbound
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.308 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf8f9cf-9ac5-4660-9caa-d07d6ce0d6db]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.364 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7597f677-f5b7-4e15-a546-e59690b7acec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 systemd-udevd[297803]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.370 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d28f15-c9c9-46e0-9cfc-45af13e9c301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.3711] manager: (tap258f6232-60): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 08:23:29 compute-2 ceph-mon[77138]: pgmap v2560: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.9 MiB/s wr, 57 op/s
Nov 29 08:23:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2722934986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2722934986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.407 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dffe69-ad37-4f2b-b121-e3b8e642a030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.410 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf1a192-3264-4b5b-948a-16c642db7894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.4375] device (tap258f6232-60): carrier: link connected
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.448 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf79e8d-6463-4211-80be-fedf40dffb29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2d99db-3b17-4fd7-92d7-8fc1a382e589]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757570, 'reachable_time': 25369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297834, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.488 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[db80b871-b644-40b6-bd58-3f8eb24b4405]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:63e2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757570, 'tstamp': 757570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297835, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.515 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3a31e6f2-0784-4b14-b3a1-f61992e2a189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757570, 'reachable_time': 25369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297836, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.558 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e795afed-1154-44f7-bb17-e03cca91f432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.631 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2ffc26-2a92-452b-8c17-842f558640d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.632 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.632 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.633 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap258f6232-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 NetworkManager[48993]: <info>  [1764404609.6356] manager: (tap258f6232-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 08:23:29 compute-2 kernel: tap258f6232-60: entered promiscuous mode
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.637 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap258f6232-60, col_values=(('external_ids', {'iface-id': 'c87f2e10-0d06-412e-bd89-4b9ab0d16c96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.638 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 ovn_controller[134375]: 2025-11-29T08:23:29Z|00715|binding|INFO|Releasing lport c87f2e10-0d06-412e-bd89-4b9ab0d16c96 from this chassis (sb_readonly=0)
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.655 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.656 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.657 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2ec773-3973-4f1f-8aee-2c3c7f1e40ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.658 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-258f6232-6798-4075-adab-c07c4559ef67
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:29.659 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'env', 'PROCESS_TAG=haproxy-258f6232-6798-4075-adab-c07c4559ef67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/258f6232-6798-4075-adab-c07c4559ef67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.732 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404609.7320127, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:29 compute-2 nova_compute[232428]: 2025-11-29 08:23:29.733 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Started (Lifecycle Event)
Nov 29 08:23:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:29.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:30 compute-2 podman[297911]: 2025-11-29 08:23:30.011210903 +0000 UTC m=+0.025090526 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:23:30 compute-2 podman[297911]: 2025-11-29 08:23:30.429841698 +0000 UTC m=+0.443721321 container create 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:23:30 compute-2 systemd[1]: Started libpod-conmon-62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44.scope.
Nov 29 08:23:30 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:23:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96873edf09bd28b57b1406687cc21d6e45c3c3aad182c7eae888ab280a3ef515/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:30 compute-2 podman[297911]: 2025-11-29 08:23:30.54938721 +0000 UTC m=+0.563266813 container init 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:23:30 compute-2 podman[297911]: 2025-11-29 08:23:30.557759422 +0000 UTC m=+0.571639005 container start 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:30 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [NOTICE]   (297930) : New worker (297932) forked
Nov 29 08:23:30 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [NOTICE]   (297930) : Loading success.
Nov 29 08:23:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:30.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.118 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.125 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404609.7322779, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.126 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Paused (Lifecycle Event)
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.178 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.184 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.208 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:23:31 compute-2 ceph-mon[77138]: pgmap v2561: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.6 MiB/s wr, 59 op/s
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.727 232432 DEBUG nova.compute.manager [req-d1dce56b-0c17-4b58-b7c6-254c09a7a3a8 req-0543f109-2088-404e-8588-924f373e7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.727 232432 DEBUG oslo_concurrency.lockutils [req-d1dce56b-0c17-4b58-b7c6-254c09a7a3a8 req-0543f109-2088-404e-8588-924f373e7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.728 232432 DEBUG oslo_concurrency.lockutils [req-d1dce56b-0c17-4b58-b7c6-254c09a7a3a8 req-0543f109-2088-404e-8588-924f373e7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.728 232432 DEBUG oslo_concurrency.lockutils [req-d1dce56b-0c17-4b58-b7c6-254c09a7a3a8 req-0543f109-2088-404e-8588-924f373e7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.728 232432 DEBUG nova.compute.manager [req-d1dce56b-0c17-4b58-b7c6-254c09a7a3a8 req-0543f109-2088-404e-8588-924f373e7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Processing event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.729 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.733 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404611.7334917, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.733 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Resumed (Lifecycle Event)
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.735 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.738 232432 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance spawned successfully.
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.738 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.765 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.765 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.766 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.766 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.766 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.767 232432 DEBUG nova.virt.libvirt.driver [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.778 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.781 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.806 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.830 232432 INFO nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Took 15.31 seconds to spawn the instance on the hypervisor.
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.830 232432 DEBUG nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:31.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.942 232432 INFO nova.compute.manager [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Took 16.54 seconds to build instance.
Nov 29 08:23:31 compute-2 nova_compute[232428]: 2025-11-29 08:23:31.973 232432 DEBUG oslo_concurrency.lockutils [None req-7cff8169-f1b8-4547-aa90-e8a08b3ce596 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:32 compute-2 nova_compute[232428]: 2025-11-29 08:23:32.212 232432 DEBUG nova.network.neutron [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updated VIF entry in instance network info cache for port 654e5561-248d-48f1-9b25-da86880e3041. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:23:32 compute-2 nova_compute[232428]: 2025-11-29 08:23:32.213 232432 DEBUG nova.network.neutron [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:32 compute-2 nova_compute[232428]: 2025-11-29 08:23:32.238 232432 DEBUG oslo_concurrency.lockutils [req-36a6854d-88b0-48a2-8101-a024e5029e93 req-e4918f28-7725-4327-8eb6-6bcf9eb2ded7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:23:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:33 compute-2 nova_compute[232428]: 2025-11-29 08:23:33.307 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:33 compute-2 ceph-mon[77138]: pgmap v2562: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 40 op/s
Nov 29 08:23:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.553 232432 DEBUG nova.compute.manager [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.554 232432 DEBUG oslo_concurrency.lockutils [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.554 232432 DEBUG oslo_concurrency.lockutils [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.554 232432 DEBUG oslo_concurrency.lockutils [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.554 232432 DEBUG nova.compute.manager [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:23:34 compute-2 nova_compute[232428]: 2025-11-29 08:23:34.555 232432 WARNING nova.compute.manager [req-22cf9c0e-f6ac-43d4-b118-fa1524922339 req-feb913d4-748e-43ad-ac43-8721df4c94f0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state active and task_state None.
Nov 29 08:23:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:34.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:34 compute-2 podman[297943]: 2025-11-29 08:23:34.665067474 +0000 UTC m=+0.066495042 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:23:35 compute-2 ceph-mon[77138]: pgmap v2563: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 13 KiB/s wr, 65 op/s
Nov 29 08:23:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:35.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:36 compute-2 nova_compute[232428]: 2025-11-29 08:23:36.608 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:37 compute-2 sudo[297964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:37 compute-2 sudo[297964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:37 compute-2 sudo[297964]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:37 compute-2 sudo[297989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:37 compute-2 sudo[297989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:37 compute-2 sudo[297989]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:37.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:38 compute-2 ceph-mon[77138]: pgmap v2564: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Nov 29 08:23:38 compute-2 nova_compute[232428]: 2025-11-29 08:23:38.308 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:38 compute-2 podman[298015]: 2025-11-29 08:23:38.678672974 +0000 UTC m=+0.073159442 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:23:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:39.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:40 compute-2 ceph-mon[77138]: pgmap v2565: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 08:23:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:40.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:41 compute-2 nova_compute[232428]: 2025-11-29 08:23:41.615 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:41 compute-2 ceph-mon[77138]: pgmap v2566: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Nov 29 08:23:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:41.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:42.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:43 compute-2 nova_compute[232428]: 2025-11-29 08:23:43.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:43.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:44 compute-2 ceph-mon[77138]: pgmap v2567: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 73 op/s
Nov 29 08:23:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:45.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:46 compute-2 ceph-mon[77138]: pgmap v2568: 305 pgs: 305 active+clean; 177 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 586 KiB/s wr, 80 op/s
Nov 29 08:23:46 compute-2 nova_compute[232428]: 2025-11-29 08:23:46.618 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:47 compute-2 ceph-mon[77138]: pgmap v2569: 305 pgs: 305 active+clean; 179 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 61 op/s
Nov 29 08:23:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:47.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:48 compute-2 nova_compute[232428]: 2025-11-29 08:23:48.316 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:48 compute-2 ovn_controller[134375]: 2025-11-29T08:23:48Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 08:23:48 compute-2 ovn_controller[134375]: 2025-11-29T08:23:48Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 08:23:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:48.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:23:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2712794285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:49.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:49 compute-2 ceph-mon[77138]: pgmap v2570: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.4 MiB/s wr, 21 op/s
Nov 29 08:23:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:51 compute-2 nova_compute[232428]: 2025-11-29 08:23:51.536 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:51 compute-2 nova_compute[232428]: 2025-11-29 08:23:51.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:51 compute-2 podman[298039]: 2025-11-29 08:23:51.689521136 +0000 UTC m=+0.087688706 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 08:23:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:52 compute-2 ceph-mon[77138]: pgmap v2571: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Nov 29 08:23:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:53 compute-2 nova_compute[232428]: 2025-11-29 08:23:53.318 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:53 compute-2 ceph-mon[77138]: pgmap v2572: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Nov 29 08:23:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:53.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:56 compute-2 nova_compute[232428]: 2025-11-29 08:23:56.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:56 compute-2 nova_compute[232428]: 2025-11-29 08:23:56.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:56 compute-2 nova_compute[232428]: 2025-11-29 08:23:56.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:23:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:23:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:56 compute-2 ceph-mon[77138]: pgmap v2573: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 08:23:57 compute-2 sudo[298069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:57 compute-2 sudo[298069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:57 compute-2 sudo[298069]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:57 compute-2 sudo[298094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:57 compute-2 sudo[298094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:57 compute-2 sudo[298094]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:58 compute-2 nova_compute[232428]: 2025-11-29 08:23:58.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:58.211 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:23:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:23:58.214 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:23:58 compute-2 nova_compute[232428]: 2025-11-29 08:23:58.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:58 compute-2 ceph-mon[77138]: pgmap v2574: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 3.3 MiB/s wr, 82 op/s
Nov 29 08:23:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3860970169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2730004005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:23:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:23:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:23:59 compute-2 ceph-mon[77138]: pgmap v2575: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Nov 29 08:23:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:23:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:23:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:59.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:00 compute-2 nova_compute[232428]: 2025-11-29 08:24:00.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:01 compute-2 nova_compute[232428]: 2025-11-29 08:24:01.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:01 compute-2 nova_compute[232428]: 2025-11-29 08:24:01.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:24:01 compute-2 nova_compute[232428]: 2025-11-29 08:24:01.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:24:01 compute-2 nova_compute[232428]: 2025-11-29 08:24:01.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:01 compute-2 ceph-mon[77138]: pgmap v2576: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 75 op/s
Nov 29 08:24:01 compute-2 sudo[298123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:01 compute-2 sudo[298123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:01 compute-2 sudo[298123]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:01 compute-2 sudo[298148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:01 compute-2 sudo[298148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:01 compute-2 sudo[298148]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 08:24:01 compute-2 sudo[298173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:01 compute-2 sudo[298173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:01 compute-2 sudo[298173]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:01 compute-2 sudo[298198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:24:01 compute-2 sudo[298198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:02 compute-2 nova_compute[232428]: 2025-11-29 08:24:02.124 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:24:02 compute-2 nova_compute[232428]: 2025-11-29 08:24:02.125 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:24:02 compute-2 nova_compute[232428]: 2025-11-29 08:24:02.125 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:24:02 compute-2 nova_compute[232428]: 2025-11-29 08:24:02.125 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:24:02 compute-2 podman[298297]: 2025-11-29 08:24:02.618471682 +0000 UTC m=+0.100931781 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:24:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:02 compute-2 podman[298297]: 2025-11-29 08:24:02.75354172 +0000 UTC m=+0.236001849 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 08:24:02 compute-2 ceph-mon[77138]: osdmap e341: 3 total, 3 up, 3 in
Nov 29 08:24:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:03 compute-2 ceph-osd[79833]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 08:24:03 compute-2 nova_compute[232428]: 2025-11-29 08:24:03.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:03.328 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:03.328 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:03.330 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:03.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:04 compute-2 podman[298445]: 2025-11-29 08:24:04.143104127 +0000 UTC m=+0.544577137 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:24:04 compute-2 podman[298445]: 2025-11-29 08:24:04.160652827 +0000 UTC m=+0.562125777 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:24:04 compute-2 ceph-mon[77138]: pgmap v2578: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 43 op/s
Nov 29 08:24:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1531296144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:04 compute-2 podman[298512]: 2025-11-29 08:24:04.488308873 +0000 UTC m=+0.079226551 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, name=keepalived)
Nov 29 08:24:04 compute-2 podman[298512]: 2025-11-29 08:24:04.500693191 +0000 UTC m=+0.091610819 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 08:24:04 compute-2 sudo[298198]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:24:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:05 compute-2 sudo[298545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:05 compute-2 sudo[298545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:05 compute-2 sudo[298545]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:05 compute-2 sudo[298576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:05 compute-2 podman[298569]: 2025-11-29 08:24:05.117143248 +0000 UTC m=+0.085299191 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:24:05 compute-2 sudo[298576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:05 compute-2 sudo[298576]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:05 compute-2 sudo[298614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:05 compute-2 sudo[298614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:05 compute-2 sudo[298614]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:05.216 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:05 compute-2 sudo[298639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:24:05 compute-2 sudo[298639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:05 compute-2 nova_compute[232428]: 2025-11-29 08:24:05.558 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:24:05 compute-2 nova_compute[232428]: 2025-11-29 08:24:05.647 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:24:05 compute-2 nova_compute[232428]: 2025-11-29 08:24:05.648 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:24:05 compute-2 nova_compute[232428]: 2025-11-29 08:24:05.649 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:05 compute-2 sudo[298639]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:05 compute-2 ceph-mon[77138]: pgmap v2579: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 31 KiB/s wr, 77 op/s
Nov 29 08:24:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1007952135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 08:24:06 compute-2 nova_compute[232428]: 2025-11-29 08:24:06.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:24:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:06 compute-2 ceph-mon[77138]: osdmap e342: 3 total, 3 up, 3 in
Nov 29 08:24:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:08 compute-2 nova_compute[232428]: 2025-11-29 08:24:08.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:08 compute-2 ceph-mon[77138]: pgmap v2581: 305 pgs: 305 active+clean; 217 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 22 KiB/s wr, 155 op/s
Nov 29 08:24:08 compute-2 nova_compute[232428]: 2025-11-29 08:24:08.324 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:08.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.233 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.234 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.235 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2651255764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:09 compute-2 podman[298717]: 2025-11-29 08:24:09.689208378 +0000 UTC m=+0.078766506 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:24:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:24:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/46637385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:09 compute-2 nova_compute[232428]: 2025-11-29 08:24:09.745 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:09 compute-2 sshd-session[298727]: Invalid user sol from 45.148.10.240 port 44840
Nov 29 08:24:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:10 compute-2 sshd-session[298727]: Connection closed by invalid user sol 45.148.10.240 port 44840 [preauth]
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.096 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.096 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.278 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.279 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4156MB free_disk=20.942535400390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.279 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.280 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:24:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753756326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:24:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753756326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.433 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.434 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.434 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:24:10 compute-2 ceph-mon[77138]: pgmap v2582: 305 pgs: 305 active+clean; 203 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 KiB/s wr, 156 op/s
Nov 29 08:24:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1975245960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/46637385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1753756326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1753756326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.499 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.543 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.544 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.569 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.603 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:24:10 compute-2 nova_compute[232428]: 2025-11-29 08:24:10.657 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:24:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2560569550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.087 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.097 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.119 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.152 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.153 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2023723965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:11 compute-2 ceph-mon[77138]: pgmap v2583: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 KiB/s wr, 176 op/s
Nov 29 08:24:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2088208941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2560569550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:11 compute-2 nova_compute[232428]: 2025-11-29 08:24:11.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:13 compute-2 nova_compute[232428]: 2025-11-29 08:24:13.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:13 compute-2 sudo[298764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:13 compute-2 sudo[298764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:13 compute-2 sudo[298764]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:13 compute-2 sudo[298789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:24:13 compute-2 sudo[298789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:13 compute-2 sudo[298789]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:14 compute-2 ceph-mon[77138]: pgmap v2584: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.9 KiB/s wr, 154 op/s
Nov 29 08:24:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:24:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 08:24:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:14.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:15 compute-2 ceph-mon[77138]: osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:24:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4089003676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:16 compute-2 ceph-mon[77138]: pgmap v2586: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Nov 29 08:24:16 compute-2 nova_compute[232428]: 2025-11-29 08:24:16.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:17 compute-2 sudo[298817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:17 compute-2 sudo[298817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:17 compute-2 sudo[298817]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:17 compute-2 sudo[298843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:17 compute-2 sudo[298843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:17 compute-2 sudo[298843]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.549 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.550 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.567 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.660 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.662 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.668 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.669 232432 INFO nova.compute.claims [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:24:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:18 compute-2 nova_compute[232428]: 2025-11-29 08:24:18.884 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:24:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/612788630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.425 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.434 232432 DEBUG nova.compute.provider_tree [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.501 232432 DEBUG nova.scheduler.client.report [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.531 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.533 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.598 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.599 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.626 232432 INFO nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.652 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:24:19 compute-2 nova_compute[232428]: 2025-11-29 08:24:19.768 232432 INFO nova.virt.block_device [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Booting with volume a9f0bfef-3469-4919-82f4-bea2fb541eb3 at /dev/vda
Nov 29 08:24:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.447 232432 DEBUG os_brick.utils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.451 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.474 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.475 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd16e37-2672-4cc1-8822-e9c8e649abc0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.477 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.491 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.492 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[92a9115f-256f-41fa-bf61-0f12e9029db1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.494 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.510 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.510 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[84b09c98-4a1d-4dbd-8c1a-caaee8945e8e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.512 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[939e50fb-5772-4098-8f7b-aa1263322123]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.513 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.548 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.552 232432 DEBUG os_brick.initiator.connectors.lightos [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.553 232432 DEBUG os_brick.initiator.connectors.lightos [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.553 232432 DEBUG os_brick.initiator.connectors.lightos [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.554 232432 DEBUG os_brick.utils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (105ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:24:20 compute-2 nova_compute[232428]: 2025-11-29 08:24:20.554 232432 DEBUG nova.virt.block_device [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating existing volume attachment record: ab08dc2a-c6fd-45ce-9480-8e15d0112313 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:24:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:24:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:24:21 compute-2 nova_compute[232428]: 2025-11-29 08:24:21.009 232432 DEBUG nova.policy [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:24:21 compute-2 ceph-mon[77138]: pgmap v2587: 305 pgs: 305 active+clean; 292 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Nov 29 08:24:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:24:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670256014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:21 compute-2 nova_compute[232428]: 2025-11-29 08:24:21.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.150 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.153 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.154 232432 INFO nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Creating image(s)
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.155 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.156 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Ensure instance console log exists: /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.156 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.157 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.158 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:22 compute-2 ceph-mon[77138]: pgmap v2588: 305 pgs: 305 active+clean; 294 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.4 MiB/s wr, 110 op/s
Nov 29 08:24:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/612788630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2420874830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/335840357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:22 compute-2 ceph-mon[77138]: pgmap v2589: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 6.4 MiB/s wr, 115 op/s
Nov 29 08:24:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/670256014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:22 compute-2 nova_compute[232428]: 2025-11-29 08:24:22.339 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Successfully created port: eff55416-acbb-4845-9fd5-369e04da8afd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:24:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:22 compute-2 podman[298899]: 2025-11-29 08:24:22.804052214 +0000 UTC m=+0.188577574 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.329 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1060176328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1509376654' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.621 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Successfully updated port: eff55416-acbb-4845-9fd5-369e04da8afd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.644 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.645 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.645 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.760 232432 DEBUG nova.compute.manager [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Received event network-changed-eff55416-acbb-4845-9fd5-369e04da8afd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.761 232432 DEBUG nova.compute.manager [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Refreshing instance network info cache due to event network-changed-eff55416-acbb-4845-9fd5-369e04da8afd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.761 232432 DEBUG oslo_concurrency.lockutils [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:24:23 compute-2 nova_compute[232428]: 2025-11-29 08:24:23.967 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:24:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.165 232432 DEBUG nova.network.neutron [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.189 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.189 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Instance network_info: |[{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.190 232432 DEBUG oslo_concurrency.lockutils [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.190 232432 DEBUG nova.network.neutron [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Refreshing network info cache for port eff55416-acbb-4845-9fd5-369e04da8afd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.195 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Start _get_guest_xml network_info=[{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a9f0bfef-3469-4919-82f4-bea2fb541eb3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a9f0bfef-3469-4919-82f4-bea2fb541eb3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5d2af1c0-e1ed-48f9-beda-42cc37212de7', 'attached_at': '', 'detached_at': '', 'volume_id': 'a9f0bfef-3469-4919-82f4-bea2fb541eb3', 'serial': 'a9f0bfef-3469-4919-82f4-bea2fb541eb3', 'multiattach': True}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'ab08dc2a-c6fd-45ce-9480-8e15d0112313', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.201 232432 WARNING nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.207 232432 DEBUG nova.virt.libvirt.host [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.208 232432 DEBUG nova.virt.libvirt.host [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.215 232432 DEBUG nova.virt.libvirt.host [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.215 232432 DEBUG nova.virt.libvirt.host [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.217 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.218 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.218 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.219 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.219 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.219 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.220 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.220 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.220 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.221 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.221 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.221 232432 DEBUG nova.virt.hardware [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.260 232432 DEBUG nova.storage.rbd_utils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:24:25 compute-2 nova_compute[232428]: 2025-11-29 08:24:25.266 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:24:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3072335090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:26 compute-2 nova_compute[232428]: 2025-11-29 08:24:26.636 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:26 compute-2 nova_compute[232428]: 2025-11-29 08:24:26.663 232432 DEBUG nova.network.neutron [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updated VIF entry in instance network info cache for port eff55416-acbb-4845-9fd5-369e04da8afd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:24:26 compute-2 nova_compute[232428]: 2025-11-29 08:24:26.664 232432 DEBUG nova.network.neutron [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:24:26 compute-2 nova_compute[232428]: 2025-11-29 08:24:26.681 232432 DEBUG oslo_concurrency.lockutils [req-a9800a28-a1d1-4f82-9d29-ad3f15fa973d req-234724ea-d36f-4cb7-a1bb-45eb33599ec5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:24:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:26.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:24:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116305221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:24:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116305221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:24:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:28.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:28 compute-2 nova_compute[232428]: 2025-11-29 08:24:28.332 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:28.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:30.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:30 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 08:24:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:30.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:31 compute-2 nova_compute[232428]: 2025-11-29 08:24:31.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:24:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:32.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:33 compute-2 nova_compute[232428]: 2025-11-29 08:24:33.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).paxos(paxos updating c 4519..5190) lease_timeout -- calling new election
Nov 29 08:24:33 compute-2 ceph-mon[77138]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 08:24:33 compute-2 ceph-mon[77138]: paxos.1).electionLogic(52) init, last seen epoch 52
Nov 29 08:24:33 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 08:24:33 compute-2 ceph-mon[77138]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 08:24:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:34.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:34 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 08:24:34 compute-2 ceph-mon[77138]: pgmap v2590: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 6.4 MiB/s wr, 115 op/s
Nov 29 08:24:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.359 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.412 232432 DEBUG nova.virt.libvirt.vif [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1682933774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1682933774',id=157,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-p7c8uwd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:19Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=5d2af1c0-e1ed-48f9-beda-42cc37212de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.413 232432 DEBUG nova.network.os_vif_util [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.414 232432 DEBUG nova.network.os_vif_util [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.416 232432 DEBUG nova.objects.instance [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d2af1c0-e1ed-48f9-beda-42cc37212de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.450 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <uuid>5d2af1c0-e1ed-48f9-beda-42cc37212de7</uuid>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <name>instance-0000009d</name>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-1682933774</nova:name>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:24:25</nova:creationTime>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <nova:port uuid="eff55416-acbb-4845-9fd5-369e04da8afd">
Nov 29 08:24:34 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <system>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="serial">5d2af1c0-e1ed-48f9-beda-42cc37212de7</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="uuid">5d2af1c0-e1ed-48f9-beda-42cc37212de7</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </system>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <os>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </os>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <features>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </features>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config">
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-a9f0bfef-3469-4919-82f4-bea2fb541eb3">
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </source>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:24:34 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <serial>a9f0bfef-3469-4919-82f4-bea2fb541eb3</serial>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <shareable/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:ae:f4:28"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <target dev="tapeff55416-ac"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/console.log" append="off"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <video>
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </video>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:24:34 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:24:34 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:24:34 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:24:34 compute-2 nova_compute[232428]: </domain>
Nov 29 08:24:34 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.450 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Preparing to wait for external event network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.451 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.452 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.452 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.453 232432 DEBUG nova.virt.libvirt.vif [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1682933774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1682933774',id=157,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-p7c8uwd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:19Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=5d2af1c0-e1ed-48f9-beda-42cc37212de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.453 232432 DEBUG nova.network.os_vif_util [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.454 232432 DEBUG nova.network.os_vif_util [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.454 232432 DEBUG os_vif [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.455 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.456 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.456 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.460 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.461 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeff55416-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.461 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeff55416-ac, col_values=(('external_ids', {'iface-id': 'eff55416-acbb-4845-9fd5-369e04da8afd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:f4:28', 'vm-uuid': '5d2af1c0-e1ed-48f9-beda-42cc37212de7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:34 compute-2 NetworkManager[48993]: <info>  [1764404674.4664] manager: (tapeff55416-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.480 232432 INFO os_vif [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac')
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.554 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.554 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.554 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:ae:f4:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.555 232432 INFO nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Using config drive
Nov 29 08:24:34 compute-2 nova_compute[232428]: 2025-11-29 08:24:34.584 232432 DEBUG nova.storage.rbd_utils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:24:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:35 compute-2 nova_compute[232428]: 2025-11-29 08:24:35.118 232432 INFO nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Creating config drive at /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config
Nov 29 08:24:35 compute-2 nova_compute[232428]: 2025-11-29 08:24:35.130 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkp8jt7t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:35 compute-2 nova_compute[232428]: 2025-11-29 08:24:35.273 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkp8jt7t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:35 compute-2 nova_compute[232428]: 2025-11-29 08:24:35.321 232432 DEBUG nova.storage.rbd_utils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:24:35 compute-2 nova_compute[232428]: 2025-11-29 08:24:35.326 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config 5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:35 compute-2 ceph-mon[77138]: pgmap v2591: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 776 KiB/s rd, 5.2 MiB/s wr, 115 op/s
Nov 29 08:24:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3072335090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:35 compute-2 ceph-mon[77138]: pgmap v2592: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 29 08:24:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/116305221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/116305221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:35 compute-2 ceph-mon[77138]: pgmap v2593: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 29 08:24:35 compute-2 ceph-mon[77138]: pgmap v2594: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 79 op/s
Nov 29 08:24:35 compute-2 ceph-mon[77138]: pgmap v2595: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 52 op/s
Nov 29 08:24:35 compute-2 ceph-mon[77138]: mon.compute-2 calling monitor election
Nov 29 08:24:35 compute-2 ceph-mon[77138]: mon.compute-0 calling monitor election
Nov 29 08:24:35 compute-2 ceph-mon[77138]: mon.compute-1 calling monitor election
Nov 29 08:24:35 compute-2 ceph-mon[77138]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 08:24:35 compute-2 ceph-mon[77138]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 08:24:35 compute-2 ceph-mon[77138]: fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 08:24:35 compute-2 ceph-mon[77138]: osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:24:35 compute-2 ceph-mon[77138]: mgrmap e10: compute-0.rotard(active, since 75m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 08:24:35 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:24:35 compute-2 podman[299028]: 2025-11-29 08:24:35.65941107 +0000 UTC m=+0.061798335 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 08:24:35 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 08:24:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:36.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:36 compute-2 ceph-mon[77138]: pgmap v2596: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.640 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.708 232432 DEBUG oslo_concurrency.processutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config 5d2af1c0-e1ed-48f9-beda-42cc37212de7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.709 232432 INFO nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Deleting local config drive /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7/disk.config because it was imported into RBD.
Nov 29 08:24:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:24:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:36.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:36 compute-2 kernel: tapeff55416-ac: entered promiscuous mode
Nov 29 08:24:36 compute-2 NetworkManager[48993]: <info>  [1764404676.7738] manager: (tapeff55416-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:36 compute-2 ovn_controller[134375]: 2025-11-29T08:24:36Z|00716|binding|INFO|Claiming lport eff55416-acbb-4845-9fd5-369e04da8afd for this chassis.
Nov 29 08:24:36 compute-2 ovn_controller[134375]: 2025-11-29T08:24:36Z|00717|binding|INFO|eff55416-acbb-4845-9fd5-369e04da8afd: Claiming fa:16:3e:ae:f4:28 10.100.0.14
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.787 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:f4:28 10.100.0.14'], port_security=['fa:16:3e:ae:f4:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5d2af1c0-e1ed-48f9-beda-42cc37212de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=eff55416-acbb-4845-9fd5-369e04da8afd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.788 143801 INFO neutron.agent.ovn.metadata.agent [-] Port eff55416-acbb-4845-9fd5-369e04da8afd in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.790 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.806 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2334d1b5-6381-460c-b706-1e1ff814ed29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.807 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabbc8daa-d1 in ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.810 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabbc8daa-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.810 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e46921-8c8a-481e-a9b7-e4a50dc7ae92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[81f36267-d79a-4a41-ab2a-53bf0eb3f812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 systemd-machined[194747]: New machine qemu-75-instance-0000009d.
Nov 29 08:24:36 compute-2 systemd-udevd[299065]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.826 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[5301082b-9cb8-4acd-9352-e278152b9070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 NetworkManager[48993]: <info>  [1764404676.8315] device (tapeff55416-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:24:36 compute-2 NetworkManager[48993]: <info>  [1764404676.8325] device (tapeff55416-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.843 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[677acd5a-c4eb-4694-b94c-bc5055ec18d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 systemd[1]: Started Virtual Machine qemu-75-instance-0000009d.
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.847 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:36 compute-2 ovn_controller[134375]: 2025-11-29T08:24:36Z|00718|binding|INFO|Setting lport eff55416-acbb-4845-9fd5-369e04da8afd ovn-installed in OVS
Nov 29 08:24:36 compute-2 ovn_controller[134375]: 2025-11-29T08:24:36Z|00719|binding|INFO|Setting lport eff55416-acbb-4845-9fd5-369e04da8afd up in Southbound
Nov 29 08:24:36 compute-2 nova_compute[232428]: 2025-11-29 08:24:36.852 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.874 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[af0bab14-1786-4a84-9c10-9840eff3e9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.880 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[03e90004-3a95-47ee-89aa-fe6d997ddfa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 NetworkManager[48993]: <info>  [1764404676.8812] manager: (tapabbc8daa-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.912 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[435b5126-8982-4f86-b3bd-9ac5892042fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.915 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2273f6-729c-4725-bdd2-af1a3c668d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 NetworkManager[48993]: <info>  [1764404676.9390] device (tapabbc8daa-d0): carrier: link connected
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.944 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6544ad-8916-4c89-9aca-bfffa64acf5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.966 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c94f6d3-a3df-416a-9700-f129615ab8de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764320, 'reachable_time': 38487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299097, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:36.982 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1156c2d-55e5-4d95-a186-7a2004a1639d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:892d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764320, 'tstamp': 764320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299098, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.001 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9cc63a-87aa-47c0-8c52-341ff0a8194b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764320, 'reachable_time': 38487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299099, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.034 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae396a80-029c-4417-8bc4-beaab8516872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.103 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bffc04cf-cb55-4e19-ba15-69840b2f0403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.104 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.104 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.105 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:37 compute-2 kernel: tapabbc8daa-d0: entered promiscuous mode
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.106 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:37 compute-2 NetworkManager[48993]: <info>  [1764404677.1070] manager: (tapabbc8daa-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.111 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.112 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:37 compute-2 ovn_controller[134375]: 2025-11-29T08:24:37Z|00720|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.129 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.131 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.132 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa0965d-f3c7-4f62-95a1-9d92edb6b5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.133 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.134 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'env', 'PROCESS_TAG=haproxy-abbc8daa-d665-4e2f-bf74-9e57db481441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abbc8daa-d665-4e2f-bf74-9e57db481441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:24:37 compute-2 podman[299167]: 2025-11-29 08:24:37.492624996 +0000 UTC m=+0.052067791 container create 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:24:37 compute-2 systemd[1]: Started libpod-conmon-539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc.scope.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.540 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404677.5402484, 5d2af1c0-e1ed-48f9-beda-42cc37212de7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.541 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] VM Started (Lifecycle Event)
Nov 29 08:24:37 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:24:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ed13964f2f846764fcac123f714b2aa91c259c5ce23c4beb5b019a38e8ccbd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:37 compute-2 podman[299167]: 2025-11-29 08:24:37.464129084 +0000 UTC m=+0.023571899 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.562 232432 DEBUG nova.compute.manager [req-3ccddcc0-8295-4f28-b8f3-990abe243a25 req-ee92ecf0-2596-4353-a537-427e82a489b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Received event network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.562 232432 DEBUG oslo_concurrency.lockutils [req-3ccddcc0-8295-4f28-b8f3-990abe243a25 req-ee92ecf0-2596-4353-a537-427e82a489b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.563 232432 DEBUG oslo_concurrency.lockutils [req-3ccddcc0-8295-4f28-b8f3-990abe243a25 req-ee92ecf0-2596-4353-a537-427e82a489b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.563 232432 DEBUG oslo_concurrency.lockutils [req-3ccddcc0-8295-4f28-b8f3-990abe243a25 req-ee92ecf0-2596-4353-a537-427e82a489b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.563 232432 DEBUG nova.compute.manager [req-3ccddcc0-8295-4f28-b8f3-990abe243a25 req-ee92ecf0-2596-4353-a537-427e82a489b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Processing event network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.564 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.569 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:24:37 compute-2 podman[299167]: 2025-11-29 08:24:37.571723472 +0000 UTC m=+0.131166277 container init 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.574 232432 INFO nova.virt.libvirt.driver [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Instance spawned successfully.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.574 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:24:37 compute-2 podman[299167]: 2025-11-29 08:24:37.580412674 +0000 UTC m=+0.139855469 container start 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.602 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:24:37 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [NOTICE]   (299192) : New worker (299194) forked
Nov 29 08:24:37 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [NOTICE]   (299192) : Loading success.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.607 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.610 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.611 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.611 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.611 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.612 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.612 232432 DEBUG nova.virt.libvirt.driver [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:24:37 compute-2 ceph-mon[77138]: pgmap v2597: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 KiB/s wr, 51 op/s
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.658 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.659 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404677.5404077, 5d2af1c0-e1ed-48f9-beda-42cc37212de7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.659 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] VM Paused (Lifecycle Event)
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.696 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.700 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404677.5684638, 5d2af1c0-e1ed-48f9-beda-42cc37212de7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.700 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] VM Resumed (Lifecycle Event)
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.721 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.725 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.726 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:24:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:37.727 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.755 232432 INFO nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Took 15.60 seconds to spawn the instance on the hypervisor.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.755 232432 DEBUG nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.770 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.846 232432 INFO nova.compute.manager [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Took 19.23 seconds to build instance.
Nov 29 08:24:37 compute-2 nova_compute[232428]: 2025-11-29 08:24:37.866 232432 DEBUG oslo_concurrency.lockutils [None req-fab35dfa-22a7-4f2e-a4d5-4b57cef63180 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:37 compute-2 sudo[299204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:37 compute-2 sudo[299204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:37 compute-2 sudo[299204]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:37 compute-2 sudo[299229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:37 compute-2 sudo[299229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:37 compute-2 sudo[299229]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.464 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.664 232432 DEBUG nova.compute.manager [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Received event network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.665 232432 DEBUG oslo_concurrency.lockutils [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.665 232432 DEBUG oslo_concurrency.lockutils [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.665 232432 DEBUG oslo_concurrency.lockutils [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.665 232432 DEBUG nova.compute.manager [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] No waiting events found dispatching network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:24:39 compute-2 nova_compute[232428]: 2025-11-29 08:24:39.666 232432 WARNING nova.compute.manager [req-31aef30a-7c97-4142-84de-83acd13a4cea req-659f05cf-ccc4-400a-9799-245c43346d9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Received unexpected event network-vif-plugged-eff55416-acbb-4845-9fd5-369e04da8afd for instance with vm_state active and task_state None.
Nov 29 08:24:39 compute-2 ceph-mon[77138]: pgmap v2598: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 98 op/s
Nov 29 08:24:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:40 compute-2 podman[299255]: 2025-11-29 08:24:40.659908593 +0000 UTC m=+0.064398087 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:24:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:40.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:41 compute-2 nova_compute[232428]: 2025-11-29 08:24:41.642 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:24:41.728 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:24:41 compute-2 ceph-mon[77138]: pgmap v2599: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Nov 29 08:24:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3356872865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3356872865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:42.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:43 compute-2 ceph-mon[77138]: pgmap v2600: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Nov 29 08:24:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3743860223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3743860223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:24:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868102001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:24:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868102001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:44 compute-2 nova_compute[232428]: 2025-11-29 08:24:44.469 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:44.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/868102001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/868102001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:45 compute-2 ceph-mon[77138]: pgmap v2601: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Nov 29 08:24:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:46.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:46 compute-2 nova_compute[232428]: 2025-11-29 08:24:46.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:47 compute-2 ceph-mon[77138]: pgmap v2602: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Nov 29 08:24:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1357165522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:48.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.698649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688698764, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1965, "num_deletes": 264, "total_data_size": 4260573, "memory_usage": 4309672, "flush_reason": "Manual Compaction"}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688719903, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 2784471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53688, "largest_seqno": 55648, "table_properties": {"data_size": 2776241, "index_size": 4916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18307, "raw_average_key_size": 20, "raw_value_size": 2759359, "raw_average_value_size": 3146, "num_data_blocks": 213, "num_entries": 877, "num_filter_entries": 877, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404535, "oldest_key_time": 1764404535, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 21302 microseconds, and 8247 cpu microseconds.
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.719959) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 2784471 bytes OK
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.719989) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.725088) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.725112) EVENT_LOG_v1 {"time_micros": 1764404688725107, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.725133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4251642, prev total WAL file size 4251642, number of live WAL files 2.
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.726975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373633' seq:72057594037927935, type:22 .. '6C6F676D0032303136' seq:0, type:0; will stop at (end)
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(2719KB)], [102(11MB)]
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688727076, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14376969, "oldest_snapshot_seqno": -1}
Nov 29 08:24:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8682 keys, 14207311 bytes, temperature: kUnknown
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688851278, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14207311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14147140, "index_size": 37339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 225060, "raw_average_key_size": 25, "raw_value_size": 13990342, "raw_average_value_size": 1611, "num_data_blocks": 1470, "num_entries": 8682, "num_filter_entries": 8682, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.851884) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14207311 bytes
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.856147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.5 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.1 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 9231, records dropped: 549 output_compression: NoCompression
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.856210) EVENT_LOG_v1 {"time_micros": 1764404688856188, "job": 64, "event": "compaction_finished", "compaction_time_micros": 124503, "compaction_time_cpu_micros": 49202, "output_level": 6, "num_output_files": 1, "total_output_size": 14207311, "num_input_records": 9231, "num_output_records": 8682, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688857522, "job": 64, "event": "table_file_deletion", "file_number": 104}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688860066, "job": 64, "event": "table_file_deletion", "file_number": 102}
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.726864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.860168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.860174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.860175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.860177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:24:48.860178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:49 compute-2 nova_compute[232428]: 2025-11-29 08:24:49.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:49 compute-2 ceph-mon[77138]: pgmap v2603: 305 pgs: 305 active+clean; 376 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Nov 29 08:24:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:50.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:51 compute-2 nova_compute[232428]: 2025-11-29 08:24:51.649 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:51 compute-2 ovn_controller[134375]: 2025-11-29T08:24:51Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:f4:28 10.100.0.14
Nov 29 08:24:51 compute-2 ovn_controller[134375]: 2025-11-29T08:24:51Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:f4:28 10.100.0.14
Nov 29 08:24:51 compute-2 ceph-mon[77138]: pgmap v2604: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Nov 29 08:24:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:52.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:53 compute-2 nova_compute[232428]: 2025-11-29 08:24:53.153 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:53 compute-2 podman[299282]: 2025-11-29 08:24:53.742992507 +0000 UTC m=+0.134379507 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 08:24:53 compute-2 ceph-mon[77138]: pgmap v2605: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Nov 29 08:24:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/97075944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:54.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:54 compute-2 nova_compute[232428]: 2025-11-29 08:24:54.474 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:54.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/524950244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:24:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:24:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:56.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:24:56 compute-2 nova_compute[232428]: 2025-11-29 08:24:56.190 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:57 compute-2 nova_compute[232428]: 2025-11-29 08:24:57.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:57 compute-2 ceph-mon[77138]: pgmap v2606: 305 pgs: 305 active+clean; 479 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Nov 29 08:24:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:24:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:58.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:24:58 compute-2 sudo[299312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:58 compute-2 sudo[299312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:58 compute-2 sudo[299312]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:58 compute-2 nova_compute[232428]: 2025-11-29 08:24:58.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:58 compute-2 sudo[299337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:58 compute-2 sudo[299337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:58 compute-2 sudo[299337]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:24:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:24:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:24:59 compute-2 ceph-mon[77138]: pgmap v2607: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 29 08:24:59 compute-2 nova_compute[232428]: 2025-11-29 08:24:59.477 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:00.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:00 compute-2 nova_compute[232428]: 2025-11-29 08:25:00.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:00 compute-2 ceph-mon[77138]: pgmap v2608: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 681 KiB/s rd, 6.1 MiB/s wr, 163 op/s
Nov 29 08:25:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:00.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.654 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.917 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.918 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.918 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:25:01 compute-2 nova_compute[232428]: 2025-11-29 08:25:01.918 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:25:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:02 compute-2 ceph-mon[77138]: pgmap v2609: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 5.6 MiB/s wr, 140 op/s
Nov 29 08:25:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2812038550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3360976604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:02.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/246337387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:03.328 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:03.330 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:03.331 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:04.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:04 compute-2 ceph-mon[77138]: pgmap v2610: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 3.3 MiB/s wr, 120 op/s
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.479 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.671 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.698 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.698 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.698 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.699 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:04 compute-2 NetworkManager[48993]: <info>  [1764404704.7123] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 08:25:04 compute-2 NetworkManager[48993]: <info>  [1764404704.7139] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 08:25:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.837 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:04 compute-2 ovn_controller[134375]: 2025-11-29T08:25:04Z|00721|binding|INFO|Releasing lport c87f2e10-0d06-412e-bd89-4b9ab0d16c96 from this chassis (sb_readonly=0)
Nov 29 08:25:04 compute-2 ovn_controller[134375]: 2025-11-29T08:25:04Z|00722|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:25:04 compute-2 nova_compute[232428]: 2025-11-29 08:25:04.852 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:05 compute-2 nova_compute[232428]: 2025-11-29 08:25:05.044 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:06.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:06 compute-2 ceph-mon[77138]: pgmap v2611: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Nov 29 08:25:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3346172956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:06 compute-2 nova_compute[232428]: 2025-11-29 08:25:06.656 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:06 compute-2 podman[299368]: 2025-11-29 08:25:06.667224789 +0000 UTC m=+0.058412119 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:25:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:06.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3048243618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:08.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:08 compute-2 ceph-mon[77138]: pgmap v2612: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Nov 29 08:25:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:08.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:09 compute-2 nova_compute[232428]: 2025-11-29 08:25:09.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:09 compute-2 nova_compute[232428]: 2025-11-29 08:25:09.483 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:09 compute-2 ceph-mon[77138]: pgmap v2613: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 29 08:25:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1154480924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:10 compute-2 nova_compute[232428]: 2025-11-29 08:25:10.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:10 compute-2 nova_compute[232428]: 2025-11-29 08:25:10.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:25:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.236 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:25:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3431828444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:11 compute-2 ceph-mon[77138]: pgmap v2614: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.659 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:11 compute-2 podman[299409]: 2025-11-29 08:25:11.667916757 +0000 UTC m=+0.067890896 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:25:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:25:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2908682452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.708 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.851 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.852 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.855 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:25:11 compute-2 nova_compute[232428]: 2025-11-29 08:25:11.855 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.014 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.015 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3912MB free_disk=20.809829711914062GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.015 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.016 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:12.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.113 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.114 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.114 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.114 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.181 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:25:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2908682452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2934959806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:25:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1611997048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.619 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.626 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.650 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.677 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.678 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:12 compute-2 nova_compute[232428]: 2025-11-29 08:25:12.754 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:25:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:25:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:13 compute-2 sudo[299456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:13 compute-2 sudo[299456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:13 compute-2 sudo[299456]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:13 compute-2 sudo[299482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:25:13 compute-2 sudo[299482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:13 compute-2 sudo[299482]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:13 compute-2 sudo[299507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:13 compute-2 sudo[299507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:13 compute-2 sudo[299507]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:13 compute-2 sudo[299532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:25:13 compute-2 sudo[299532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:14.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:14 compute-2 sudo[299532]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:14 compute-2 nova_compute[232428]: 2025-11-29 08:25:14.486 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:14 compute-2 ceph-mon[77138]: pgmap v2615: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 29 08:25:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1611997048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:16.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:16 compute-2 nova_compute[232428]: 2025-11-29 08:25:16.662 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:25:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:18 compute-2 sudo[299590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:18 compute-2 sudo[299590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:18 compute-2 sudo[299590]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:18 compute-2 sudo[299615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:18 compute-2 sudo[299615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:18 compute-2 sudo[299615]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:18.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:18 compute-2 ceph-mon[77138]: pgmap v2616: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Nov 29 08:25:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:25:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:25:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:25:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:18 compute-2 ceph-mon[77138]: pgmap v2617: 305 pgs: 305 active+clean; 584 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Nov 29 08:25:19 compute-2 nova_compute[232428]: 2025-11-29 08:25:19.489 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:20.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:20.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:21 compute-2 nova_compute[232428]: 2025-11-29 08:25:21.665 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:22.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:22 compute-2 ceph-mon[77138]: pgmap v2618: 305 pgs: 305 active+clean; 595 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Nov 29 08:25:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:22.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:23 compute-2 ceph-mon[77138]: pgmap v2619: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 108 op/s
Nov 29 08:25:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:24.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:24 compute-2 nova_compute[232428]: 2025-11-29 08:25:24.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:24 compute-2 podman[299643]: 2025-11-29 08:25:24.80755089 +0000 UTC m=+0.177516167 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 08:25:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:25 compute-2 ceph-mon[77138]: pgmap v2620: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Nov 29 08:25:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:25:25 compute-2 sudo[299669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:25 compute-2 sudo[299669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:25 compute-2 sudo[299669]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:25 compute-2 sudo[299694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:25:25 compute-2 sudo[299694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:25 compute-2 sudo[299694]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:25 compute-2 ceph-mon[77138]: pgmap v2621: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.7 MiB/s wr, 68 op/s
Nov 29 08:25:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:25:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:26.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:26 compute-2 nova_compute[232428]: 2025-11-29 08:25:26.667 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:26.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:28 compute-2 ceph-mon[77138]: pgmap v2622: 305 pgs: 305 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.3 MiB/s wr, 58 op/s
Nov 29 08:25:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:25:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159639654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:25:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:25:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159639654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:25:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:29 compute-2 ceph-mon[77138]: pgmap v2623: 305 pgs: 305 active+clean; 606 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 115 op/s
Nov 29 08:25:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/159639654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:25:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/159639654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:25:29 compute-2 nova_compute[232428]: 2025-11-29 08:25:29.494 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.642 232432 INFO nova.compute.manager [None req-e09ba50c-3fbf-4934-bb80-e64128fb0618 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Pausing
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.643 232432 DEBUG nova.objects.instance [None req-e09ba50c-3fbf-4934-bb80-e64128fb0618 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'flavor' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.673 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404730.6729465, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.673 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Paused (Lifecycle Event)
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.675 232432 DEBUG nova.compute.manager [None req-e09ba50c-3fbf-4934-bb80-e64128fb0618 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.707 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:25:30 compute-2 nova_compute[232428]: 2025-11-29 08:25:30.711 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:25:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:31 compute-2 ceph-mon[77138]: pgmap v2624: 305 pgs: 305 active+clean; 610 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1017 KiB/s rd, 459 KiB/s wr, 102 op/s
Nov 29 08:25:31 compute-2 nova_compute[232428]: 2025-11-29 08:25:31.669 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:33 compute-2 ceph-mon[77138]: pgmap v2625: 305 pgs: 305 active+clean; 610 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 292 KiB/s wr, 96 op/s
Nov 29 08:25:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1835030847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:34 compute-2 nova_compute[232428]: 2025-11-29 08:25:34.497 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3754352276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:34.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.865 232432 INFO nova.compute.manager [None req-bbba7687-119e-48aa-b23a-d7df203809ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Unpausing
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.867 232432 DEBUG nova.objects.instance [None req-bbba7687-119e-48aa-b23a-d7df203809ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'flavor' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.935 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404735.9357948, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.936 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Resumed (Lifecycle Event)
Nov 29 08:25:35 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.939 232432 DEBUG nova.virt.libvirt.guest [None req-bbba7687-119e-48aa-b23a-d7df203809ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.939 232432 DEBUG nova.compute.manager [None req-bbba7687-119e-48aa-b23a-d7df203809ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.973 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:25:35 compute-2 nova_compute[232428]: 2025-11-29 08:25:35.979 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:25:35 compute-2 ceph-mon[77138]: pgmap v2626: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 995 KiB/s rd, 309 KiB/s wr, 98 op/s
Nov 29 08:25:36 compute-2 nova_compute[232428]: 2025-11-29 08:25:36.019 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 29 08:25:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:36.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:36 compute-2 nova_compute[232428]: 2025-11-29 08:25:36.673 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1283844332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:37 compute-2 podman[299725]: 2025-11-29 08:25:37.698348572 +0000 UTC m=+0.091215137 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:25:37 compute-2 nova_compute[232428]: 2025-11-29 08:25:37.964 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:38 compute-2 sudo[299745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:38 compute-2 sudo[299745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:38 compute-2 sudo[299745]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:38 compute-2 sudo[299770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:38 compute-2 sudo[299770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:38 compute-2 sudo[299770]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:38 compute-2 nova_compute[232428]: 2025-11-29 08:25:38.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:38.569 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:25:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:38.572 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:25:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:25:38.573 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:25:38 compute-2 ceph-mon[77138]: pgmap v2627: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 995 KiB/s rd, 215 KiB/s wr, 96 op/s
Nov 29 08:25:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4116594488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3158111379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:38.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:39 compute-2 nova_compute[232428]: 2025-11-29 08:25:39.500 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:39 compute-2 ceph-mon[77138]: pgmap v2628: 305 pgs: 305 active+clean; 633 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 977 KiB/s rd, 1.3 MiB/s wr, 94 op/s
Nov 29 08:25:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 08:25:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:41 compute-2 ceph-mon[77138]: pgmap v2629: 305 pgs: 305 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Nov 29 08:25:41 compute-2 ceph-mon[77138]: osdmap e344: 3 total, 3 up, 3 in
Nov 29 08:25:41 compute-2 nova_compute[232428]: 2025-11-29 08:25:41.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:42.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.634588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742634684, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 789, "num_deletes": 251, "total_data_size": 1468149, "memory_usage": 1485888, "flush_reason": "Manual Compaction"}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742652928, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 968521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55653, "largest_seqno": 56437, "table_properties": {"data_size": 964736, "index_size": 1565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8869, "raw_average_key_size": 19, "raw_value_size": 957031, "raw_average_value_size": 2131, "num_data_blocks": 68, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404688, "oldest_key_time": 1764404688, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 18420 microseconds, and 3931 cpu microseconds.
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.653009) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 968521 bytes OK
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.653042) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.655247) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.655285) EVENT_LOG_v1 {"time_micros": 1764404742655278, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.655306) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1463995, prev total WAL file size 1463995, number of live WAL files 2.
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.656114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(945KB)], [105(13MB)]
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742656181, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 15175832, "oldest_snapshot_seqno": -1}
Nov 29 08:25:42 compute-2 podman[299797]: 2025-11-29 08:25:42.657331695 +0000 UTC m=+0.064045186 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8613 keys, 13314398 bytes, temperature: kUnknown
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742738481, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 13314398, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13255517, "index_size": 36241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 224383, "raw_average_key_size": 26, "raw_value_size": 13100763, "raw_average_value_size": 1521, "num_data_blocks": 1417, "num_entries": 8613, "num_filter_entries": 8613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.738781) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 13314398 bytes
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.739993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.2 rd, 161.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.5 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(29.4) write-amplify(13.7) OK, records in: 9131, records dropped: 518 output_compression: NoCompression
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.740014) EVENT_LOG_v1 {"time_micros": 1764404742740004, "job": 66, "event": "compaction_finished", "compaction_time_micros": 82409, "compaction_time_cpu_micros": 34868, "output_level": 6, "num_output_files": 1, "total_output_size": 13314398, "num_input_records": 9131, "num_output_records": 8613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742740387, "job": 66, "event": "table_file_deletion", "file_number": 107}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742744358, "job": 66, "event": "table_file_deletion", "file_number": 105}
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.656015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.744453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.744462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.744464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.744466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:25:42.744468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:25:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:42.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:43 compute-2 ceph-mon[77138]: pgmap v2631: 305 pgs: 305 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 740 KiB/s rd, 2.6 MiB/s wr, 64 op/s
Nov 29 08:25:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:44.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:44 compute-2 nova_compute[232428]: 2025-11-29 08:25:44.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:44.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:45 compute-2 ceph-mon[77138]: pgmap v2632: 305 pgs: 305 active+clean; 679 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.2 MiB/s wr, 186 op/s
Nov 29 08:25:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/781650083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:25:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4196292985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:46 compute-2 nova_compute[232428]: 2025-11-29 08:25:46.677 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4196292985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:25:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:25:47 compute-2 ceph-mon[77138]: pgmap v2633: 305 pgs: 305 active+clean; 679 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Nov 29 08:25:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:48.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:49 compute-2 nova_compute[232428]: 2025-11-29 08:25:49.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:49 compute-2 ceph-mon[77138]: pgmap v2634: 305 pgs: 305 active+clean; 672 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Nov 29 08:25:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:25:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670240288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:25:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:25:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:50.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2670240288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1181668759' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:51 compute-2 nova_compute[232428]: 2025-11-29 08:25:51.681 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:25:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:25:52 compute-2 ceph-mon[77138]: pgmap v2635: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.9 MiB/s wr, 295 op/s
Nov 29 08:25:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3091909248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:25:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:52.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:53 compute-2 nova_compute[232428]: 2025-11-29 08:25:53.678 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:54 compute-2 ceph-mon[77138]: pgmap v2636: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.5 MiB/s wr, 246 op/s
Nov 29 08:25:54 compute-2 nova_compute[232428]: 2025-11-29 08:25:54.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:54.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:55 compute-2 podman[299823]: 2025-11-29 08:25:55.71235789 +0000 UTC m=+0.111440539 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:25:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:56.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:56 compute-2 ceph-mon[77138]: pgmap v2637: 305 pgs: 305 active+clean; 690 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.0 MiB/s wr, 318 op/s
Nov 29 08:25:56 compute-2 nova_compute[232428]: 2025-11-29 08:25:56.683 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:57 compute-2 ceph-mon[77138]: pgmap v2638: 305 pgs: 305 active+clean; 712 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.9 MiB/s wr, 254 op/s
Nov 29 08:25:57 compute-2 nova_compute[232428]: 2025-11-29 08:25:57.928 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:57 compute-2 nova_compute[232428]: 2025-11-29 08:25:57.928 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:57 compute-2 nova_compute[232428]: 2025-11-29 08:25:57.962 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.086 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.087 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.095 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.095 232432 INFO nova.compute.claims [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:25:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.262 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:25:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:58 compute-2 sudo[299872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:58 compute-2 sudo[299872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:58 compute-2 sudo[299872]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:25:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2748836526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.751 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.759 232432 DEBUG nova.compute.provider_tree [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:25:58 compute-2 sudo[299897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:58 compute-2 sudo[299897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:58 compute-2 sudo[299897]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.782 232432 DEBUG nova.scheduler.client.report [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.805 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.806 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.862 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.863 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.886 232432 INFO nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:25:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:25:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:25:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:58.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:25:58 compute-2 nova_compute[232428]: 2025-11-29 08:25:58.913 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.011 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.013 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.014 232432 INFO nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Creating image(s)
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.049 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.092 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.138 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.143 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.182 232432 DEBUG nova.policy [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfcf2db50da745c09bffcf32ec016854', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09cc8c3182d845f597dda064f9013941', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.220 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.221 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.222 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.223 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.257 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.262 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1118af2-2266-48e4-a246-9549c68ddaa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.575 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1118af2-2266-48e4-a246-9549c68ddaa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:25:59 compute-2 ceph-mon[77138]: pgmap v2639: 305 pgs: 305 active+clean; 712 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 286 op/s
Nov 29 08:25:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2748836526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.680 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] resizing rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.800 232432 DEBUG nova.objects.instance [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.814 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.815 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Ensure instance console log exists: /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.815 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.815 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:59 compute-2 nova_compute[232428]: 2025-11-29 08:25:59.815 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:00.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:26:01 compute-2 ceph-mon[77138]: pgmap v2640: 305 pgs: 305 active+clean; 734 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Nov 29 08:26:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/548668422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.658 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.658 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.658 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.685 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:01 compute-2 nova_compute[232428]: 2025-11-29 08:26:01.844 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Successfully created port: dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:26:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:02.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:02 compute-2 nova_compute[232428]: 2025-11-29 08:26:02.870 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:02 compute-2 nova_compute[232428]: 2025-11-29 08:26:02.870 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:02 compute-2 nova_compute[232428]: 2025-11-29 08:26:02.871 232432 INFO nova.compute.manager [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Shelving
Nov 29 08:26:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:02 compute-2 nova_compute[232428]: 2025-11-29 08:26:02.897 232432 DEBUG nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.258 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Successfully updated port: dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.302 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.303 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.303 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:03.329 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:03.330 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:03.331 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.445 232432 DEBUG nova.compute.manager [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.445 232432 DEBUG nova.compute.manager [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing instance network info cache due to event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.446 232432 DEBUG oslo_concurrency.lockutils [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:03 compute-2 ceph-mon[77138]: pgmap v2641: 305 pgs: 305 active+clean; 734 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Nov 29 08:26:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1752763440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.906 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.919 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.943 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.944 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:26:03 compute-2 nova_compute[232428]: 2025-11-29 08:26:03.944 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:04 compute-2 nova_compute[232428]: 2025-11-29 08:26:04.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:04 compute-2 nova_compute[232428]: 2025-11-29 08:26:04.519 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:04 compute-2 ceph-mon[77138]: osdmap e345: 3 total, 3 up, 3 in
Nov 29 08:26:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:05 compute-2 kernel: tap654e5561-24 (unregistering): left promiscuous mode
Nov 29 08:26:05 compute-2 NetworkManager[48993]: <info>  [1764404765.1705] device (tap654e5561-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:26:05 compute-2 ovn_controller[134375]: 2025-11-29T08:26:05Z|00723|binding|INFO|Releasing lport 654e5561-248d-48f1-9b25-da86880e3041 from this chassis (sb_readonly=0)
Nov 29 08:26:05 compute-2 ovn_controller[134375]: 2025-11-29T08:26:05Z|00724|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 down in Southbound
Nov 29 08:26:05 compute-2 ovn_controller[134375]: 2025-11-29T08:26:05Z|00725|binding|INFO|Removing iface tap654e5561-24 ovn-installed in OVS
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.187 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.204 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 29 08:26:05 compute-2 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000099.scope: Consumed 20.304s CPU time.
Nov 29 08:26:05 compute-2 systemd-machined[194747]: Machine qemu-74-instance-00000099 terminated.
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.248 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.249 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.251 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.252 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[01bff480-be80-4279-a055-d273a23be340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.253 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace which is not needed anymore
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [NOTICE]   (297930) : haproxy version is 2.8.14-c23fe91
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [NOTICE]   (297930) : path to executable is /usr/sbin/haproxy
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [WARNING]  (297930) : Exiting Master process...
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [WARNING]  (297930) : Exiting Master process...
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [ALERT]    (297930) : Current worker (297932) exited with code 143 (Terminated)
Nov 29 08:26:05 compute-2 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[297926]: [WARNING]  (297930) : All workers exited. Exiting... (0)
Nov 29 08:26:05 compute-2 systemd[1]: libpod-62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44.scope: Deactivated successfully.
Nov 29 08:26:05 compute-2 podman[300118]: 2025-11-29 08:26:05.387348138 +0000 UTC m=+0.044449902 container died 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.401 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44-userdata-shm.mount: Deactivated successfully.
Nov 29 08:26:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-96873edf09bd28b57b1406687cc21d6e45c3c3aad182c7eae888ab280a3ef515-merged.mount: Deactivated successfully.
Nov 29 08:26:05 compute-2 podman[300118]: 2025-11-29 08:26:05.433993659 +0000 UTC m=+0.091095423 container cleanup 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:26:05 compute-2 systemd[1]: libpod-conmon-62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44.scope: Deactivated successfully.
Nov 29 08:26:05 compute-2 podman[300156]: 2025-11-29 08:26:05.516223242 +0000 UTC m=+0.054223518 container remove 62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.522 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ed9bf2-69f3-47ae-bfad-976021288723]: (4, ('Sat Nov 29 08:26:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44)\n62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44\nSat Nov 29 08:26:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44)\n62c64741725b6c702b8123cccc5d053a6cf47f9c97073b88626fe226a7f52a44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.525 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8336c921-78bd-4e35-8cfe-b1e552af2c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.526 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 kernel: tap258f6232-60: left promiscuous mode
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.547 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9fec71cc-5fb6-4e95-b992-6d6787fce6fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.564 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[62ea2e23-2312-416b-8d47-f624c36e4a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.565 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2f4794-ed88-4720-bbf9-c8b89841603e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.583 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[77f318d6-01fb-42e4-ab45-bbbb9e9a68cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757562, 'reachable_time': 18744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300174, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 systemd[1]: run-netns-ovnmeta\x2d258f6232\x2d6798\x2d4075\x2dadab\x2dc07c4559ef67.mount: Deactivated successfully.
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.587 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:26:05 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:05.587 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[b263457c-5c09-47bb-a9c1-1a3f0b9c8ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.660 232432 DEBUG nova.compute.manager [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.660 232432 DEBUG oslo_concurrency.lockutils [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.660 232432 DEBUG oslo_concurrency.lockutils [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.661 232432 DEBUG oslo_concurrency.lockutils [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.661 232432 DEBUG nova.compute.manager [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.661 232432 WARNING nova.compute.manager [req-66dba1b3-28b2-454a-8666-0008f2657d4f req-a754cf89-bf80-41f9-a506-77b01bbb2717 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state active and task_state shelving.
Nov 29 08:26:05 compute-2 ceph-mon[77138]: pgmap v2643: 305 pgs: 305 active+clean; 758 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.769 232432 DEBUG nova.network.neutron [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.799 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.799 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance network_info: |[{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.799 232432 DEBUG oslo_concurrency.lockutils [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.799 232432 DEBUG nova.network.neutron [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.802 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start _get_guest_xml network_info=[{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.807 232432 WARNING nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.816 232432 DEBUG nova.virt.libvirt.host [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.817 232432 DEBUG nova.virt.libvirt.host [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.823 232432 DEBUG nova.virt.libvirt.host [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.824 232432 DEBUG nova.virt.libvirt.host [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.825 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.825 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.825 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.826 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.827 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.827 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.827 232432 DEBUG nova.virt.hardware [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.830 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.919 232432 INFO nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance shutdown successfully after 3 seconds.
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.925 232432 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance destroyed successfully.
Nov 29 08:26:05 compute-2 nova_compute[232428]: 2025-11-29 08:26:05.926 232432 DEBUG nova.objects.instance [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3304961612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.323 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.370 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.376 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.432 232432 INFO nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Beginning cold snapshot process
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3304961612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.765 232432 DEBUG nova.virt.libvirt.imagebackend [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 08:26:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3433676574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.818 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.820 232432 DEBUG nova.virt.libvirt.vif [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1775463929',display_name='tempest-ServerRescueNegativeTestJSON-server-1775463929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1775463929',id=161,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmo+eet4YBY7wXcrzDQzITBcUSszsOuTXPJOsSPetwgqxs8tnSNHiHLo4P9tBVRJry94mJeN6BGQc8NI6+0zP4qONnsq3uMb4XX3eYuPLEZknBDW+VjJB6uAaoViaI9RQ==',key_name='tempest-keypair-928415713',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-bgjax5g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c1118af2-2266-48e4-a246-9549c68ddaa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.820 232432 DEBUG nova.network.os_vif_util [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.821 232432 DEBUG nova.network.os_vif_util [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.823 232432 DEBUG nova.objects.instance [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.845 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <uuid>c1118af2-2266-48e4-a246-9549c68ddaa4</uuid>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <name>instance-000000a1</name>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1775463929</nova:name>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:26:05</nova:creationTime>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <nova:port uuid="dc933ba7-ffdf-4e89-9aae-ae19d42f4315">
Nov 29 08:26:06 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <system>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="serial">c1118af2-2266-48e4-a246-9549c68ddaa4</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="uuid">c1118af2-2266-48e4-a246-9549c68ddaa4</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </system>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <os>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </os>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <features>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </features>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/c1118af2-2266-48e4-a246-9549c68ddaa4_disk">
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config">
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:06 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:af:a8:10"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <target dev="tapdc933ba7-ff"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/console.log" append="off"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <video>
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </video>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:26:06 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:26:06 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:26:06 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:26:06 compute-2 nova_compute[232428]: </domain>
Nov 29 08:26:06 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.845 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Preparing to wait for external event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.846 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.846 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.846 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.847 232432 DEBUG nova.virt.libvirt.vif [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1775463929',display_name='tempest-ServerRescueNegativeTestJSON-server-1775463929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1775463929',id=161,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmo+eet4YBY7wXcrzDQzITBcUSszsOuTXPJOsSPetwgqxs8tnSNHiHLo4P9tBVRJry94mJeN6BGQc8NI6+0zP4qONnsq3uMb4XX3eYuPLEZknBDW+VjJB6uAaoViaI9RQ==',key_name='tempest-keypair-928415713',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-bgjax5g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c1118af2-2266-48e4-a246-9549c68ddaa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.847 232432 DEBUG nova.network.os_vif_util [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.848 232432 DEBUG nova.network.os_vif_util [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.848 232432 DEBUG os_vif [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.849 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.849 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.854 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc933ba7-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.854 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc933ba7-ff, col_values=(('external_ids', {'iface-id': 'dc933ba7-ffdf-4e89-9aae-ae19d42f4315', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:a8:10', 'vm-uuid': 'c1118af2-2266-48e4-a246-9549c68ddaa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:06 compute-2 NetworkManager[48993]: <info>  [1764404766.8569] manager: (tapdc933ba7-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.867 232432 INFO os_vif [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff')
Nov 29 08:26:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:06.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.966 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.966 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.967 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:af:a8:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:26:06 compute-2 nova_compute[232428]: 2025-11-29 08:26:06.967 232432 INFO nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Using config drive
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.005 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.160 232432 DEBUG nova.storage.rbd_utils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] creating snapshot(5326cd26a109478ebe780e53207a71aa) on rbd image(9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:26:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.763 232432 INFO nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Creating config drive at /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config
Nov 29 08:26:07 compute-2 ceph-mon[77138]: pgmap v2644: 305 pgs: 305 active+clean; 778 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 155 op/s
Nov 29 08:26:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3433676574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3668102352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.777 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ub2qadd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.816 232432 DEBUG nova.network.neutron [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updated VIF entry in instance network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.817 232432 DEBUG nova.network.neutron [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.829 232432 DEBUG nova.compute.manager [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.829 232432 DEBUG oslo_concurrency.lockutils [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.830 232432 DEBUG oslo_concurrency.lockutils [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.830 232432 DEBUG oslo_concurrency.lockutils [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.830 232432 DEBUG nova.compute.manager [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.830 232432 WARNING nova.compute.manager [req-ce351820-d627-4544-ad0c-f0e678bb1104 req-644aac0a-c59a-4743-99c7-f7e25451686a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state active and task_state shelving_image_uploading.
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.837 232432 DEBUG oslo_concurrency.lockutils [req-5b1f2b97-f010-4fb5-99ac-afa554cfb62e req-51e2d64e-329b-4138-89a5-19160971ebf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.849 232432 DEBUG nova.storage.rbd_utils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] cloning vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk@5326cd26a109478ebe780e53207a71aa to images/5945a148-7986-4fa0-8052-c380ea11f788 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.925 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ub2qadd" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.954 232432 DEBUG nova.storage.rbd_utils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.957 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:07 compute-2 nova_compute[232428]: 2025-11-29 08:26:07.998 232432 DEBUG nova.storage.rbd_utils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] flattening images/5945a148-7986-4fa0-8052-c380ea11f788 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:26:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.370 232432 DEBUG oslo_concurrency.processutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.370 232432 INFO nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Deleting local config drive /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config because it was imported into RBD.
Nov 29 08:26:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.4457] manager: (tapdc933ba7-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 08:26:08 compute-2 kernel: tapdc933ba7-ff: entered promiscuous mode
Nov 29 08:26:08 compute-2 ovn_controller[134375]: 2025-11-29T08:26:08Z|00726|binding|INFO|Claiming lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for this chassis.
Nov 29 08:26:08 compute-2 ovn_controller[134375]: 2025-11-29T08:26:08Z|00727|binding|INFO|dc933ba7-ffdf-4e89-9aae-ae19d42f4315: Claiming fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.449 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.453 232432 DEBUG nova.storage.rbd_utils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] removing snapshot(5326cd26a109478ebe780e53207a71aa) on rbd image(9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.456 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.457 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.458 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d46d19-8aed-4458-be9b-851a3b1c8c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.471 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.476 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.476 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a8b4cd-116d-4262-a7bd-4cadffc072bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.477 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb4e88e-b37d-4ca1-9b1e-c78af8d15cec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.491 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[794c07b4-6695-40b2-9e80-251824c46ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 systemd-udevd[300445]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:26:08 compute-2 ovn_controller[134375]: 2025-11-29T08:26:08Z|00728|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 ovn-installed in OVS
Nov 29 08:26:08 compute-2 ovn_controller[134375]: 2025-11-29T08:26:08Z|00729|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 up in Southbound
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.5109] device (tapdc933ba7-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.5122] device (tapdc933ba7-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:26:08 compute-2 systemd-machined[194747]: New machine qemu-76-instance-000000a1.
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.514 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1305d53c-f241-47c8-aa62-269f8ce44bdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 systemd[1]: Started Virtual Machine qemu-76-instance-000000a1.
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.547 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[03c52684-9d9a-4e6e-9b44-68db20520557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 podman[300435]: 2025-11-29 08:26:08.54840687 +0000 UTC m=+0.071613342 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.552 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc43725-f9eb-4096-8a3f-6971bd1d6a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.5532] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.601 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2908a445-200e-4da8-af88-a00062daea18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.605 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b57544cf-3c56-433d-9dbc-42b2840c0c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.6370] device (tap7008b597-80): carrier: link connected
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.647 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[78920376-c645-4524-baac-035c4df1589f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.670 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cc03fa40-1387-487d-9bc3-f048b75d5433]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773490, 'reachable_time': 32879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300494, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.693 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcad410-fb28-48f9-ae63-bc7659a08788]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773490, 'tstamp': 773490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300495, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.723 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a45f2f-572a-40e0-b618-fc8a235055b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773490, 'reachable_time': 32879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300496, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.763 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[937eef2b-e219-4a49-a1c4-b05738b840ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.847 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3680ce04-ce28-4366-af98-6a19bc39e735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.849 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.849 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.850 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 NetworkManager[48993]: <info>  [1764404768.8531] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 08:26:08 compute-2 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.896 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 ovn_controller[134375]: 2025-11-29T08:26:08Z|00730|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 08:26:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:08.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:08 compute-2 ceph-mon[77138]: osdmap e346: 3 total, 3 up, 3 in
Nov 29 08:26:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/922232462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.917 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.917 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.918 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[14124ea7-e525-415b-9791-d61d776c9c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.919 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:26:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:08.920 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:26:08 compute-2 nova_compute[232428]: 2025-11-29 08:26:08.933 232432 DEBUG nova.storage.rbd_utils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] creating snapshot(snap) on rbd image(5945a148-7986-4fa0-8052-c380ea11f788) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:26:09 compute-2 nova_compute[232428]: 2025-11-29 08:26:09.153 232432 DEBUG nova.compute.manager [req-116e52e7-71aa-4253-992c-e551bfdb1e99 req-d78081c4-2572-4d25-af3a-b7e1749bc094 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:09 compute-2 nova_compute[232428]: 2025-11-29 08:26:09.154 232432 DEBUG oslo_concurrency.lockutils [req-116e52e7-71aa-4253-992c-e551bfdb1e99 req-d78081c4-2572-4d25-af3a-b7e1749bc094 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:09 compute-2 nova_compute[232428]: 2025-11-29 08:26:09.154 232432 DEBUG oslo_concurrency.lockutils [req-116e52e7-71aa-4253-992c-e551bfdb1e99 req-d78081c4-2572-4d25-af3a-b7e1749bc094 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:09 compute-2 nova_compute[232428]: 2025-11-29 08:26:09.154 232432 DEBUG oslo_concurrency.lockutils [req-116e52e7-71aa-4253-992c-e551bfdb1e99 req-d78081c4-2572-4d25-af3a-b7e1749bc094 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:09 compute-2 nova_compute[232428]: 2025-11-29 08:26:09.154 232432 DEBUG nova.compute.manager [req-116e52e7-71aa-4253-992c-e551bfdb1e99 req-d78081c4-2572-4d25-af3a-b7e1749bc094 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Processing event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:26:09 compute-2 podman[300549]: 2025-11-29 08:26:09.291372108 +0000 UTC m=+0.022615729 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:26:09 compute-2 ceph-mon[77138]: pgmap v2646: 305 pgs: 305 active+clean; 855 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.6 MiB/s wr, 201 op/s
Nov 29 08:26:09 compute-2 ceph-mon[77138]: osdmap e347: 3 total, 3 up, 3 in
Nov 29 08:26:09 compute-2 podman[300549]: 2025-11-29 08:26:09.935588504 +0000 UTC m=+0.666832105 container create 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:26:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 08:26:09 compute-2 systemd[1]: Started libpod-conmon-65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d.scope.
Nov 29 08:26:10 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:26:10 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa929d7f34b43564eb8391097aaec4fd0fa7cf87e56932dc38c109966ea07f09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:26:10 compute-2 podman[300549]: 2025-11-29 08:26:10.051701328 +0000 UTC m=+0.782944949 container init 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:26:10 compute-2 podman[300549]: 2025-11-29 08:26:10.057731307 +0000 UTC m=+0.788974908 container start 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:26:10 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [NOTICE]   (300614) : New worker (300616) forked
Nov 29 08:26:10 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [NOTICE]   (300614) : Loading success.
Nov 29 08:26:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:10.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.122 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404770.1217895, c1118af2-2266-48e4-a246-9549c68ddaa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.123 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Started (Lifecycle Event)
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.125 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.129 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.133 232432 INFO nova.virt.libvirt.driver [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance spawned successfully.
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.134 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.152 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.158 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.161 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.162 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.162 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.162 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.163 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.163 232432 DEBUG nova.virt.libvirt.driver [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.217 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.218 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404770.1219218, c1118af2-2266-48e4-a246-9549c68ddaa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.218 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Paused (Lifecycle Event)
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.250 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.253 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404770.1287322, c1118af2-2266-48e4-a246-9549c68ddaa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.253 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Resumed (Lifecycle Event)
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.259 232432 INFO nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Took 11.25 seconds to spawn the instance on the hypervisor.
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.259 232432 DEBUG nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.289 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:10 compute-2 nova_compute[232428]: 2025-11-29 08:26:10.293 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:10.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.091 232432 INFO nova.compute.manager [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Took 13.05 seconds to build instance.
Nov 29 08:26:11 compute-2 ceph-mon[77138]: osdmap e348: 3 total, 3 up, 3 in
Nov 29 08:26:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2199665804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.157 232432 DEBUG oslo_concurrency.lockutils [None req-6ba017bd-3c3b-4c1c-bef2-3d773b03f1be dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.316 232432 DEBUG nova.compute.manager [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.316 232432 DEBUG oslo_concurrency.lockutils [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.317 232432 DEBUG oslo_concurrency.lockutils [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.317 232432 DEBUG oslo_concurrency.lockutils [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.317 232432 DEBUG nova.compute.manager [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.317 232432 WARNING nova.compute.manager [req-12d282db-d2c6-4749-aa7c-9ec2d8e1a805 req-cc21902d-ce84-4895-94fb-8ea8df9ceee8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state None.
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:11 compute-2 nova_compute[232428]: 2025-11-29 08:26:11.856 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:12.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:12 compute-2 ceph-mon[77138]: pgmap v2649: 305 pgs: 305 active+clean; 893 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 13 MiB/s wr, 337 op/s
Nov 29 08:26:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1724115301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2742457055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.228 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.228 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:26:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2243881017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.718 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.812 232432 INFO nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Snapshot image upload complete
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.813 232432 DEBUG nova.compute.manager [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:12 compute-2 podman[300650]: 2025-11-29 08:26:12.835619584 +0000 UTC m=+0.073541003 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.842 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.843 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.846 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.846 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.851 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.851 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.875 232432 INFO nova.compute.manager [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Shelve offloading
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.884 232432 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance destroyed successfully.
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.884 232432 DEBUG nova.compute.manager [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.887 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.887 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:12 compute-2 nova_compute[232428]: 2025-11-29 08:26:12.887 232432 DEBUG nova.network.neutron [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:26:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:12.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.031 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.032 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3927MB free_disk=20.652362823486328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.032 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.032 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2243881017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1175812981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.337 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.338 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.338 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance c1118af2-2266-48e4-a246-9549c68ddaa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.338 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.339 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:26:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.502 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:26:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2127369700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.947 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.953 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:26:13 compute-2 nova_compute[232428]: 2025-11-29 08:26:13.973 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:26:14 compute-2 nova_compute[232428]: 2025-11-29 08:26:14.002 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:26:14 compute-2 nova_compute[232428]: 2025-11-29 08:26:14.003 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:14 compute-2 sshd-session[300692]: Invalid user sol from 45.148.10.240 port 57918
Nov 29 08:26:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:14.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:14 compute-2 ceph-mon[77138]: pgmap v2650: 305 pgs: 305 active+clean; 893 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 12 MiB/s wr, 329 op/s
Nov 29 08:26:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2127369700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:14 compute-2 sshd-session[300692]: Connection closed by invalid user sol 45.148.10.240 port 57918 [preauth]
Nov 29 08:26:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:16.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:16 compute-2 ceph-mon[77138]: pgmap v2651: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 13 MiB/s wr, 382 op/s
Nov 29 08:26:16 compute-2 nova_compute[232428]: 2025-11-29 08:26:16.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:16 compute-2 nova_compute[232428]: 2025-11-29 08:26:16.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.165 232432 DEBUG nova.compute.manager [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.166 232432 DEBUG nova.compute.manager [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing instance network info cache due to event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.166 232432 DEBUG oslo_concurrency.lockutils [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.167 232432 DEBUG oslo_concurrency.lockutils [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.167 232432 DEBUG nova.network.neutron [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.656 232432 DEBUG nova.network.neutron [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:17 compute-2 nova_compute[232428]: 2025-11-29 08:26:17.687 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:26:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:26:18 compute-2 ceph-mon[77138]: pgmap v2652: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Nov 29 08:26:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 08:26:18 compute-2 sudo[300698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:18 compute-2 sudo[300698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:18 compute-2 sudo[300698]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:18 compute-2 sudo[300723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:18 compute-2 sudo[300723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:18 compute-2 sudo[300723]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:19 compute-2 ceph-mon[77138]: osdmap e349: 3 total, 3 up, 3 in
Nov 29 08:26:19 compute-2 ceph-mon[77138]: pgmap v2654: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 5.0 MiB/s wr, 276 op/s
Nov 29 08:26:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:26:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:26:20 compute-2 nova_compute[232428]: 2025-11-29 08:26:20.425 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404765.4244275, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:20 compute-2 nova_compute[232428]: 2025-11-29 08:26:20.427 232432 INFO nova.compute.manager [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Stopped (Lifecycle Event)
Nov 29 08:26:20 compute-2 nova_compute[232428]: 2025-11-29 08:26:20.488 232432 DEBUG nova.compute.manager [None req-c5072278-6411-48dc-81a4-875d5cc4dc1e - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:20 compute-2 nova_compute[232428]: 2025-11-29 08:26:20.492 232432 DEBUG nova.compute.manager [None req-c5072278-6411-48dc-81a4-875d5cc4dc1e - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:20 compute-2 nova_compute[232428]: 2025-11-29 08:26:20.521 232432 INFO nova.compute.manager [None req-c5072278-6411-48dc-81a4-875d5cc4dc1e - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 29 08:26:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:21 compute-2 ceph-mon[77138]: pgmap v2655: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 197 op/s
Nov 29 08:26:21 compute-2 nova_compute[232428]: 2025-11-29 08:26:21.695 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:21 compute-2 nova_compute[232428]: 2025-11-29 08:26:21.860 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.021 232432 DEBUG nova.network.neutron [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updated VIF entry in instance network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.022 232432 DEBUG nova.network.neutron [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.044 232432 DEBUG oslo_concurrency.lockutils [req-885ade73-e1ee-4bdf-8498-5c7ef5c79a9b req-563c1883-7f13-4d9d-9f10-82edf6b062bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.131 232432 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance destroyed successfully.
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.131 232432 DEBUG nova.objects.instance [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'resources' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.157 232432 DEBUG nova.virt.libvirt.vif [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member',shelved_at='2025-11-29T08:26:12.813664',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5945a148-7986-4fa0-8052-c380ea11f788'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:06Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.158 232432 DEBUG nova.network.os_vif_util [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.159 232432 DEBUG nova.network.os_vif_util [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.159 232432 DEBUG os_vif [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.162 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap654e5561-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.164 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.169 232432 INFO os_vif [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24')
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.568 232432 DEBUG nova.compute.manager [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-changed-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.569 232432 DEBUG nova.compute.manager [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing instance network info cache due to event network-changed-654e5561-248d-48f1-9b25-da86880e3041. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.570 232432 DEBUG oslo_concurrency.lockutils [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.570 232432 DEBUG oslo_concurrency.lockutils [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.570 232432 DEBUG nova.network.neutron [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing network info cache for port 654e5561-248d-48f1-9b25-da86880e3041 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.745 232432 INFO nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deleting instance files /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_del
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.747 232432 INFO nova.virt.libvirt.driver [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deletion of /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_del complete
Nov 29 08:26:22 compute-2 nova_compute[232428]: 2025-11-29 08:26:22.876 232432 INFO nova.scheduler.client.report [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Deleted allocations for instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1
Nov 29 08:26:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:26:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.012 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.013 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.107 232432 DEBUG oslo_concurrency.processutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:26:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1701621705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.572 232432 DEBUG oslo_concurrency.processutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.580 232432 DEBUG nova.compute.provider_tree [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.594 232432 DEBUG nova.scheduler.client.report [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.627 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:23 compute-2 ceph-mon[77138]: pgmap v2656: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 198 op/s
Nov 29 08:26:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1701621705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:23 compute-2 nova_compute[232428]: 2025-11-29 08:26:23.701 232432 DEBUG oslo_concurrency.lockutils [None req-e082015a-aff0-4d52-8bf0-ebbcc54d2455 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:24 compute-2 ovn_controller[134375]: 2025-11-29T08:26:24Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:26:24 compute-2 ovn_controller[134375]: 2025-11-29T08:26:24Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:26:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:24.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:25 compute-2 sudo[300795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:25 compute-2 sudo[300795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:25 compute-2 sudo[300795]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:25 compute-2 sudo[300820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:26:25 compute-2 sudo[300820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:25 compute-2 sudo[300820]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:25 compute-2 sudo[300845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:25 compute-2 sudo[300845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:25 compute-2 sudo[300845]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:25 compute-2 sudo[300870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:26:25 compute-2 sudo[300870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:25 compute-2 ceph-mon[77138]: pgmap v2657: 305 pgs: 305 active+clean; 891 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.3 MiB/s wr, 182 op/s
Nov 29 08:26:25 compute-2 sudo[300870]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:26 compute-2 podman[300910]: 2025-11-29 08:26:26.032280923 +0000 UTC m=+0.096080149 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:26:26 compute-2 sudo[300940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:26 compute-2 sudo[300940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:26 compute-2 sudo[300940]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:26.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.178 232432 DEBUG nova.network.neutron [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updated VIF entry in instance network info cache for port 654e5561-248d-48f1-9b25-da86880e3041. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.178 232432 DEBUG nova.network.neutron [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap654e5561-24", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:26 compute-2 sudo[300965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:26:26 compute-2 sudo[300965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:26 compute-2 sudo[300965]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.214 232432 DEBUG oslo_concurrency.lockutils [req-1ef49209-37a5-49dc-ad09-bf258fb7c0d7 req-4a2329c1-9fa0-4f3b-9f48-ba75eed95be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:26 compute-2 sudo[300990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:26 compute-2 sudo[300990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:26 compute-2 sudo[300990]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:26 compute-2 sudo[301015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:26:26 compute-2 sudo[301015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.696 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.716 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.717 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.745 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.850 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.850 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.859 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:26:26 compute-2 nova_compute[232428]: 2025-11-29 08:26:26.860 232432 INFO nova.compute.claims [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:26:26 compute-2 sudo[301015]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:26:27 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.078 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.164 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:26:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1788033539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.544 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.550 232432 DEBUG nova.compute.provider_tree [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.576 232432 DEBUG nova.scheduler.client.report [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.598 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.599 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.675 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.675 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.693 232432 INFO nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.723 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.876 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.878 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.879 232432 INFO nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Creating image(s)
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.918 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.956 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.993 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:27 compute-2 nova_compute[232428]: 2025-11-29 08:26:27.997 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.043 232432 DEBUG nova.policy [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:26:28 compute-2 ceph-mon[77138]: pgmap v2658: 305 pgs: 305 active+clean; 868 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 29 08:26:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1788033539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4256123452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:26:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4256123452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.084 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.085 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.086 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.086 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.113 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.118 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 78a00526-9c03-4c52-93a4-2275348b883a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:28.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.453 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 78a00526-9c03-4c52-93a4-2275348b883a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.534 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] resizing rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.658 232432 DEBUG nova.objects.instance [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.704 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.704 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Ensure instance console log exists: /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.705 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.706 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:28 compute-2 nova_compute[232428]: 2025-11-29 08:26:28.706 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:29 compute-2 nova_compute[232428]: 2025-11-29 08:26:29.616 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Successfully created port: e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:26:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 08:26:30 compute-2 ceph-mon[77138]: pgmap v2659: 305 pgs: 305 active+clean; 890 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 29 08:26:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:30.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:31 compute-2 ceph-mon[77138]: osdmap e350: 3 total, 3 up, 3 in
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.547 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Successfully updated port: e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.574 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.574 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.574 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.700 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.816 232432 DEBUG nova.compute.manager [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.816 232432 DEBUG nova.compute.manager [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:26:31 compute-2 nova_compute[232428]: 2025-11-29 08:26:31.816 232432 DEBUG oslo_concurrency.lockutils [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:32.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:32 compute-2 ceph-mon[77138]: pgmap v2661: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 29 08:26:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1819305652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:26:32 compute-2 nova_compute[232428]: 2025-11-29 08:26:32.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:32 compute-2 nova_compute[232428]: 2025-11-29 08:26:32.208 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:26:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:33 compute-2 nova_compute[232428]: 2025-11-29 08:26:33.205 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:33 compute-2 nova_compute[232428]: 2025-11-29 08:26:33.206 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:33 compute-2 nova_compute[232428]: 2025-11-29 08:26:33.443 232432 DEBUG nova.objects.instance [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:33 compute-2 nova_compute[232428]: 2025-11-29 08:26:33.554 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:33 compute-2 sudo[301262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:33 compute-2 sudo[301262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:33 compute-2 sudo[301262]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:33 compute-2 nova_compute[232428]: 2025-11-29 08:26:33.945 232432 DEBUG nova.network.neutron [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:34 compute-2 sudo[301287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:26:34 compute-2 sudo[301287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:34 compute-2 sudo[301287]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:34.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.215 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.215 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance network_info: |[{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.216 232432 DEBUG oslo_concurrency.lockutils [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.216 232432 DEBUG nova.network.neutron [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:26:34 compute-2 ceph-mon[77138]: pgmap v2662: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 29 08:26:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:26:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.219 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start _get_guest_xml network_info=[{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.224 232432 WARNING nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.230 232432 DEBUG nova.virt.libvirt.host [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.231 232432 DEBUG nova.virt.libvirt.host [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.233 232432 DEBUG nova.virt.libvirt.host [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.233 232432 DEBUG nova.virt.libvirt.host [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.234 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.235 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.235 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.235 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.235 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.236 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.237 232432 DEBUG nova.virt.hardware [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.240 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.472 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.473 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.474 232432 INFO nova.compute.manager [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Attaching volume e575575a-0c17-43ef-9168-0fa9b5177df6 to /dev/vdb
Nov 29 08:26:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1727086910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.761 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.795 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.800 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.841 232432 DEBUG os_brick.utils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.845 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.862 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.862 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[21882e64-8e92-421b-9e24-19571d93df99]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.864 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.874 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.874 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6c65deaf-7254-4064-912d-90114c0e085c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.876 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.888 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.888 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[4054c166-3b1c-460f-a0d9-63fde4f5401d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.890 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[4e67f851-fa6d-4a1a-916a-20b19eaa2e39]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.891 232432 DEBUG oslo_concurrency.processutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.924 232432 DEBUG oslo_concurrency.processutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.927 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.928 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.928 232432 DEBUG os_brick.initiator.connectors.lightos [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.928 232432 DEBUG os_brick.utils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:26:34 compute-2 nova_compute[232428]: 2025-11-29 08:26:34.929 232432 DEBUG nova.virt.block_device [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating existing volume attachment record: b0077d42-4966-4605-bded-14c59e62917c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:26:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1727086910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1098016032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.271 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.272 232432 DEBUG nova.virt.libvirt.vif [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.273 232432 DEBUG nova.network.os_vif_util [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.274 232432 DEBUG nova.network.os_vif_util [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.275 232432 DEBUG nova.objects.instance [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.304 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <uuid>78a00526-9c03-4c52-93a4-2275348b883a</uuid>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <name>instance-000000a3</name>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:name>multiattach-server-1</nova:name>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:26:34</nova:creationTime>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <nova:port uuid="e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e">
Nov 29 08:26:35 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <system>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="serial">78a00526-9c03-4c52-93a4-2275348b883a</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="uuid">78a00526-9c03-4c52-93a4-2275348b883a</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </system>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <os>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </os>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <features>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </features>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/78a00526-9c03-4c52-93a4-2275348b883a_disk">
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/78a00526-9c03-4c52-93a4-2275348b883a_disk.config">
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:35 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:76:cc:96"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <target dev="tape0c088b1-9b"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/console.log" append="off"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <video>
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </video>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:26:35 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:26:35 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:26:35 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:26:35 compute-2 nova_compute[232428]: </domain>
Nov 29 08:26:35 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.306 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Preparing to wait for external event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.307 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.307 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.307 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.308 232432 DEBUG nova.virt.libvirt.vif [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.309 232432 DEBUG nova.network.os_vif_util [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.310 232432 DEBUG nova.network.os_vif_util [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.310 232432 DEBUG os_vif [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.311 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.312 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.312 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.317 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.317 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0c088b1-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.318 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0c088b1-9b, col_values=(('external_ids', {'iface-id': 'e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:cc:96', 'vm-uuid': '78a00526-9c03-4c52-93a4-2275348b883a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.319 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:35 compute-2 NetworkManager[48993]: <info>  [1764404795.3204] manager: (tape0c088b1-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.327 232432 INFO os_vif [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b')
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.492 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.493 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.493 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:76:cc:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.493 232432 INFO nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Using config drive
Nov 29 08:26:35 compute-2 nova_compute[232428]: 2025-11-29 08:26:35.522 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:36.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:36 compute-2 ceph-mon[77138]: pgmap v2663: 305 pgs: 305 active+clean; 950 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 6.0 MiB/s wr, 183 op/s
Nov 29 08:26:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1098016032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1904656645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2312093543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2735783585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.638 232432 DEBUG nova.objects.instance [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.683 232432 DEBUG nova.virt.libvirt.driver [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Attempting to attach volume e575575a-0c17-43ef-9168-0fa9b5177df6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.688 232432 DEBUG nova.virt.libvirt.guest [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:26:36 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e575575a-0c17-43ef-9168-0fa9b5177df6">
Nov 29 08:26:36 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   </source>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:26:36 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:26:36 compute-2 nova_compute[232428]:   <serial>e575575a-0c17-43ef-9168-0fa9b5177df6</serial>
Nov 29 08:26:36 compute-2 nova_compute[232428]: </disk>
Nov 29 08:26:36 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.703 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.736 232432 INFO nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Creating config drive at /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.746 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzzexn0lq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.841 232432 DEBUG nova.virt.libvirt.driver [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.842 232432 DEBUG nova.virt.libvirt.driver [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.842 232432 DEBUG nova.virt.libvirt.driver [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.842 232432 DEBUG nova.virt.libvirt.driver [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:af:a8:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.890 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzzexn0lq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.923 232432 DEBUG nova.storage.rbd_utils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 78a00526-9c03-4c52-93a4-2275348b883a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:36 compute-2 nova_compute[232428]: 2025-11-29 08:26:36.928 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config 78a00526-9c03-4c52-93a4-2275348b883a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.108 232432 DEBUG oslo_concurrency.processutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config 78a00526-9c03-4c52-93a4-2275348b883a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.109 232432 INFO nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Deleting local config drive /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/disk.config because it was imported into RBD.
Nov 29 08:26:37 compute-2 kernel: tape0c088b1-9b: entered promiscuous mode
Nov 29 08:26:37 compute-2 NetworkManager[48993]: <info>  [1764404797.1650] manager: (tape0c088b1-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:37 compute-2 ovn_controller[134375]: 2025-11-29T08:26:37Z|00731|binding|INFO|Claiming lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for this chassis.
Nov 29 08:26:37 compute-2 ovn_controller[134375]: 2025-11-29T08:26:37Z|00732|binding|INFO|e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e: Claiming fa:16:3e:76:cc:96 10.100.0.3
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.178 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:cc:96 10.100.0.3'], port_security=['fa:16:3e:76:cc:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '78a00526-9c03-4c52-93a4-2275348b883a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.180 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.182 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 08:26:37 compute-2 ovn_controller[134375]: 2025-11-29T08:26:37Z|00733|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e ovn-installed in OVS
Nov 29 08:26:37 compute-2 ovn_controller[134375]: 2025-11-29T08:26:37Z|00734|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e up in Southbound
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.194 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:37 compute-2 systemd-udevd[301475]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.203 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[31eef932-0b01-49f0-aa00-4767c2d0a158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 systemd-machined[194747]: New machine qemu-77-instance-000000a3.
Nov 29 08:26:37 compute-2 NetworkManager[48993]: <info>  [1764404797.2184] device (tape0c088b1-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:26:37 compute-2 NetworkManager[48993]: <info>  [1764404797.2194] device (tape0c088b1-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:26:37 compute-2 systemd[1]: Started Virtual Machine qemu-77-instance-000000a3.
Nov 29 08:26:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2735783585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.244 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d89393-b734-4154-8d00-ac04d98d1c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.249 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[456b4599-d4ff-4dec-a21a-1439cecd5622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.285 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff2cbcc-af3d-4f0f-bba6-b901e4de516e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.307 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12c5dff9-fee2-4df9-a8a6-e32abed5ba1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764320, 'reachable_time': 38487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301488, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.330 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b03d8a8-122a-47cc-a928-c9bc18967b39]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764333, 'tstamp': 764333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301490, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764336, 'tstamp': 764336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301490, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.332 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.335 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.336 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.336 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:37.336 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.437 232432 DEBUG oslo_concurrency.lockutils [None req-0d2f90ec-6fec-47f8-9ee4-9fefbad378ad dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.578 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404797.577778, 78a00526-9c03-4c52-93a4-2275348b883a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.578 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Started (Lifecycle Event)
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.598 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.603 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404797.5792048, 78a00526-9c03-4c52-93a4-2275348b883a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.603 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Paused (Lifecycle Event)
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.632 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.637 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:37 compute-2 nova_compute[232428]: 2025-11-29 08:26:37.669 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:26:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:38.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.188 232432 INFO nova.compute.manager [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Rescuing
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.189 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.189 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.189 232432 DEBUG nova.network.neutron [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.238 232432 DEBUG nova.network.neutron [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.239 232432 DEBUG nova.network.neutron [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:38 compute-2 ceph-mon[77138]: pgmap v2664: 305 pgs: 305 active+clean; 951 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 4.8 MiB/s wr, 139 op/s
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.267 232432 DEBUG oslo_concurrency.lockutils [req-43f998c6-20c1-487c-a8b5-6a652b7d71b5 req-f22ffb75-8455-4b0a-a080-e35c95aba7a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 08:26:38 compute-2 podman[301534]: 2025-11-29 08:26:38.663605317 +0000 UTC m=+0.063777158 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 08:26:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:38.738 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:38.739 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:26:38 compute-2 nova_compute[232428]: 2025-11-29 08:26:38.739 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:38.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:39 compute-2 sudo[301553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:39 compute-2 sudo[301553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:39 compute-2 sudo[301553]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:39 compute-2 sudo[301578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:39 compute-2 sudo[301578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:39 compute-2 sudo[301578]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:39 compute-2 ceph-mon[77138]: osdmap e351: 3 total, 3 up, 3 in
Nov 29 08:26:39 compute-2 ceph-mon[77138]: pgmap v2666: 305 pgs: 305 active+clean; 951 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 213 op/s
Nov 29 08:26:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:39.741 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:40 compute-2 nova_compute[232428]: 2025-11-29 08:26:40.321 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1795562614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:41 compute-2 ceph-mon[77138]: pgmap v2667: 305 pgs: 305 active+clean; 984 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Nov 29 08:26:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3501057398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:41 compute-2 nova_compute[232428]: 2025-11-29 08:26:41.705 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.198 232432 DEBUG nova.compute.manager [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.199 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.199 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.200 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.200 232432 DEBUG nova.compute.manager [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Processing event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.200 232432 DEBUG nova.compute.manager [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.201 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.201 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.201 232432 DEBUG oslo_concurrency.lockutils [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.202 232432 DEBUG nova.compute.manager [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.202 232432 WARNING nova.compute.manager [req-c6c0c620-279e-44c4-a4c2-6d59b66d4650 req-e04512ab-6fc7-4581-ab86-b33dfb0362eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state building and task_state spawning.
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.203 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.207 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404802.2069695, 78a00526-9c03-4c52-93a4-2275348b883a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.207 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Resumed (Lifecycle Event)
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.209 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.213 232432 INFO nova.virt.libvirt.driver [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance spawned successfully.
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.214 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.232 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.240 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.244 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.244 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.245 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.246 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.246 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.247 232432 DEBUG nova.virt.libvirt.driver [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.297 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.336 232432 INFO nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Took 14.46 seconds to spawn the instance on the hypervisor.
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.337 232432 DEBUG nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.430 232432 INFO nova.compute.manager [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Took 15.62 seconds to build instance.
Nov 29 08:26:42 compute-2 nova_compute[232428]: 2025-11-29 08:26:42.448 232432 DEBUG oslo_concurrency.lockutils [None req-b78fd934-f705-4d97-8b6a-6e4f7a9fe8bb b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:43 compute-2 ceph-mon[77138]: pgmap v2668: 305 pgs: 305 active+clean; 984 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Nov 29 08:26:43 compute-2 podman[301605]: 2025-11-29 08:26:43.718999707 +0000 UTC m=+0.116158837 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:26:44 compute-2 nova_compute[232428]: 2025-11-29 08:26:44.102 232432 DEBUG nova.network.neutron [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:44 compute-2 nova_compute[232428]: 2025-11-29 08:26:44.137 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:44.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:44 compute-2 nova_compute[232428]: 2025-11-29 08:26:44.665 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:26:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:45 compute-2 nova_compute[232428]: 2025-11-29 08:26:45.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:45 compute-2 ceph-mon[77138]: pgmap v2669: 305 pgs: 305 active+clean; 1.0 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 4.7 MiB/s wr, 231 op/s
Nov 29 08:26:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:46.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:46 compute-2 nova_compute[232428]: 2025-11-29 08:26:46.707 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 08:26:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:47 compute-2 kernel: tapdc933ba7-ff (unregistering): left promiscuous mode
Nov 29 08:26:47 compute-2 NetworkManager[48993]: <info>  [1764404807.0444] device (tapdc933ba7-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:47 compute-2 ovn_controller[134375]: 2025-11-29T08:26:47Z|00735|binding|INFO|Releasing lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 from this chassis (sb_readonly=0)
Nov 29 08:26:47 compute-2 ovn_controller[134375]: 2025-11-29T08:26:47Z|00736|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 down in Southbound
Nov 29 08:26:47 compute-2 ovn_controller[134375]: 2025-11-29T08:26:47Z|00737|binding|INFO|Removing iface tapdc933ba7-ff ovn-installed in OVS
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.066 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.068 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.070 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.074 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b173bec3-30b4-4cb2-ae82-04d981f2583c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.074 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.087 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:47 compute-2 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Nov 29 08:26:47 compute-2 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Consumed 16.192s CPU time.
Nov 29 08:26:47 compute-2 systemd-machined[194747]: Machine qemu-76-instance-000000a1 terminated.
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [NOTICE]   (300614) : haproxy version is 2.8.14-c23fe91
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [NOTICE]   (300614) : path to executable is /usr/sbin/haproxy
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [WARNING]  (300614) : Exiting Master process...
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [WARNING]  (300614) : Exiting Master process...
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [ALERT]    (300614) : Current worker (300616) exited with code 143 (Terminated)
Nov 29 08:26:47 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[300602]: [WARNING]  (300614) : All workers exited. Exiting... (0)
Nov 29 08:26:47 compute-2 systemd[1]: libpod-65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d.scope: Deactivated successfully.
Nov 29 08:26:47 compute-2 podman[301652]: 2025-11-29 08:26:47.243591798 +0000 UTC m=+0.050667297 container died 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.285 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.298 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-fa929d7f34b43564eb8391097aaec4fd0fa7cf87e56932dc38c109966ea07f09-merged.mount: Deactivated successfully.
Nov 29 08:26:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:26:47 compute-2 podman[301652]: 2025-11-29 08:26:47.644999234 +0000 UTC m=+0.452074733 container cleanup 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.652 232432 DEBUG nova.compute.manager [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.653 232432 DEBUG oslo_concurrency.lockutils [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.654 232432 DEBUG oslo_concurrency.lockutils [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.655 232432 DEBUG oslo_concurrency.lockutils [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.656 232432 DEBUG nova.compute.manager [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.656 232432 WARNING nova.compute.manager [req-e8d43391-bd10-424a-b25c-da094fdf59b3 req-ab7ccac0-bea3-4cdb-8dde-516b35a702e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state rescuing.
Nov 29 08:26:47 compute-2 systemd[1]: libpod-conmon-65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d.scope: Deactivated successfully.
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.689 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance shutdown successfully after 3 seconds.
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.696 232432 INFO nova.virt.libvirt.driver [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance destroyed successfully.
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.697 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'numa_topology' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:47 compute-2 ceph-mon[77138]: pgmap v2670: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 256 op/s
Nov 29 08:26:47 compute-2 ceph-mon[77138]: osdmap e352: 3 total, 3 up, 3 in
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.751 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Attempting rescue
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.754 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.759 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.760 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Creating image(s)
Nov 29 08:26:47 compute-2 podman[301690]: 2025-11-29 08:26:47.768939784 +0000 UTC m=+0.081331597 container remove 65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.780 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[365da016-ccb1-4a3f-8867-2659a3312e23]: (4, ('Sat Nov 29 08:26:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d)\n65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d\nSat Nov 29 08:26:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d)\n65f236504af08cbc42b6f72f1e3cead1ce8c6585243e19199a97f099426c401d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.782 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba51025-9c2b-4c55-8550-dbc772a66b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.784 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:47 compute-2 kernel: tap7008b597-80: left promiscuous mode
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[691934e5-7137-43fa-a688-954157b75bbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.813 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.825 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[68cb1ffe-c001-442f-9efb-1b2bd48fbab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.827 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc11d4d-670b-4919-b585-3b14fcd00271]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.835 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.845 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[48b0cc87-1ba9-40bc-8a9f-cfce4d1445dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773480, 'reachable_time': 35342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301727, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.848 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:26:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:47.848 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdbdc39-e270-445f-9c9c-0af47ecf4f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.923 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.978 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:47 compute-2 nova_compute[232428]: 2025-11-29 08:26:47.985 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.091 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.094 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.095 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.096 232432 DEBUG oslo_concurrency.lockutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.151 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.158 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.542446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808542673, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1057, "num_deletes": 252, "total_data_size": 1982133, "memory_usage": 2013232, "flush_reason": "Manual Compaction"}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808553501, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 907975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56442, "largest_seqno": 57494, "table_properties": {"data_size": 903802, "index_size": 1761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11203, "raw_average_key_size": 21, "raw_value_size": 894787, "raw_average_value_size": 1714, "num_data_blocks": 76, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404743, "oldest_key_time": 1764404743, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 11114 microseconds, and 3231 cpu microseconds.
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.553558) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 907975 bytes OK
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.553580) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.570798) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.570812) EVENT_LOG_v1 {"time_micros": 1764404808570808, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.570829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1976894, prev total WAL file size 1976894, number of live WAL files 2.
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.571579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373634' seq:72057594037927935, type:22 .. '6D6772737461740032303135' seq:0, type:0; will stop at (end)
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(886KB)], [108(12MB)]
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808571617, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14222373, "oldest_snapshot_seqno": -1}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8638 keys, 10767984 bytes, temperature: kUnknown
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808662400, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10767984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10712677, "index_size": 32603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225274, "raw_average_key_size": 26, "raw_value_size": 10561196, "raw_average_value_size": 1222, "num_data_blocks": 1266, "num_entries": 8638, "num_filter_entries": 8638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.662731) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10767984 bytes
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.666821) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.5 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(27.5) write-amplify(11.9) OK, records in: 9135, records dropped: 497 output_compression: NoCompression
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.666846) EVENT_LOG_v1 {"time_micros": 1764404808666836, "job": 68, "event": "compaction_finished", "compaction_time_micros": 90891, "compaction_time_cpu_micros": 32333, "output_level": 6, "num_output_files": 1, "total_output_size": 10767984, "num_input_records": 9135, "num_output_records": 8638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808667159, "job": 68, "event": "table_file_deletion", "file_number": 110}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808669287, "job": 68, "event": "table_file_deletion", "file_number": 108}
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.571512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.669446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.669450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.669452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.669453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:26:48.669454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.755 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.756 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.774 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.775 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start _get_guest_xml network_info=[{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:af:a8:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.775 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.794 232432 WARNING nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.806 232432 DEBUG nova.virt.libvirt.host [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.806 232432 DEBUG nova.virt.libvirt.host [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.809 232432 DEBUG nova.virt.libvirt.host [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.809 232432 DEBUG nova.virt.libvirt.host [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.810 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.810 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.811 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.811 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.811 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.811 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.812 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.812 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.812 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.812 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.813 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.813 232432 DEBUG nova.virt.hardware [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.813 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:48 compute-2 nova_compute[232428]: 2025-11-29 08:26:48.841 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:49.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2848427558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.319 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.321 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:49 compute-2 ceph-mon[77138]: pgmap v2672: 305 pgs: 305 active+clean; 967 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.7 MiB/s wr, 284 op/s
Nov 29 08:26:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2848427558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3926407031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.749 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.750 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.786 232432 DEBUG nova.compute.manager [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.787 232432 DEBUG oslo_concurrency.lockutils [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.788 232432 DEBUG oslo_concurrency.lockutils [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.788 232432 DEBUG oslo_concurrency.lockutils [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.788 232432 DEBUG nova.compute.manager [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:49 compute-2 nova_compute[232428]: 2025-11-29 08:26:49.789 232432 WARNING nova.compute.manager [req-9044c0ba-0044-46c9-9fd4-fba1cfa369e2 req-0c7d1ee0-636d-43f6-aa7b-d98f12af993f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state rescuing.
Nov 29 08:26:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:26:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4269961425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.241 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.242 232432 DEBUG nova.virt.libvirt.vif [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1775463929',display_name='tempest-ServerRescueNegativeTestJSON-server-1775463929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1775463929',id=161,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmo+eet4YBY7wXcrzDQzITBcUSszsOuTXPJOsSPetwgqxs8tnSNHiHLo4P9tBVRJry94mJeN6BGQc8NI6+0zP4qONnsq3uMb4XX3eYuPLEZknBDW+VjJB6uAaoViaI9RQ==',key_name='tempest-keypair-928415713',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-bgjax5g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c1118af2-2266-48e4-a246-9549c68ddaa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:af:a8:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.243 232432 DEBUG nova.network.os_vif_util [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:af:a8:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.244 232432 DEBUG nova.network.os_vif_util [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.245 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.294 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <uuid>c1118af2-2266-48e4-a246-9549c68ddaa4</uuid>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <name>instance-000000a1</name>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1775463929</nova:name>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:26:48</nova:creationTime>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <nova:port uuid="dc933ba7-ffdf-4e89-9aae-ae19d42f4315">
Nov 29 08:26:50 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <system>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="serial">c1118af2-2266-48e4-a246-9549c68ddaa4</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="uuid">c1118af2-2266-48e4-a246-9549c68ddaa4</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </system>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <os>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </os>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <features>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </features>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/c1118af2-2266-48e4-a246-9549c68ddaa4_disk.rescue">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/c1118af2-2266-48e4-a246-9549c68ddaa4_disk">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config.rescue">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </source>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:26:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:af:a8:10"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <target dev="tapdc933ba7-ff"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/console.log" append="off"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <video>
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </video>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:26:50 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:26:50 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:26:50 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:26:50 compute-2 nova_compute[232428]: </domain>
Nov 29 08:26:50 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.302 232432 INFO nova.virt.libvirt.driver [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance destroyed successfully.
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.578 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.579 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.579 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.580 232432 DEBUG nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:af:a8:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.580 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Using config drive
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.610 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.667 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3926407031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4269961425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:50 compute-2 nova_compute[232428]: 2025-11-29 08:26:50.825 232432 DEBUG nova.objects.instance [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'keypairs' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:26:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.328 232432 DEBUG nova.compute.manager [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.329 232432 DEBUG nova.compute.manager [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.330 232432 DEBUG oslo_concurrency.lockutils [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.330 232432 DEBUG oslo_concurrency.lockutils [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.331 232432 DEBUG nova.network.neutron [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:26:51 compute-2 nova_compute[232428]: 2025-11-29 08:26:51.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:52 compute-2 ceph-mon[77138]: pgmap v2673: 305 pgs: 305 active+clean; 971 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 08:26:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.234 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.352 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Creating config drive at /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.362 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwmy_gpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.521 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwmy_gpl" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.559 232432 DEBUG nova.storage.rbd_utils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.564 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.753 232432 DEBUG oslo_concurrency.processutils [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue c1118af2-2266-48e4-a246-9549c68ddaa4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.754 232432 INFO nova.virt.libvirt.driver [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Deleting local config drive /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4/disk.config.rescue because it was imported into RBD.
Nov 29 08:26:52 compute-2 kernel: tapdc933ba7-ff: entered promiscuous mode
Nov 29 08:26:52 compute-2 NetworkManager[48993]: <info>  [1764404812.8261] manager: (tapdc933ba7-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 08:26:52 compute-2 ovn_controller[134375]: 2025-11-29T08:26:52Z|00738|binding|INFO|Claiming lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for this chassis.
Nov 29 08:26:52 compute-2 ovn_controller[134375]: 2025-11-29T08:26:52Z|00739|binding|INFO|dc933ba7-ffdf-4e89-9aae-ae19d42f4315: Claiming fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.826 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.836 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '5', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.837 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.840 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:52 compute-2 ovn_controller[134375]: 2025-11-29T08:26:52Z|00740|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 ovn-installed in OVS
Nov 29 08:26:52 compute-2 ovn_controller[134375]: 2025-11-29T08:26:52Z|00741|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 up in Southbound
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:52 compute-2 nova_compute[232428]: 2025-11-29 08:26:52.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.857 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[be004af3-04f6-4511-b50a-262a32c95ff9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.858 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.860 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.860 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cd89ed-0cec-44d9-aa98-5972735dffc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.861 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee165f25-f212-473e-ad93-bc61f7f70241]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 systemd-udevd[301942]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:26:52 compute-2 NetworkManager[48993]: <info>  [1764404812.8897] device (tapdc933ba7-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:26:52 compute-2 NetworkManager[48993]: <info>  [1764404812.8913] device (tapdc933ba7-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.892 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[0321b1c2-4d0d-4723-99ba-23b54c902788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 systemd-machined[194747]: New machine qemu-78-instance-000000a1.
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.918 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d8549a01-d458-4e62-bea0-2c40f599a459]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 systemd[1]: Started Virtual Machine qemu-78-instance-000000a1.
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.964 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[04649e21-98ac-4a3d-ab3d-83e01b77d742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:52.970 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c28bab4-d009-4048-bd0c-8d9fcf0d77ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:52 compute-2 NetworkManager[48993]: <info>  [1764404812.9736] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 08:26:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.011 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[60bc1912-40d7-4056-a28c-7dd5a79775fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.013 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbcda4a-190b-442a-beee-0910d37bc260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 NetworkManager[48993]: <info>  [1764404813.0461] device (tap7008b597-80): carrier: link connected
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.055 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2112c1a1-049b-47fc-8317-354cc57edb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.077 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4222bdc0-6da2-4560-809e-15d7a996a91a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777931, 'reachable_time': 15460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301977, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.098 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3412d51a-a91a-4f0f-b40c-f3bb2be6feaf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777931, 'tstamp': 777931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301978, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.117 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5859fa-5732-43e2-9d63-c65a8b92e0c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777931, 'reachable_time': 15460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301979, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.153 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ee0d58-01d2-43be-a890-22dbc122a180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.226 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f026b35-c557-4dcb-a175-e3ee5b6a4223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.227 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.227 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.228 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:53 compute-2 NetworkManager[48993]: <info>  [1764404813.2306] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 08:26:53 compute-2 nova_compute[232428]: 2025-11-29 08:26:53.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:53 compute-2 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.237 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:26:53 compute-2 nova_compute[232428]: 2025-11-29 08:26:53.238 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:53 compute-2 ovn_controller[134375]: 2025-11-29T08:26:53Z|00742|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 08:26:53 compute-2 nova_compute[232428]: 2025-11-29 08:26:53.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.275 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.277 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe44994-d691-4e65-ab3d-d38939c79e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.279 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:26:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:26:53.281 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:26:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 08:26:53 compute-2 podman[302011]: 2025-11-29 08:26:53.688102293 +0000 UTC m=+0.032335043 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:26:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:54 compute-2 ceph-mon[77138]: pgmap v2674: 305 pgs: 305 active+clean; 971 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 08:26:54 compute-2 ceph-mon[77138]: osdmap e353: 3 total, 3 up, 3 in
Nov 29 08:26:54 compute-2 podman[302011]: 2025-11-29 08:26:54.447562677 +0000 UTC m=+0.791795417 container create 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:26:54 compute-2 systemd[1]: Started libpod-conmon-112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2.scope.
Nov 29 08:26:54 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:26:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff3e5d9977c8215b8c63cb5f5bddfb349087930cde72580e898539237d85226/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:26:54 compute-2 podman[302011]: 2025-11-29 08:26:54.884717952 +0000 UTC m=+1.228950802 container init 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:26:54 compute-2 podman[302011]: 2025-11-29 08:26:54.891730251 +0000 UTC m=+1.235963031 container start 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:26:54 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [NOTICE]   (302087) : New worker (302093) forked
Nov 29 08:26:54 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [NOTICE]   (302087) : Loading success.
Nov 29 08:26:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.033 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for c1118af2-2266-48e4-a246-9549c68ddaa4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.033 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404815.032167, c1118af2-2266-48e4-a246-9549c68ddaa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.034 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Resumed (Lifecycle Event)
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.043 232432 DEBUG nova.compute.manager [None req-2cc8529b-6954-4d27-8e5c-0bb1a70a5d80 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.332 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.550 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.557 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.920 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404815.0374954, c1118af2-2266-48e4-a246-9549c68ddaa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:26:55 compute-2 nova_compute[232428]: 2025-11-29 08:26:55.921 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Started (Lifecycle Event)
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.146 232432 DEBUG nova.compute.manager [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.147 232432 DEBUG oslo_concurrency.lockutils [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.147 232432 DEBUG oslo_concurrency.lockutils [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.148 232432 DEBUG oslo_concurrency.lockutils [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.148 232432 DEBUG nova.compute.manager [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.148 232432 WARNING nova.compute.manager [req-d353d0a8-fa13-4766-89d9-b194f064ab1f req-52ac06f7-3564-4679-8218-75006b8a948b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state rescued and task_state None.
Nov 29 08:26:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.209 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.215 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:26:56 compute-2 ceph-mon[77138]: pgmap v2676: 305 pgs: 305 active+clean; 998 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 267 op/s
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:26:56 compute-2 podman[302104]: 2025-11-29 08:26:56.715926965 +0000 UTC m=+0.114086302 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.720 232432 DEBUG nova.network.neutron [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.720 232432 DEBUG nova.network.neutron [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:26:56 compute-2 nova_compute[232428]: 2025-11-29 08:26:56.755 232432 DEBUG oslo_concurrency.lockutils [req-26af9aec-e6b7-4e69-9ed2-aee0a425ea91 req-f43885b7-cb3a-42d5-aab2-643ac7ff33d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:26:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:26:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:26:57 compute-2 ovn_controller[134375]: 2025-11-29T08:26:57Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:cc:96 10.100.0.3
Nov 29 08:26:57 compute-2 ovn_controller[134375]: 2025-11-29T08:26:57Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:cc:96 10.100.0.3
Nov 29 08:26:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:58 compute-2 ceph-mon[77138]: pgmap v2677: 305 pgs: 305 active+clean; 998 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 232 op/s
Nov 29 08:26:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.468 232432 DEBUG nova.compute.manager [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.468 232432 DEBUG oslo_concurrency.lockutils [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.468 232432 DEBUG oslo_concurrency.lockutils [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.468 232432 DEBUG oslo_concurrency.lockutils [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.469 232432 DEBUG nova.compute.manager [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:26:58 compute-2 nova_compute[232428]: 2025-11-29 08:26:58.469 232432 WARNING nova.compute.manager [req-00b2380a-598f-4e12-9580-5ac3ed645391 req-6da1c798-1231-4162-9ecb-fcf112f2d082 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state rescued and task_state None.
Nov 29 08:26:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:26:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:26:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:26:59 compute-2 nova_compute[232428]: 2025-11-29 08:26:59.145 232432 INFO nova.compute.manager [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Unrescuing
Nov 29 08:26:59 compute-2 nova_compute[232428]: 2025-11-29 08:26:59.146 232432 DEBUG oslo_concurrency.lockutils [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:26:59 compute-2 nova_compute[232428]: 2025-11-29 08:26:59.146 232432 DEBUG oslo_concurrency.lockutils [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:26:59 compute-2 nova_compute[232428]: 2025-11-29 08:26:59.146 232432 DEBUG nova.network.neutron [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:26:59 compute-2 sudo[302131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:59 compute-2 sudo[302131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3349961666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:26:59 compute-2 sudo[302131]: pam_unix(sudo:session): session closed for user root
Nov 29 08:26:59 compute-2 sudo[302157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:26:59 compute-2 sudo[302157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:26:59 compute-2 sudo[302157]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:00 compute-2 nova_compute[232428]: 2025-11-29 08:27:00.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:00 compute-2 nova_compute[232428]: 2025-11-29 08:27:00.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:00 compute-2 nova_compute[232428]: 2025-11-29 08:27:00.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:00 compute-2 nova_compute[232428]: 2025-11-29 08:27:00.987 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:00 compute-2 nova_compute[232428]: 2025-11-29 08:27:00.988 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.052 232432 DEBUG nova.objects.instance [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.120 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:27:01 compute-2 ceph-mon[77138]: pgmap v2678: 305 pgs: 305 active+clean; 1020 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.716 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.953 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.955 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:01 compute-2 nova_compute[232428]: 2025-11-29 08:27:01.955 232432 INFO nova.compute.manager [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Attaching volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 to /dev/vdb
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.149 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.150 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.151 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.151 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d2af1c0-e1ed-48f9-beda-42cc37212de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:02.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.283 232432 DEBUG os_brick.utils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.286 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.306 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.307 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[b193150b-629a-4e4c-afa2-a483628e1068]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.309 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.323 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.324 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[df735d92-3265-4807-8f55-a95d3e5a74bf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.326 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.342 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.342 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0324e7-6aa2-406c-a507-3a3e6e907f8b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.345 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[12f2dbab-cc9c-440b-954a-d0b38f7a9559]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.346 232432 DEBUG oslo_concurrency.processutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:02 compute-2 ceph-mon[77138]: pgmap v2679: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 253 op/s
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.407 232432 DEBUG oslo_concurrency.processutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.413 232432 DEBUG os_brick.initiator.connectors.lightos [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.414 232432 DEBUG os_brick.initiator.connectors.lightos [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.415 232432 DEBUG os_brick.initiator.connectors.lightos [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.416 232432 DEBUG os_brick.utils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (131ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:27:02 compute-2 nova_compute[232428]: 2025-11-29 08:27:02.417 232432 DEBUG nova.virt.block_device [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating existing volume attachment record: 0f03bc9a-d9ad-4c1d-889e-f5d227125924 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:27:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.077 232432 DEBUG nova.network.neutron [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.128 232432 DEBUG oslo_concurrency.lockutils [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.130 232432 DEBUG nova.objects.instance [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:03 compute-2 kernel: tapdc933ba7-ff (unregistering): left promiscuous mode
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.3000] device (tapdc933ba7-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00743|binding|INFO|Releasing lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 from this chassis (sb_readonly=0)
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00744|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 down in Southbound
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00745|binding|INFO|Removing iface tapdc933ba7-ff ovn-installed in OVS
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.330 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.331 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.333 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.335 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '6', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.338 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.340 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.342 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4136423d-0e50-4639-bcad-62c66eeb0cb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.343 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore
Nov 29 08:27:03 compute-2 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Nov 29 08:27:03 compute-2 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a1.scope: Consumed 10.091s CPU time.
Nov 29 08:27:03 compute-2 systemd-machined[194747]: Machine qemu-78-instance-000000a1 terminated.
Nov 29 08:27:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:03 compute-2 ceph-mon[77138]: pgmap v2680: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 253 op/s
Nov 29 08:27:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1293910375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.477 232432 INFO nova.virt.libvirt.driver [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance destroyed successfully.
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.477 232432 DEBUG nova.objects.instance [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'numa_topology' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:03 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [NOTICE]   (302087) : haproxy version is 2.8.14-c23fe91
Nov 29 08:27:03 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [NOTICE]   (302087) : path to executable is /usr/sbin/haproxy
Nov 29 08:27:03 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [WARNING]  (302087) : Exiting Master process...
Nov 29 08:27:03 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [ALERT]    (302087) : Current worker (302093) exited with code 143 (Terminated)
Nov 29 08:27:03 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302071]: [WARNING]  (302087) : All workers exited. Exiting... (0)
Nov 29 08:27:03 compute-2 systemd[1]: libpod-112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2.scope: Deactivated successfully.
Nov 29 08:27:03 compute-2 podman[302214]: 2025-11-29 08:27:03.518952912 +0000 UTC m=+0.060512115 container died 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:27:03 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2-userdata-shm.mount: Deactivated successfully.
Nov 29 08:27:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-5ff3e5d9977c8215b8c63cb5f5bddfb349087930cde72580e898539237d85226-merged.mount: Deactivated successfully.
Nov 29 08:27:03 compute-2 podman[302214]: 2025-11-29 08:27:03.571847578 +0000 UTC m=+0.113406801 container cleanup 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:27:03 compute-2 systemd[1]: libpod-conmon-112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2.scope: Deactivated successfully.
Nov 29 08:27:03 compute-2 kernel: tapdc933ba7-ff: entered promiscuous mode
Nov 29 08:27:03 compute-2 systemd-udevd[302193]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.5902] manager: (tapdc933ba7-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.590 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00746|binding|INFO|Claiming lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for this chassis.
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00747|binding|INFO|dc933ba7-ffdf-4e89-9aae-ae19d42f4315: Claiming fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.6017] device (tapdc933ba7-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.6029] device (tapdc933ba7-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00748|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 ovn-installed in OVS
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.616 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.619 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '6', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:27:03 compute-2 ovn_controller[134375]: 2025-11-29T08:27:03Z|00749|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 up in Southbound
Nov 29 08:27:03 compute-2 systemd-machined[194747]: New machine qemu-79-instance-000000a1.
Nov 29 08:27:03 compute-2 systemd[1]: Started Virtual Machine qemu-79-instance-000000a1.
Nov 29 08:27:03 compute-2 podman[302262]: 2025-11-29 08:27:03.64504914 +0000 UTC m=+0.045793345 container remove 112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.652 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[afe4fa3e-cbb5-4bfe-9c44-563f02da655a]: (4, ('Sat Nov 29 08:27:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2)\n112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2\nSat Nov 29 08:27:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2)\n112be334f8462be2794ce1284d94d25d7492d8c3d4c14c81c254ff347e8d8af2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.653 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80de5655-c8f6-4896-b21b-f709f803fa35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.654 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.656 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.670 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:03 compute-2 kernel: tap7008b597-80: left promiscuous mode
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.674 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c24393cb-4ae7-4e29-ba51-6f87722baea5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.690 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[844fe391-eb7c-4bd5-8995-2c0bcbca0d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.691 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae1fb62-a445-448d-872a-cc9244cc8de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.707 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[627719bc-da1f-4f78-b177-19bf6b35d093]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777922, 'reachable_time': 41633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302290, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.711 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:27:03 compute-2 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.711 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f5bde176-55ee-4ded-8a71-2f2f9b045b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.713 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.715 232432 DEBUG nova.compute.manager [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.715 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.715 232432 DEBUG oslo_concurrency.lockutils [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.715 232432 DEBUG oslo_concurrency.lockutils [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.716 232432 DEBUG oslo_concurrency.lockutils [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.716 232432 DEBUG nova.compute.manager [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.716 232432 WARNING nova.compute.manager [req-588456b2-3795-4e82-97fd-9bc443c45129 req-66549ba4-8aa7-4c18-b7d5-585cb02c54ef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state rescued and task_state unrescuing.
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.728 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2e2578-3b51-4cd3-b3ea-5fb0aff82988]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.729 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.730 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.730 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2c96fb-3a6b-4267-b938-d28b9748fe3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.731 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[02681487-024b-4a23-afae-893453570a27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.744 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[341cbf11-7306-406c-9d54-034f1d39f351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.757 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e80e3d44-d345-4978-81e4-b359df6e992b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.791 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cebc2e13-35a6-4f20-a287-156c0311097a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.796 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f542aaf8-df14-428e-8020-b218d09f773f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.7987] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.832 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb5c06-be8b-4ead-b4e5-390a0145b495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.835 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[49c906f1-6cba-42e6-894f-db3fcc6f46f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.852 232432 DEBUG nova.objects.instance [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:03 compute-2 NetworkManager[48993]: <info>  [1764404823.8663] device (tap7008b597-80): carrier: link connected
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.872 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd261cb-4714-4ef5-814b-35a6ece8f994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.876 232432 DEBUG nova.virt.libvirt.driver [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Attempting to attach volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:27:03 compute-2 nova_compute[232428]: 2025-11-29 08:27:03.879 232432 DEBUG nova.virt.libvirt.guest [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 08:27:03 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   </source>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:27:03 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 08:27:03 compute-2 nova_compute[232428]:   <shareable/>
Nov 29 08:27:03 compute-2 nova_compute[232428]: </disk>
Nov 29 08:27:03 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a74f30ce-d1f6-4543-898e-7284ea2f86fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779013, 'reachable_time': 30308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302316, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.929 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[71bc41a0-1ede-4ddd-ad31-ebdfd796e026]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779013, 'tstamp': 779013}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302319, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:03.958 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1106ccb5-ead1-4cf6-bd15-ce6bddd5ef65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779013, 'reachable_time': 30308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302335, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.006 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[77f2c31e-7710-4c38-b365-e63cf67d0e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.037 232432 DEBUG nova.virt.libvirt.driver [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.038 232432 DEBUG nova.virt.libvirt.driver [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.039 232432 DEBUG nova.virt.libvirt.driver [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.040 232432 DEBUG nova.virt.libvirt.driver [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:76:cc:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.112 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b1840bbb-55b6-4767-ba95-a146226f3698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.114 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.114 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.115 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:04 compute-2 NetworkManager[48993]: <info>  [1764404824.1174] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:04 compute-2 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.121 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:04 compute-2 ovn_controller[134375]: 2025-11-29T08:27:04Z|00750|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.147 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.149 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d45e50c7-7693-4fe7-8a35-911acfe60a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.150 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:27:04 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:04.150 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:27:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:04.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.334 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for c1118af2-2266-48e4-a246-9549c68ddaa4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.336 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404824.3336666, c1118af2-2266-48e4-a246-9549c68ddaa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.336 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Resumed (Lifecycle Event)
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.416 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.422 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:27:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2318822720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1686904777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.480 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.480 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404824.3360507, c1118af2-2266-48e4-a246-9549c68ddaa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.481 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Started (Lifecycle Event)
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.515 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.520 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.545 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 08:27:04 compute-2 podman[302447]: 2025-11-29 08:27:04.567730612 +0000 UTC m=+0.072135909 container create acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:27:04 compute-2 podman[302447]: 2025-11-29 08:27:04.52868667 +0000 UTC m=+0.033091967 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:27:04 compute-2 systemd[1]: Started libpod-conmon-acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356.scope.
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.651 232432 DEBUG oslo_concurrency.lockutils [None req-edf0d084-0d54-4d9f-afdf-f2d1db1781e8 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:04 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:27:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ad83f56e4b568e877ddc060067be12539bc0cc7fa093072e7464a1663c85b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:27:04 compute-2 podman[302447]: 2025-11-29 08:27:04.702616325 +0000 UTC m=+0.207021662 container init acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:27:04 compute-2 podman[302447]: 2025-11-29 08:27:04.715415365 +0000 UTC m=+0.219820642 container start acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:27:04 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [NOTICE]   (302467) : New worker (302469) forked
Nov 29 08:27:04 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [NOTICE]   (302467) : Loading success.
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.809 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.831 232432 DEBUG nova.compute.manager [None req-24dd5b25-89a1-4a7b-b212-903abfcc7a6a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.832 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.833 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:27:04 compute-2 nova_compute[232428]: 2025-11-29 08:27:04.833 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.228 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.338 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:05 compute-2 ceph-mon[77138]: pgmap v2681: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 206 op/s
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.946 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.946 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.947 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.947 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.947 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.948 232432 WARNING nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state None.
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.948 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.948 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.949 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.949 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.949 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.949 232432 WARNING nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state None.
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.950 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.950 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.950 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.951 232432 DEBUG oslo_concurrency.lockutils [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.951 232432 DEBUG nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:05 compute-2 nova_compute[232428]: 2025-11-29 08:27:05.951 232432 WARNING nova.compute.manager [req-5b79e740-2c00-4686-89bf-abadb593c9a0 req-57403324-2ca4-48f2-a7d6-a49cfb0f57bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state None.
Nov 29 08:27:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:06 compute-2 nova_compute[232428]: 2025-11-29 08:27:06.719 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:07 compute-2 ceph-mon[77138]: pgmap v2682: 305 pgs: 305 active+clean; 1013 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Nov 29 08:27:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 08:27:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:09 compute-2 podman[302480]: 2025-11-29 08:27:09.674813611 +0000 UTC m=+0.076950250 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:27:09 compute-2 ceph-mon[77138]: pgmap v2683: 305 pgs: 305 active+clean; 986 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 255 op/s
Nov 29 08:27:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:10 compute-2 nova_compute[232428]: 2025-11-29 08:27:10.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:10 compute-2 nova_compute[232428]: 2025-11-29 08:27:10.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:27:10 compute-2 nova_compute[232428]: 2025-11-29 08:27:10.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:11 compute-2 nova_compute[232428]: 2025-11-29 08:27:11.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:11 compute-2 ceph-mon[77138]: pgmap v2684: 305 pgs: 305 active+clean; 986 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1020 KiB/s wr, 218 op/s
Nov 29 08:27:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3326238135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:11 compute-2 nova_compute[232428]: 2025-11-29 08:27:11.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1472573862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1163314426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:12 compute-2 nova_compute[232428]: 2025-11-29 08:27:12.886 232432 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:12 compute-2 nova_compute[232428]: 2025-11-29 08:27:12.886 232432 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing instance network info cache due to event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:27:12 compute-2 nova_compute[232428]: 2025-11-29 08:27:12.887 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:12 compute-2 nova_compute[232428]: 2025-11-29 08:27:12.887 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:12 compute-2 nova_compute[232428]: 2025-11-29 08:27:12.887 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:27:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.279 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.279 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.280 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.280 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.281 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:27:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/319242056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.721 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:13 compute-2 ceph-mon[77138]: pgmap v2685: 305 pgs: 305 active+clean; 986 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 40 KiB/s wr, 121 op/s
Nov 29 08:27:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/319242056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:13 compute-2 podman[302525]: 2025-11-29 08:27:13.82221406 +0000 UTC m=+0.056344635 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.865 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.865 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.869 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.870 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.870 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.873 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.874 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:13 compute-2 nova_compute[232428]: 2025-11-29 08:27:13.874 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.052 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.053 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3579MB free_disk=20.556934356689453GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.054 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.054 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.164 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.164 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance c1118af2-2266-48e4-a246-9549c68ddaa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.165 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 78a00526-9c03-4c52-93a4-2275348b883a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.165 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.166 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:27:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.431 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:27:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2191410932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.920 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.927 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.958 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.993 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:27:14 compute-2 nova_compute[232428]: 2025-11-29 08:27:14.993 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:15 compute-2 nova_compute[232428]: 2025-11-29 08:27:15.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:15 compute-2 ceph-mon[77138]: pgmap v2686: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 50 KiB/s wr, 127 op/s
Nov 29 08:27:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2191410932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.016 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updated VIF entry in instance network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.016 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.051 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.052 232432 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.052 232432 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing instance network info cache due to event network-changed-dc933ba7-ffdf-4e89-9aae-ae19d42f4315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.052 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.053 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.053 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Refreshing network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:27:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:16 compute-2 nova_compute[232428]: 2025-11-29 08:27:16.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:17.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:17 compute-2 ceph-mon[77138]: pgmap v2687: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 43 KiB/s wr, 115 op/s
Nov 29 08:27:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:27:18 compute-2 ovn_controller[134375]: 2025-11-29T08:27:18Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:a8:10 10.100.0.10
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.384 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updated VIF entry in instance network info cache for port dc933ba7-ffdf-4e89-9aae-ae19d42f4315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.385 232432 DEBUG nova.network.neutron [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [{"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:18 compute-2 nova_compute[232428]: 2025-11-29 08:27:18.419 232432 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1118af2-2266-48e4-a246-9549c68ddaa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:19.747 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:27:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:19.749 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:27:19 compute-2 nova_compute[232428]: 2025-11-29 08:27:19.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:19.749 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:19 compute-2 ceph-mon[77138]: pgmap v2688: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 110 op/s
Nov 29 08:27:20 compute-2 sudo[302572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:20 compute-2 sudo[302572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:20 compute-2 sudo[302572]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:20 compute-2 sudo[302597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:20 compute-2 sudo[302597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:20 compute-2 sudo[302597]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:20 compute-2 nova_compute[232428]: 2025-11-29 08:27:20.348 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:21 compute-2 nova_compute[232428]: 2025-11-29 08:27:21.727 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:21 compute-2 ceph-mon[77138]: pgmap v2689: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 43 KiB/s wr, 82 op/s
Nov 29 08:27:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:23.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:23 compute-2 ceph-mon[77138]: pgmap v2690: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 746 KiB/s rd, 42 KiB/s wr, 56 op/s
Nov 29 08:27:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:25 compute-2 nova_compute[232428]: 2025-11-29 08:27:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:25 compute-2 nova_compute[232428]: 2025-11-29 08:27:25.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:27:25 compute-2 nova_compute[232428]: 2025-11-29 08:27:25.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:25 compute-2 ceph-mon[77138]: pgmap v2691: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 747 KiB/s rd, 43 KiB/s wr, 57 op/s
Nov 29 08:27:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:26 compute-2 nova_compute[232428]: 2025-11-29 08:27:26.730 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2019255988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:27 compute-2 nova_compute[232428]: 2025-11-29 08:27:27.774 232432 DEBUG oslo_concurrency.lockutils [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:27 compute-2 nova_compute[232428]: 2025-11-29 08:27:27.775 232432 DEBUG oslo_concurrency.lockutils [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:27 compute-2 podman[302626]: 2025-11-29 08:27:27.782156533 +0000 UTC m=+0.174782782 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:27:27 compute-2 nova_compute[232428]: 2025-11-29 08:27:27.808 232432 INFO nova.compute.manager [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Detaching volume e575575a-0c17-43ef-9168-0fa9b5177df6
Nov 29 08:27:27 compute-2 ceph-mon[77138]: pgmap v2692: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 42 KiB/s wr, 52 op/s
Nov 29 08:27:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 08:27:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.631 232432 INFO nova.virt.block_device [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Attempting to driver detach volume e575575a-0c17-43ef-9168-0fa9b5177df6 from mountpoint /dev/vdb
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.640 232432 DEBUG nova.virt.libvirt.driver [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Attempting to detach device vdb from instance c1118af2-2266-48e4-a246-9549c68ddaa4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.641 232432 DEBUG nova.virt.libvirt.guest [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e575575a-0c17-43ef-9168-0fa9b5177df6">
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   </source>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <serial>e575575a-0c17-43ef-9168-0fa9b5177df6</serial>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]: </disk>
Nov 29 08:27:28 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.649 232432 INFO nova.virt.libvirt.driver [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully detached device vdb from instance c1118af2-2266-48e4-a246-9549c68ddaa4 from the persistent domain config.
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.649 232432 DEBUG nova.virt.libvirt.driver [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c1118af2-2266-48e4-a246-9549c68ddaa4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.650 232432 DEBUG nova.virt.libvirt.guest [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e575575a-0c17-43ef-9168-0fa9b5177df6">
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   </source>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <serial>e575575a-0c17-43ef-9168-0fa9b5177df6</serial>
Nov 29 08:27:28 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:27:28 compute-2 nova_compute[232428]: </disk>
Nov 29 08:27:28 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.705 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764404848.705252, c1118af2-2266-48e4-a246-9549c68ddaa4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.707 232432 DEBUG nova.virt.libvirt.driver [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c1118af2-2266-48e4-a246-9549c68ddaa4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.709 232432 INFO nova.virt.libvirt.driver [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully detached device vdb from instance c1118af2-2266-48e4-a246-9549c68ddaa4 from the live domain config.
Nov 29 08:27:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:27:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2660741984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:27:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2660741984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:28 compute-2 nova_compute[232428]: 2025-11-29 08:27:28.984 232432 DEBUG nova.objects.instance [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:29 compute-2 ceph-mon[77138]: osdmap e354: 3 total, 3 up, 3 in
Nov 29 08:27:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/982805940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/982805940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2873914067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2660741984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2660741984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:29 compute-2 nova_compute[232428]: 2025-11-29 08:27:29.031 232432 DEBUG oslo_concurrency.lockutils [None req-d8d9b10e-0002-47db-ba3a-771e95226996 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:29.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:30 compute-2 ceph-mon[77138]: pgmap v2694: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 29 KiB/s wr, 56 op/s
Nov 29 08:27:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2590103841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:30.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.678 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.679 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.680 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.680 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.681 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.682 232432 INFO nova.compute.manager [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Terminating instance
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.684 232432 DEBUG nova.compute.manager [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:27:30 compute-2 kernel: tapdc933ba7-ff (unregistering): left promiscuous mode
Nov 29 08:27:30 compute-2 NetworkManager[48993]: <info>  [1764404850.7430] device (tapdc933ba7-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 ovn_controller[134375]: 2025-11-29T08:27:30Z|00751|binding|INFO|Releasing lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 from this chassis (sb_readonly=0)
Nov 29 08:27:30 compute-2 ovn_controller[134375]: 2025-11-29T08:27:30Z|00752|binding|INFO|Setting lport dc933ba7-ffdf-4e89-9aae-ae19d42f4315 down in Southbound
Nov 29 08:27:30 compute-2 ovn_controller[134375]: 2025-11-29T08:27:30Z|00753|binding|INFO|Removing iface tapdc933ba7-ff ovn-installed in OVS
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.758 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:30.773 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:a8:10 10.100.0.10'], port_security=['fa:16:3e:af:a8:10 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1118af2-2266-48e4-a246-9549c68ddaa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '8', 'neutron:security_group_ids': '21a5b713-336c-4fa4-b1c3-01bbb3410dc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=dc933ba7-ffdf-4e89-9aae-ae19d42f4315) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:27:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:30.774 143801 INFO neutron.agent.ovn.metadata.agent [-] Port dc933ba7-ffdf-4e89-9aae-ae19d42f4315 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis
Nov 29 08:27:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:30.776 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:27:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:30.778 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6b7264-31e8-42a3-93f3-c9a1fbc6ae87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:30 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:30.779 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Nov 29 08:27:30 compute-2 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a1.scope: Consumed 14.624s CPU time.
Nov 29 08:27:30 compute-2 systemd-machined[194747]: Machine qemu-79-instance-000000a1 terminated.
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.936 232432 INFO nova.virt.libvirt.driver [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Instance destroyed successfully.
Nov 29 08:27:30 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [NOTICE]   (302467) : haproxy version is 2.8.14-c23fe91
Nov 29 08:27:30 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [NOTICE]   (302467) : path to executable is /usr/sbin/haproxy
Nov 29 08:27:30 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [WARNING]  (302467) : Exiting Master process...
Nov 29 08:27:30 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [ALERT]    (302467) : Current worker (302469) exited with code 143 (Terminated)
Nov 29 08:27:30 compute-2 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[302463]: [WARNING]  (302467) : All workers exited. Exiting... (0)
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.938 232432 DEBUG nova.objects.instance [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid c1118af2-2266-48e4-a246-9549c68ddaa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:30 compute-2 systemd[1]: libpod-acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356.scope: Deactivated successfully.
Nov 29 08:27:30 compute-2 conmon[302463]: conmon acdfe0cf5af206279675 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356.scope/container/memory.events
Nov 29 08:27:30 compute-2 podman[302680]: 2025-11-29 08:27:30.950347007 +0000 UTC m=+0.063698084 container died acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.961 232432 DEBUG nova.virt.libvirt.vif [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1775463929',display_name='tempest-ServerRescueNegativeTestJSON-server-1775463929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1775463929',id=161,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmo+eet4YBY7wXcrzDQzITBcUSszsOuTXPJOsSPetwgqxs8tnSNHiHLo4P9tBVRJry94mJeN6BGQc8NI6+0zP4qONnsq3uMb4XX3eYuPLEZknBDW+VjJB6uAaoViaI9RQ==',key_name='tempest-keypair-928415713',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-bgjax5g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c1118af2-2266-48e4-a246-9549c68ddaa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.962 232432 DEBUG nova.network.os_vif_util [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "address": "fa:16:3e:af:a8:10", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc933ba7-ff", "ovs_interfaceid": "dc933ba7-ffdf-4e89-9aae-ae19d42f4315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.964 232432 DEBUG nova.network.os_vif_util [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.964 232432 DEBUG os_vif [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.969 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc933ba7-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.973 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:30 compute-2 nova_compute[232428]: 2025-11-29 08:27:30.981 232432 INFO os_vif [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:a8:10,bridge_name='br-int',has_traffic_filtering=True,id=dc933ba7-ffdf-4e89-9aae-ae19d42f4315,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc933ba7-ff')
Nov 29 08:27:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-6e5ad83f56e4b568e877ddc060067be12539bc0cc7fa093072e7464a1663c85b-merged.mount: Deactivated successfully.
Nov 29 08:27:30 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356-userdata-shm.mount: Deactivated successfully.
Nov 29 08:27:31 compute-2 podman[302680]: 2025-11-29 08:27:31.021306309 +0000 UTC m=+0.134657386 container cleanup acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:27:31 compute-2 systemd[1]: libpod-conmon-acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356.scope: Deactivated successfully.
Nov 29 08:27:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:31.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:31 compute-2 podman[302733]: 2025-11-29 08:27:31.128712282 +0000 UTC m=+0.072127900 container remove acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.137 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5857654-b3f7-4c29-bf04-f3b2fe3d15b3]: (4, ('Sat Nov 29 08:27:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356)\nacdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356\nSat Nov 29 08:27:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (acdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356)\nacdfe0cf5af20627967527557aee33eb59bcc7fbe70d584eeb828c80822bc356\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.140 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[45d4e44b-36ff-46eb-8fc1-748abee7bc5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.142 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/47885940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/47885940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:31 compute-2 kernel: tap7008b597-80: left promiscuous mode
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.164 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.169 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[135de666-8c0c-4bb3-b7bb-bdcbe8110382]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.188 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e51f9d-3b99-44a9-a48f-abeb3095c391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.190 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aebe299e-e5fe-4716-a8ad-d8efdf4611c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a25dcbdc-39fc-4d49-9d95-1235dd8db9ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779005, 'reachable_time': 22449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302751, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.217 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:27:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:31.217 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[45fc098d-cdd1-4bee-a506-913f58a8d40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.510 232432 INFO nova.virt.libvirt.driver [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Deleting instance files /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4_del
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.511 232432 INFO nova.virt.libvirt.driver [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Deletion of /var/lib/nova/instances/c1118af2-2266-48e4-a246-9549c68ddaa4_del complete
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.589 232432 INFO nova.compute.manager [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.590 232432 DEBUG oslo.service.loopingcall [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.590 232432 DEBUG nova.compute.manager [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.590 232432 DEBUG nova.network.neutron [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:27:31 compute-2 nova_compute[232428]: 2025-11-29 08:27:31.731 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:32 compute-2 ceph-mon[77138]: pgmap v2695: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 19 KiB/s wr, 42 op/s
Nov 29 08:27:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:27:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987975856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:27:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987975856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.799 232432 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.800 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.800 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.800 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.800 232432 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.801 232432 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-unplugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.801 232432 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.801 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.801 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.802 232432 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.802 232432 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] No waiting events found dispatching network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:32 compute-2 nova_compute[232428]: 2025-11-29 08:27:32.802 232432 WARNING nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received unexpected event network-vif-plugged-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 for instance with vm_state active and task_state deleting.
Nov 29 08:27:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1987975856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1987975856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:33 compute-2 nova_compute[232428]: 2025-11-29 08:27:33.723 232432 DEBUG nova.network.neutron [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:33 compute-2 nova_compute[232428]: 2025-11-29 08:27:33.774 232432 INFO nova.compute.manager [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Took 2.18 seconds to deallocate network for instance.
Nov 29 08:27:33 compute-2 nova_compute[232428]: 2025-11-29 08:27:33.840 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:33 compute-2 nova_compute[232428]: 2025-11-29 08:27:33.841 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.081 232432 DEBUG oslo_concurrency.processutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:34 compute-2 sudo[302755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:34 compute-2 sudo[302755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:34 compute-2 sudo[302755]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:34 compute-2 sudo[302781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:27:34 compute-2 ceph-mon[77138]: pgmap v2696: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 19 KiB/s wr, 42 op/s
Nov 29 08:27:34 compute-2 sudo[302781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:34 compute-2 sudo[302781]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:34 compute-2 sudo[302825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:34 compute-2 sudo[302825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:34 compute-2 sudo[302825]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:34 compute-2 sudo[302850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:27:34 compute-2 sudo[302850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:27:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3497163968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.590 232432 DEBUG oslo_concurrency.processutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.600 232432 DEBUG nova.compute.provider_tree [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.623 232432 DEBUG nova.scheduler.client.report [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.672 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.733 232432 INFO nova.scheduler.client.report [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Deleted allocations for instance c1118af2-2266-48e4-a246-9549c68ddaa4
Nov 29 08:27:34 compute-2 nova_compute[232428]: 2025-11-29 08:27:34.836 232432 DEBUG oslo_concurrency.lockutils [None req-e5e9a664-262d-4ef6-bc66-8c6776cd141e dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c1118af2-2266-48e4-a246-9549c68ddaa4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:34 compute-2 sudo[302850]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:35.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2507905717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3497163968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:27:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:27:35 compute-2 nova_compute[232428]: 2025-11-29 08:27:35.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:36 compute-2 nova_compute[232428]: 2025-11-29 08:27:36.273 232432 DEBUG nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Received event network-vif-deleted-dc933ba7-ffdf-4e89-9aae-ae19d42f4315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:36 compute-2 nova_compute[232428]: 2025-11-29 08:27:36.733 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:37.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 08:27:37 compute-2 ceph-mon[77138]: pgmap v2697: 305 pgs: 305 active+clean; 907 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 170 op/s
Nov 29 08:27:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2696924827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:38 compute-2 ceph-mon[77138]: pgmap v2698: 305 pgs: 305 active+clean; 795 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 240 op/s
Nov 29 08:27:38 compute-2 ceph-mon[77138]: osdmap e355: 3 total, 3 up, 3 in
Nov 29 08:27:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 08:27:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:27:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095645667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:27:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095645667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:39 compute-2 ceph-mon[77138]: osdmap e356: 3 total, 3 up, 3 in
Nov 29 08:27:39 compute-2 ceph-mon[77138]: pgmap v2701: 305 pgs: 305 active+clean; 738 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 9.4 KiB/s wr, 313 op/s
Nov 29 08:27:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4095645667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4095645667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1711814554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:27:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303912543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:27:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303912543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:40 compute-2 sudo[302911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:40 compute-2 sudo[302911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:40 compute-2 sudo[302911]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:40 compute-2 sudo[302937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:40 compute-2 sudo[302937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:40 compute-2 sudo[302937]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:40 compute-2 podman[302935]: 2025-11-29 08:27:40.336506079 +0000 UTC m=+0.099311040 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:27:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1303912543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:27:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1303912543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:27:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 08:27:40 compute-2 nova_compute[232428]: 2025-11-29 08:27:40.975 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:41 compute-2 ovn_controller[134375]: 2025-11-29T08:27:41Z|00754|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:27:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:41 compute-2 nova_compute[232428]: 2025-11-29 08:27:41.175 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:41 compute-2 ceph-mon[77138]: osdmap e357: 3 total, 3 up, 3 in
Nov 29 08:27:41 compute-2 ceph-mon[77138]: pgmap v2703: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 237 op/s
Nov 29 08:27:41 compute-2 nova_compute[232428]: 2025-11-29 08:27:41.736 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 08:27:43 compute-2 ceph-mon[77138]: pgmap v2704: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 29 08:27:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:27:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:27:43 compute-2 ceph-mon[77138]: osdmap e358: 3 total, 3 up, 3 in
Nov 29 08:27:43 compute-2 sudo[302982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:27:43 compute-2 sudo[302982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:43 compute-2 sudo[302982]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:43 compute-2 sudo[303008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:27:43 compute-2 sudo[303008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:27:43 compute-2 sudo[303008]: pam_unix(sudo:session): session closed for user root
Nov 29 08:27:43 compute-2 nova_compute[232428]: 2025-11-29 08:27:43.920 232432 DEBUG nova.compute.manager [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:43 compute-2 nova_compute[232428]: 2025-11-29 08:27:43.920 232432 DEBUG nova.compute.manager [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:27:43 compute-2 nova_compute[232428]: 2025-11-29 08:27:43.920 232432 DEBUG oslo_concurrency.lockutils [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:43 compute-2 nova_compute[232428]: 2025-11-29 08:27:43.920 232432 DEBUG oslo_concurrency.lockutils [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:43 compute-2 nova_compute[232428]: 2025-11-29 08:27:43.921 232432 DEBUG nova.network.neutron [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.132 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.132 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.181 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:27:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.270 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.270 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.277 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.278 232432 INFO nova.compute.claims [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.435 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:44 compute-2 podman[303034]: 2025-11-29 08:27:44.679860601 +0000 UTC m=+0.084847396 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:27:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2016664186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:27:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2590606779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.923 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.931 232432 DEBUG nova.compute.provider_tree [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.951 232432 DEBUG nova.scheduler.client.report [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.981 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:44 compute-2 nova_compute[232428]: 2025-11-29 08:27:44.982 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.055 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.055 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.081 232432 INFO nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.120 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:27:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:45.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.231 232432 DEBUG nova.policy [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cbb3ac39ebd4876ad23f2a6d1c50166', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.240 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.242 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.242 232432 INFO nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Creating image(s)
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.267 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.294 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.319 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.323 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "5a014232164664518828c9a902557a9bd93a955f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.324 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "5a014232164664518828c9a902557a9bd93a955f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.692 232432 DEBUG nova.virt.libvirt.imagebackend [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/aad237b6-caeb-4300-902b-ba8936a7053b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/aad237b6-caeb-4300-902b-ba8936a7053b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:27:45 compute-2 ceph-mon[77138]: pgmap v2706: 305 pgs: 305 active+clean; 685 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 3.4 MiB/s wr, 187 op/s
Nov 29 08:27:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2590606779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.821 232432 DEBUG nova.network.neutron [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.822 232432 DEBUG nova.network.neutron [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.932 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404850.9305298, c1118af2-2266-48e4-a246-9549c68ddaa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.932 232432 INFO nova.compute.manager [-] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] VM Stopped (Lifecycle Event)
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:45 compute-2 nova_compute[232428]: 2025-11-29 08:27:45.992 232432 DEBUG oslo_concurrency.lockutils [req-63705dc7-5cad-4e78-bce4-1a53aa9d4e38 req-e9002c5e-a3c8-4b2d-8111-0c160582ee55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:46 compute-2 nova_compute[232428]: 2025-11-29 08:27:46.001 232432 DEBUG nova.compute.manager [None req-230c7c24-76c5-446b-a9d8-43fa951ae625 - - - - - -] [instance: c1118af2-2266-48e4-a246-9549c68ddaa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.131 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Successfully created port: ba386159-20fd-49b2-9e6a-783215282d96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.748 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.823 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.825 232432 DEBUG nova.virt.images [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] aad237b6-caeb-4300-902b-ba8936a7053b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.828 232432 DEBUG nova.privsep.utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 08:27:47 compute-2 nova_compute[232428]: 2025-11-29 08:27:47.828 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.part /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:49 compute-2 ceph-mon[77138]: pgmap v2707: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 789 KiB/s rd, 2.6 MiB/s wr, 207 op/s
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.347 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.part /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.converted" returned: 0 in 1.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.357 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.445 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f.converted --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.447 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "5a014232164664518828c9a902557a9bd93a955f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.480 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.486 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.657 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Successfully updated port: ba386159-20fd-49b2-9e6a-783215282d96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.700 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.701 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:49 compute-2 nova_compute[232428]: 2025-11-29 08:27:49.701 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:27:50 compute-2 nova_compute[232428]: 2025-11-29 08:27:50.124 232432 DEBUG nova.compute.manager [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:50 compute-2 nova_compute[232428]: 2025-11-29 08:27:50.124 232432 DEBUG nova.compute.manager [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:27:50 compute-2 nova_compute[232428]: 2025-11-29 08:27:50.124 232432 DEBUG oslo_concurrency.lockutils [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:27:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:27:50 compute-2 nova_compute[232428]: 2025-11-29 08:27:50.347 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:27:50 compute-2 nova_compute[232428]: 2025-11-29 08:27:50.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:51 compute-2 ceph-mon[77138]: pgmap v2708: 305 pgs: 305 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Nov 29 08:27:51 compute-2 ceph-mon[77138]: osdmap e359: 3 total, 3 up, 3 in
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.735 232432 DEBUG nova.network.neutron [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.771 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.771 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance network_info: |[{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.772 232432 DEBUG oslo_concurrency.lockutils [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.772 232432 DEBUG nova.network.neutron [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.886 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:51 compute-2 nova_compute[232428]: 2025-11-29 08:27:51.970 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] resizing rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.118 232432 DEBUG nova.objects.instance [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'migration_context' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.145 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.146 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Ensure instance console log exists: /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.146 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.146 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.147 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.149 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Start _get_guest_xml network_info=[{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:27:38Z,direct_url=<?>,disk_format='qcow2',id=aad237b6-caeb-4300-902b-ba8936a7053b,min_disk=0,min_ram=0,name='tempest-scenario-img--154480173',owner='f9a9decdabb1480da8f7d039e8b3d414',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:27:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': 'aad237b6-caeb-4300-902b-ba8936a7053b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.155 232432 WARNING nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.161 232432 DEBUG nova.virt.libvirt.host [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.161 232432 DEBUG nova.virt.libvirt.host [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.165 232432 DEBUG nova.virt.libvirt.host [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.165 232432 DEBUG nova.virt.libvirt.host [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.167 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.167 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:27:38Z,direct_url=<?>,disk_format='qcow2',id=aad237b6-caeb-4300-902b-ba8936a7053b,min_disk=0,min_ram=0,name='tempest-scenario-img--154480173',owner='f9a9decdabb1480da8f7d039e8b3d414',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:27:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.168 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.168 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.168 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.169 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.169 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.169 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.169 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.169 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.170 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.170 232432 DEBUG nova.virt.hardware [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.188 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:52.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.602 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.651 232432 WARNING nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.652 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 5d2af1c0-e1ed-48f9-beda-42cc37212de7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.653 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 78a00526-9c03-4c52-93a4-2275348b883a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.653 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.653 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.654 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.655 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.656 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.657 232432 INFO nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (resize_prep). Skip.
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.657 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.658 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:27:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3946102284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.690 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.693 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.727 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:52 compute-2 nova_compute[232428]: 2025-11-29 08:27:52.733 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:27:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1697026536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.352 232432 DEBUG nova.network.neutron [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.353 232432 DEBUG nova.network.neutron [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.374 232432 DEBUG oslo_concurrency.lockutils [req-8de8f315-07a5-4fa3-9df0-e9c376c6280a req-b6f1db3d-8ad7-4256-abb8-23e1a520d42f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:53 compute-2 ceph-mon[77138]: pgmap v2710: 305 pgs: 305 active+clean; 549 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Nov 29 08:27:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2459429466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3946102284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.732 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.999s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.735 232432 DEBUG nova.virt.libvirt.vif [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2108819744',display_name='tempest-TestMinimumBasicScenario-server-2108819744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2108819744',id=164,image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOH4aPpp8Txlyd8KsEts3qHu9394MaXMXIAHGOQ87/9IyEVfVwsUqqibD266w2tVmIG0iA5UFLFCmcOOGuKgAW7H/0vKZXHikjjni8gouN+3Z7UfOLVkIMOyjOHzfXmaoA==',key_name='tempest-TestMinimumBasicScenario-2074196759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-qoy6pwyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:45Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=21fbf4d2-7068-4308-a3fc-70637e7f52b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.736 232432 DEBUG nova.network.os_vif_util [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.737 232432 DEBUG nova.network.os_vif_util [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.743 232432 DEBUG nova.objects.instance [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.762 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <uuid>21fbf4d2-7068-4308-a3fc-70637e7f52b7</uuid>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <name>instance-000000a4</name>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:name>tempest-TestMinimumBasicScenario-server-2108819744</nova:name>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:27:52</nova:creationTime>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:user uuid="0cbb3ac39ebd4876ad23f2a6d1c50166">tempest-TestMinimumBasicScenario-1484268516-project-member</nova:user>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:project uuid="f9a9decdabb1480da8f7d039e8b3d414">tempest-TestMinimumBasicScenario-1484268516</nova:project>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="aad237b6-caeb-4300-902b-ba8936a7053b"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <nova:port uuid="ba386159-20fd-49b2-9e6a-783215282d96">
Nov 29 08:27:53 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <system>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="serial">21fbf4d2-7068-4308-a3fc-70637e7f52b7</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="uuid">21fbf4d2-7068-4308-a3fc-70637e7f52b7</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </system>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <os>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </os>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <features>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </features>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk">
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config">
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </source>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:27:53 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:a2:b7:4a"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <target dev="tapba386159-20"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/console.log" append="off"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <video>
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </video>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:27:53 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:27:53 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:27:53 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:27:53 compute-2 nova_compute[232428]: </domain>
Nov 29 08:27:53 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.763 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Preparing to wait for external event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.763 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.764 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.764 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.765 232432 DEBUG nova.virt.libvirt.vif [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2108819744',display_name='tempest-TestMinimumBasicScenario-server-2108819744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2108819744',id=164,image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOH4aPpp8Txlyd8KsEts3qHu9394MaXMXIAHGOQ87/9IyEVfVwsUqqibD266w2tVmIG0iA5UFLFCmcOOGuKgAW7H/0vKZXHikjjni8gouN+3Z7UfOLVkIMOyjOHzfXmaoA==',key_name='tempest-TestMinimumBasicScenario-2074196759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-qoy6pwyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:45Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=21fbf4d2-7068-4308-a3fc-70637e7f52b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.766 232432 DEBUG nova.network.os_vif_util [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.767 232432 DEBUG nova.network.os_vif_util [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.768 232432 DEBUG os_vif [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.770 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.770 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.777 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba386159-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.778 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba386159-20, col_values=(('external_ids', {'iface-id': 'ba386159-20fd-49b2-9e6a-783215282d96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:b7:4a', 'vm-uuid': '21fbf4d2-7068-4308-a3fc-70637e7f52b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:53 compute-2 NetworkManager[48993]: <info>  [1764404873.7812] manager: (tapba386159-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.791 232432 INFO os_vif [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20')
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.865 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.865 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.866 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No VIF found with MAC fa:16:3e:a2:b7:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.867 232432 INFO nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Using config drive
Nov 29 08:27:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:53 compute-2 nova_compute[232428]: 2025-11-29 08:27:53.905 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.257 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.389 232432 INFO nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Creating config drive at /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.399 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdfswya5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.563 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdfswya5" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.599 232432 DEBUG nova.storage.rbd_utils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.604 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:54 compute-2 ovn_controller[134375]: 2025-11-29T08:27:54Z|00755|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:27:54 compute-2 nova_compute[232428]: 2025-11-29 08:27:54.947 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.243 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.243 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.244 232432 DEBUG nova.network.neutron [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:27:55 compute-2 ceph-mon[77138]: pgmap v2711: 305 pgs: 305 active+clean; 549 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 497 KiB/s wr, 122 op/s
Nov 29 08:27:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1697026536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.479 232432 DEBUG oslo_concurrency.processutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config 21fbf4d2-7068-4308-a3fc-70637e7f52b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.875s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.480 232432 INFO nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Deleting local config drive /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7/disk.config because it was imported into RBD.
Nov 29 08:27:55 compute-2 kernel: tapba386159-20: entered promiscuous mode
Nov 29 08:27:55 compute-2 ovn_controller[134375]: 2025-11-29T08:27:55Z|00756|binding|INFO|Claiming lport ba386159-20fd-49b2-9e6a-783215282d96 for this chassis.
Nov 29 08:27:55 compute-2 ovn_controller[134375]: 2025-11-29T08:27:55Z|00757|binding|INFO|ba386159-20fd-49b2-9e6a-783215282d96: Claiming fa:16:3e:a2:b7:4a 10.100.0.7
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.5392] manager: (tapba386159-20): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.539 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.544 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:b7:4a 10.100.0.7'], port_security=['fa:16:3e:a2:b7:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21fbf4d2-7068-4308-a3fc-70637e7f52b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '2', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ba386159-20fd-49b2-9e6a-783215282d96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.546 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ba386159-20fd-49b2-9e6a-783215282d96 in datapath cf206693-b177-47ba-9c63-2ab4e51898ce bound to our chassis
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.547 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:27:55 compute-2 ovn_controller[134375]: 2025-11-29T08:27:55Z|00758|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 ovn-installed in OVS
Nov 29 08:27:55 compute-2 ovn_controller[134375]: 2025-11-29T08:27:55Z|00759|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 up in Southbound
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.564 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7c49a-e48e-4130-9453-e7f0158b749f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.565 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf206693-b1 in ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.568 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf206693-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.568 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fb07576e-cf21-428a-bf37-992a47f3f44a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.570 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[47406573-d2a8-40d6-88ad-d077513e8447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 systemd-udevd[303395]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:27:55 compute-2 systemd-machined[194747]: New machine qemu-80-instance-000000a4.
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.584 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c15a618c-9906-428f-ab87-30bebdafdc2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.5882] device (tapba386159-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.5891] device (tapba386159-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:27:55 compute-2 systemd[1]: Started Virtual Machine qemu-80-instance-000000a4.
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.603 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b65a932a-7850-4888-a042-68039ecb0881]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.636 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b30ad453-3b02-4500-8471-132d01741bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.641 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[057ba3fb-03ea-4126-9d6d-0dbb40c78aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.6420] manager: (tapcf206693-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.677 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[340532fc-1603-4073-8703-c7abfe97bed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.680 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb7317-a776-4760-965c-927d53fcf0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.7073] device (tapcf206693-b0): carrier: link connected
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.712 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0344884b-2b2f-480d-868a-67e07e92f918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.729 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7e6d74-01f9-4de9-b64f-357323828379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784197, 'reachable_time': 20045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303429, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.745 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb926b06-21cb-479e-95dc-704726ed14bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:f810'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784197, 'tstamp': 784197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303430, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.765 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2d58f511-5b96-493a-8d65-9a463aba46f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784197, 'reachable_time': 20045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303431, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.805 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a621693-9d65-4aea-a9b1-4e71ddb3be8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.876 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9903303a-a6f2-4506-a976-b3a65457e0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.878 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.878 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.878 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf206693-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:55 compute-2 kernel: tapcf206693-b0: entered promiscuous mode
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 NetworkManager[48993]: <info>  [1764404875.8813] manager: (tapcf206693-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.883 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf206693-b0, col_values=(('external_ids', {'iface-id': '5116070e-bd28-42f7-aba2-689a78e19083'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:27:55 compute-2 ovn_controller[134375]: 2025-11-29T08:27:55Z|00760|binding|INFO|Releasing lport 5116070e-bd28-42f7-aba2-689a78e19083 from this chassis (sb_readonly=0)
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.900 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.901 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.902 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6321bb21-3afd-4d5e-85f2-232495034783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.902 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:27:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:27:55.903 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'env', 'PROCESS_TAG=haproxy-cf206693-b177-47ba-9c63-2ab4e51898ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf206693-b177-47ba-9c63-2ab4e51898ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.985 232432 DEBUG nova.compute.manager [req-3334ed97-2e1e-4c85-a6c2-2c6c0592b753 req-c5f3195c-6ddc-42a1-83fb-ecef7ffd72bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.985 232432 DEBUG oslo_concurrency.lockutils [req-3334ed97-2e1e-4c85-a6c2-2c6c0592b753 req-c5f3195c-6ddc-42a1-83fb-ecef7ffd72bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.986 232432 DEBUG oslo_concurrency.lockutils [req-3334ed97-2e1e-4c85-a6c2-2c6c0592b753 req-c5f3195c-6ddc-42a1-83fb-ecef7ffd72bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.986 232432 DEBUG oslo_concurrency.lockutils [req-3334ed97-2e1e-4c85-a6c2-2c6c0592b753 req-c5f3195c-6ddc-42a1-83fb-ecef7ffd72bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:55 compute-2 nova_compute[232428]: 2025-11-29 08:27:55.986 232432 DEBUG nova.compute.manager [req-3334ed97-2e1e-4c85-a6c2-2c6c0592b753 req-c5f3195c-6ddc-42a1-83fb-ecef7ffd72bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Processing event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:27:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:56 compute-2 podman[303481]: 2025-11-29 08:27:56.29740216 +0000 UTC m=+0.062630612 container create d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:27:56 compute-2 systemd[1]: Started libpod-conmon-d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c.scope.
Nov 29 08:27:56 compute-2 podman[303481]: 2025-11-29 08:27:56.263612272 +0000 UTC m=+0.028840744 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:27:56 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:27:56 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f6cfce8fb6f9d89f6cd6966068ca70f43afb6515d53e891cdbb463cc9e7349/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:27:56 compute-2 podman[303481]: 2025-11-29 08:27:56.383618049 +0000 UTC m=+0.148846501 container init d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:27:56 compute-2 podman[303481]: 2025-11-29 08:27:56.391048501 +0000 UTC m=+0.156276953 container start d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:27:56 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [NOTICE]   (303501) : New worker (303503) forked
Nov 29 08:27:56 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [NOTICE]   (303501) : Loading success.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.745 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.746 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404876.744524, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.747 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Started (Lifecycle Event)
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.750 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.753 232432 INFO nova.virt.libvirt.driver [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance spawned successfully.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.753 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.775 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.780 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.785 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.786 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.786 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.786 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.787 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.787 232432 DEBUG nova.virt.libvirt.driver [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.820 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.821 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404876.744648, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.822 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Paused (Lifecycle Event)
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.854 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.858 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404876.7490783, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.858 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Resumed (Lifecycle Event)
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.868 232432 INFO nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Took 11.63 seconds to spawn the instance on the hypervisor.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.868 232432 DEBUG nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.882 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.886 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:27:56 compute-2 ceph-mon[77138]: pgmap v2712: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 29 08:27:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/430301272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.907 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.950 232432 INFO nova.compute.manager [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Took 12.71 seconds to build instance.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.979 232432 DEBUG oslo_concurrency.lockutils [None req-7f962b1c-d03f-4c24-aab2-c284b6dc3e7c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.980 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.981 232432 INFO nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:27:56 compute-2 nova_compute[232428]: 2025-11-29 08:27:56.981 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:27:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.346 232432 DEBUG nova.network.neutron [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.377 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.496 232432 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.498 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Creating file /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/00e791e3bbd44a7281092499898b8da8.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.499 232432 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/00e791e3bbd44a7281092499898b8da8.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.958 232432 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/00e791e3bbd44a7281092499898b8da8.tmp" returned: 1 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.961 232432 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/00e791e3bbd44a7281092499898b8da8.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.962 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Creating directory /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 08:27:57 compute-2 nova_compute[232428]: 2025-11-29 08:27:57.963 232432 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.091 232432 DEBUG nova.compute.manager [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.093 232432 DEBUG oslo_concurrency.lockutils [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.093 232432 DEBUG oslo_concurrency.lockutils [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.094 232432 DEBUG oslo_concurrency.lockutils [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.095 232432 DEBUG nova.compute.manager [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.096 232432 WARNING nova.compute.manager [req-e2cf319f-8dfa-4c5b-96ec-ab44c3bec21b req-f773cc7a-47e8-41af-9481-1cd95a266fce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state None.
Nov 29 08:27:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:27:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.238 232432 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.244 232432 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:27:58 compute-2 podman[303539]: 2025-11-29 08:27:58.698129981 +0000 UTC m=+0.091878797 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:27:58 compute-2 nova_compute[232428]: 2025-11-29 08:27:58.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:27:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:27:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:27:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.003000095s ======
Nov 29 08:27:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000095s
Nov 29 08:27:59 compute-2 ceph-mon[77138]: pgmap v2713: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 08:28:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:00.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:00 compute-2 sudo[303567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:00 compute-2 sudo[303567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:00 compute-2 sudo[303567]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:00 compute-2 sudo[303592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:00 compute-2 sudo[303592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:00 compute-2 sudo[303592]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:00 compute-2 ceph-mon[77138]: pgmap v2714: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 08:28:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:01.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.261 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.262 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.262 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:28:01 compute-2 nova_compute[232428]: 2025-11-29 08:28:01.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:01 compute-2 ceph-mon[77138]: pgmap v2715: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 29 08:28:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:02 compute-2 kernel: tape0c088b1-9b (unregistering): left promiscuous mode
Nov 29 08:28:02 compute-2 NetworkManager[48993]: <info>  [1764404882.9821] device (tape0c088b1-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:28:02 compute-2 ovn_controller[134375]: 2025-11-29T08:28:02Z|00761|binding|INFO|Releasing lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e from this chassis (sb_readonly=0)
Nov 29 08:28:02 compute-2 nova_compute[232428]: 2025-11-29 08:28:02.998 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 ovn_controller[134375]: 2025-11-29T08:28:03Z|00762|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e down in Southbound
Nov 29 08:28:03 compute-2 ovn_controller[134375]: 2025-11-29T08:28:03Z|00763|binding|INFO|Removing iface tape0c088b1-9b ovn-installed in OVS
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.028 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Nov 29 08:28:03 compute-2 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a3.scope: Consumed 18.091s CPU time.
Nov 29 08:28:03 compute-2 systemd-machined[194747]: Machine qemu-77-instance-000000a3 terminated.
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.059 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:cc:96 10.100.0.3'], port_security=['fa:16:3e:76:cc:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '78a00526-9c03-4c52-93a4-2275348b883a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.062 143801 INFO neutron.agent.ovn.metadata.agent [-] Port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.066 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.088 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1516eda1-264d-4b89-a05c-77205dd46d17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.123 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.125 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c087ef49-022e-495d-a195-0b20f7414d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.129 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8ec961-0648-4470-9544-3337bffc1ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.141 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.141 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.142 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:03.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.171 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c7f1af-e38a-4be3-9d71-5e287c01a05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.190 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[577cdac2-1eac-4439-a262-a688e0da979b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764320, 'reachable_time': 43939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303630, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.210 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d36b2530-6536-4024-b858-5969acade843]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764333, 'tstamp': 764333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303631, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764336, 'tstamp': 764336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303631, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.212 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.216 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.220 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.221 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.221 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.222 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.275 232432 INFO nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance shutdown successfully after 5 seconds.
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.281 232432 INFO nova.virt.libvirt.driver [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance destroyed successfully.
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.281 232432 DEBUG nova.virt.libvirt.vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',ima
ge_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.282 232432 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.282 232432 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.283 232432 DEBUG os_vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.285 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.285 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0c088b1-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.288 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.290 232432 INFO os_vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b')
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.331 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.332 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:03.333 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.577 232432 DEBUG nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.577 232432 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.578 232432 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.578 232432 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.578 232432 DEBUG nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.579 232432 WARNING nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state active and task_state resize_migrating.
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.606 232432 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.607 232432 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.607 232432 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:03 compute-2 nova_compute[232428]: 2025-11-29 08:28:03.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:03 compute-2 ceph-mon[77138]: pgmap v2716: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 75 op/s
Nov 29 08:28:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:04.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:04 compute-2 nova_compute[232428]: 2025-11-29 08:28:04.837 232432 DEBUG neutronclient.v2_0.client [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 08:28:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4065788622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.084 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.085 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:05.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.273 232432 DEBUG nova.objects.instance [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.491 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.492 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.492 232432 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.496 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.895 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.896 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.896 232432 INFO nova.compute.manager [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Attaching volume d82be9cd-deee-4312-bbbb-bb9d0726ae5c to /dev/vdb
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.946 232432 DEBUG nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.947 232432 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.947 232432 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.948 232432 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.948 232432 DEBUG nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:05 compute-2 nova_compute[232428]: 2025-11-29 08:28:05.948 232432 WARNING nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state active and task_state resize_migrated.
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.091 232432 DEBUG os_brick.utils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.094 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.110 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.111 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[a69f6f18-d8b1-49de-b3bb-5364cc016d99]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.113 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.125 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.125 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[175a1d57-fb20-4921-837b-c41a81a129ae]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:06 compute-2 ceph-mon[77138]: pgmap v2717: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.127 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.142 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.143 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5acb8a-6479-4dd9-9437-8f31f2e88910]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.145 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[66bad6ae-3d6a-4417-b166-f0f9a96d2a73]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.145 232432 DEBUG oslo_concurrency.processutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.190 232432 DEBUG oslo_concurrency.processutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.193 232432 DEBUG os_brick.initiator.connectors.lightos [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.193 232432 DEBUG os_brick.initiator.connectors.lightos [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.194 232432 DEBUG os_brick.initiator.connectors.lightos [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.194 232432 DEBUG os_brick.utils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.194 232432 DEBUG nova.virt.block_device [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating existing volume attachment record: 91f9fa53-1d75-4b68-a91b-2c4afb5002fb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:06.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.763 232432 DEBUG nova.compute.manager [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.764 232432 DEBUG nova.compute.manager [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.764 232432 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.765 232432 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.765 232432 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:28:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2803810910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.901 232432 DEBUG nova.objects.instance [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.930 232432 DEBUG nova.virt.libvirt.driver [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Attempting to attach volume d82be9cd-deee-4312-bbbb-bb9d0726ae5c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:28:06 compute-2 nova_compute[232428]: 2025-11-29 08:28:06.932 232432 DEBUG nova.virt.libvirt.guest [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:28:06 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d82be9cd-deee-4312-bbbb-bb9d0726ae5c">
Nov 29 08:28:06 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   </source>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:28:06 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:28:06 compute-2 nova_compute[232428]:   <serial>d82be9cd-deee-4312-bbbb-bb9d0726ae5c</serial>
Nov 29 08:28:06 compute-2 nova_compute[232428]: </disk>
Nov 29 08:28:06 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.078 232432 DEBUG nova.virt.libvirt.driver [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.078 232432 DEBUG nova.virt.libvirt.driver [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.079 232432 DEBUG nova.virt.libvirt.driver [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.079 232432 DEBUG nova.virt.libvirt.driver [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No VIF found with MAC fa:16:3e:a2:b7:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:28:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/774689083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2803810910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:07.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.264 232432 DEBUG oslo_concurrency.lockutils [None req-05f9962b-f9c9-425b-8474-935e75a5b86c 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:07 compute-2 nova_compute[232428]: 2025-11-29 08:28:07.395 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:08 compute-2 ceph-mon[77138]: pgmap v2718: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 79 op/s
Nov 29 08:28:08 compute-2 nova_compute[232428]: 2025-11-29 08:28:08.151 232432 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:08 compute-2 nova_compute[232428]: 2025-11-29 08:28:08.152 232432 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:08 compute-2 nova_compute[232428]: 2025-11-29 08:28:08.174 232432 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:08.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:08 compute-2 nova_compute[232428]: 2025-11-29 08:28:08.287 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:28:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2949021077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:09 compute-2 nova_compute[232428]: 2025-11-29 08:28:09.058 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2949021077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:10.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:10 compute-2 podman[303674]: 2025-11-29 08:28:10.693055462 +0000 UTC m=+0.083305959 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:28:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:11 compute-2 nova_compute[232428]: 2025-11-29 08:28:11.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:11 compute-2 nova_compute[232428]: 2025-11-29 08:28:11.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:11 compute-2 nova_compute[232428]: 2025-11-29 08:28:11.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:28:11 compute-2 ceph-mon[77138]: pgmap v2719: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 79 op/s
Nov 29 08:28:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 08:28:11 compute-2 nova_compute[232428]: 2025-11-29 08:28:11.765 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:11.879 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:11 compute-2 nova_compute[232428]: 2025-11-29 08:28:11.881 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:11.881 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.154 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:12.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:12 compute-2 ceph-mon[77138]: pgmap v2720: 305 pgs: 305 active+clean; 592 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 456 KiB/s wr, 48 op/s
Nov 29 08:28:12 compute-2 ceph-mon[77138]: osdmap e360: 3 total, 3 up, 3 in
Nov 29 08:28:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1680458643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.361 232432 DEBUG nova.compute.manager [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.362 232432 DEBUG nova.compute.manager [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.362 232432 DEBUG oslo_concurrency.lockutils [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.363 232432 DEBUG oslo_concurrency.lockutils [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:12 compute-2 nova_compute[232428]: 2025-11-29 08:28:12.363 232432 DEBUG nova.network.neutron [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:13.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.231 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.232 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2204410226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1984822392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.289 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:13 compute-2 ovn_controller[134375]: 2025-11-29T08:28:13Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:b7:4a 10.100.0.7
Nov 29 08:28:13 compute-2 ovn_controller[134375]: 2025-11-29T08:28:13Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:b7:4a 10.100.0.7
Nov 29 08:28:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:28:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2656809925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.677 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.713 232432 DEBUG nova.network.neutron [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.714 232432 DEBUG nova.network.neutron [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.760 232432 DEBUG oslo_concurrency.lockutils [req-f9a80645-ac6b-4579-94e2-abc7a6e065a3 req-3fa7f3fa-9b31-4f8f-a89d-d9eb2119dca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.780 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.780 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.784 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.784 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.784 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.787 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.788 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.788 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:28:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.971 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.973 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3785MB free_disk=20.77997589111328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.973 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:13 compute-2 nova_compute[232428]: 2025-11-29 08:28:13.973 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.052 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration for instance 78a00526-9c03-4c52-93a4-2275348b883a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.081 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating resource usage from migration f4512608-06e5-4fc1-8a5c-b2332184a36d
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.081 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting to track outgoing migration f4512608-06e5-4fc1-8a5c-b2332184a36d with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.193 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.193 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.193 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration f4512608-06e5-4fc1-8a5c-b2332184a36d is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.194 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.194 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:28:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.363 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.487 232432 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.488 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.489 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.489 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.489 232432 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.489 232432 WARNING nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state resized and task_state None.
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.490 232432 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.490 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.490 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.490 232432 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.490 232432 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.491 232432 WARNING nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state resized and task_state None.
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:28:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/644913029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.836 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.843 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.856 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:28:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:14.884 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.892 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.893 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.905 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.905 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:14 compute-2 nova_compute[232428]: 2025-11-29 08:28:14.905 232432 DEBUG nova.compute.manager [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Going to confirm migration 20 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 08:28:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:15.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:15 compute-2 nova_compute[232428]: 2025-11-29 08:28:15.430 232432 DEBUG neutronclient.v2_0.client [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 08:28:15 compute-2 nova_compute[232428]: 2025-11-29 08:28:15.431 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:15 compute-2 nova_compute[232428]: 2025-11-29 08:28:15.431 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:15 compute-2 nova_compute[232428]: 2025-11-29 08:28:15.431 232432 DEBUG nova.network.neutron [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:28:15 compute-2 nova_compute[232428]: 2025-11-29 08:28:15.431 232432 DEBUG nova.objects.instance [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'info_cache' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:28:15 compute-2 podman[303740]: 2025-11-29 08:28:15.714843621 +0000 UTC m=+0.113626408 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 08:28:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.697 232432 DEBUG nova.compute.manager [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.698 232432 DEBUG nova.compute.manager [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.698 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.699 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.699 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:16 compute-2 nova_compute[232428]: 2025-11-29 08:28:16.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:17.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:17 compute-2 nova_compute[232428]: 2025-11-29 08:28:17.361 232432 DEBUG nova.network.neutron [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:17 compute-2 nova_compute[232428]: 2025-11-29 08:28:17.394 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:17 compute-2 nova_compute[232428]: 2025-11-29 08:28:17.395 232432 DEBUG nova.objects.instance [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:28:17 compute-2 ceph-mon[77138]: pgmap v2722: 305 pgs: 305 active+clean; 592 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 533 KiB/s wr, 39 op/s
Nov 29 08:28:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2656809925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1456162095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:17 compute-2 nova_compute[232428]: 2025-11-29 08:28:17.555 232432 DEBUG nova.storage.rbd_utils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] removing snapshot(nova-resize) on rbd image(78a00526-9c03-4c52-93a4-2275348b883a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.227 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404883.2271156, 78a00526-9c03-4c52-93a4-2275348b883a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.228 232432 INFO nova.compute.manager [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Stopped (Lifecycle Event)
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.244 232432 DEBUG nova.compute.manager [None req-25c5e4f5-5d9d-41a2-945d-2785ea49430e - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.248 232432 DEBUG nova.compute.manager [None req-25c5e4f5-5d9d-41a2-945d-2785ea49430e - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:28:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:18.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.265 232432 INFO nova.compute.manager [None req-25c5e4f5-5d9d-41a2-945d-2785ea49430e - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:18 compute-2 ceph-mon[77138]: pgmap v2723: 305 pgs: 305 active+clean; 615 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Nov 29 08:28:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/644913029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:18 compute-2 ceph-mon[77138]: pgmap v2724: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Nov 29 08:28:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.549 232432 DEBUG nova.virt.libvirt.vif [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:13Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:28:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.550 232432 DEBUG nova.network.os_vif_util [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.551 232432 DEBUG nova.network.os_vif_util [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.551 232432 DEBUG os_vif [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.554 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0c088b1-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.554 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.557 232432 INFO os_vif [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b')
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.558 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.558 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:18 compute-2 nova_compute[232428]: 2025-11-29 08:28:18.655 232432 DEBUG oslo_concurrency.processutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:28:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/980111895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.119 232432 DEBUG oslo_concurrency.processutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.129 232432 DEBUG nova.compute.provider_tree [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:28:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:19.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.373 232432 DEBUG nova.scheduler.client.report [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:28:19 compute-2 ceph-mon[77138]: osdmap e361: 3 total, 3 up, 3 in
Nov 29 08:28:19 compute-2 ceph-mon[77138]: pgmap v2726: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 227 op/s
Nov 29 08:28:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/980111895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.669 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.671 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.687 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.712 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.713 232432 DEBUG nova.compute.manager [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.714 232432 DEBUG nova.compute.manager [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.715 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.715 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.715 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.728 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.861 232432 INFO nova.scheduler.client.report [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocation for migration f4512608-06e5-4fc1-8a5c-b2332184a36d
Nov 29 08:28:19 compute-2 nova_compute[232428]: 2025-11-29 08:28:19.915 232432 DEBUG oslo_concurrency.lockutils [None req-0ccf653e-c3ad-400f-bc02-2dc53011a69b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:20 compute-2 sshd-session[303821]: Invalid user sol from 45.148.10.240 port 44532
Nov 29 08:28:20 compute-2 sshd-session[303821]: Connection closed by invalid user sol 45.148.10.240 port 44532 [preauth]
Nov 29 08:28:20 compute-2 sudo[303823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:20 compute-2 sudo[303823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:20 compute-2 sudo[303823]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3296194091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:20 compute-2 sudo[303848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:20 compute-2 sudo[303848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:20 compute-2 sudo[303848]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.019 232432 DEBUG nova.compute.manager [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.020 232432 DEBUG nova.compute.manager [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.021 232432 DEBUG oslo_concurrency.lockutils [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:21.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.644 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.645 232432 DEBUG nova.network.neutron [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.664 232432 DEBUG oslo_concurrency.lockutils [req-f69389f0-1f21-4597-95c0-08eae5ffffaf req-ecb00d76-7b94-4191-9d2c-17070aa87b8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.665 232432 DEBUG oslo_concurrency.lockutils [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.665 232432 DEBUG nova.network.neutron [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:21 compute-2 ceph-mon[77138]: pgmap v2727: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.916 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.917 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.917 232432 INFO nova.compute.manager [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Rebooting instance
Nov 29 08:28:21 compute-2 nova_compute[232428]: 2025-11-29 08:28:21.945 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 08:28:22 compute-2 nova_compute[232428]: 2025-11-29 08:28:22.778 232432 DEBUG nova.network.neutron [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:22 compute-2 nova_compute[232428]: 2025-11-29 08:28:22.779 232432 DEBUG nova.network.neutron [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:22 compute-2 nova_compute[232428]: 2025-11-29 08:28:22.802 232432 DEBUG oslo_concurrency.lockutils [req-07d4f7e8-2328-44e1-8cd5-07209093cf39 req-0ff4ef69-bf46-4172-bef3-68d0e7e146a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:22 compute-2 nova_compute[232428]: 2025-11-29 08:28:22.803 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:22 compute-2 nova_compute[232428]: 2025-11-29 08:28:22.803 232432 DEBUG nova.network.neutron [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:28:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:23 compute-2 nova_compute[232428]: 2025-11-29 08:28:23.293 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:23 compute-2 ceph-mon[77138]: pgmap v2728: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Nov 29 08:28:23 compute-2 ceph-mon[77138]: osdmap e362: 3 total, 3 up, 3 in
Nov 29 08:28:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:24 compute-2 nova_compute[232428]: 2025-11-29 08:28:24.191 232432 DEBUG nova.network.neutron [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:24 compute-2 nova_compute[232428]: 2025-11-29 08:28:24.220 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:24 compute-2 nova_compute[232428]: 2025-11-29 08:28:24.222 232432 DEBUG nova.compute.manager [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:28:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:28:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:25 compute-2 ceph-mon[77138]: pgmap v2730: 305 pgs: 305 active+clean; 672 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 130 op/s
Nov 29 08:28:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3816501715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/303436852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:26 compute-2 nova_compute[232428]: 2025-11-29 08:28:26.773 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:27 compute-2 kernel: tapba386159-20 (unregistering): left promiscuous mode
Nov 29 08:28:27 compute-2 NetworkManager[48993]: <info>  [1764404907.2481] device (tapba386159-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.258 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 ovn_controller[134375]: 2025-11-29T08:28:27Z|00764|binding|INFO|Releasing lport ba386159-20fd-49b2-9e6a-783215282d96 from this chassis (sb_readonly=0)
Nov 29 08:28:27 compute-2 ovn_controller[134375]: 2025-11-29T08:28:27Z|00765|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 down in Southbound
Nov 29 08:28:27 compute-2 ovn_controller[134375]: 2025-11-29T08:28:27Z|00766|binding|INFO|Removing iface tapba386159-20 ovn-installed in OVS
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.263 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.266 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:b7:4a 10.100.0.7'], port_security=['fa:16:3e:a2:b7:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21fbf4d2-7068-4308-a3fc-70637e7f52b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '5', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67 37101917-f07f-4849-ae27-7097b8e78597', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ba386159-20fd-49b2-9e6a-783215282d96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.267 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ba386159-20fd-49b2-9e6a-783215282d96 in datapath cf206693-b177-47ba-9c63-2ab4e51898ce unbound from our chassis
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.268 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf206693-b177-47ba-9c63-2ab4e51898ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.270 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4dd7ea-6112-4da5-84ab-b711773cc0e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.271 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace which is not needed anymore
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.277 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Nov 29 08:28:27 compute-2 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Consumed 16.008s CPU time.
Nov 29 08:28:27 compute-2 systemd-machined[194747]: Machine qemu-80-instance-000000a4 terminated.
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [NOTICE]   (303501) : haproxy version is 2.8.14-c23fe91
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [NOTICE]   (303501) : path to executable is /usr/sbin/haproxy
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [WARNING]  (303501) : Exiting Master process...
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [WARNING]  (303501) : Exiting Master process...
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [ALERT]    (303501) : Current worker (303503) exited with code 143 (Terminated)
Nov 29 08:28:27 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[303497]: [WARNING]  (303501) : All workers exited. Exiting... (0)
Nov 29 08:28:27 compute-2 systemd[1]: libpod-d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c.scope: Deactivated successfully.
Nov 29 08:28:27 compute-2 podman[303899]: 2025-11-29 08:28:27.432270321 +0000 UTC m=+0.049676946 container died d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:28:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-03f6cfce8fb6f9d89f6cd6966068ca70f43afb6515d53e891cdbb463cc9e7349-merged.mount: Deactivated successfully.
Nov 29 08:28:27 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c-userdata-shm.mount: Deactivated successfully.
Nov 29 08:28:27 compute-2 podman[303899]: 2025-11-29 08:28:27.469964441 +0000 UTC m=+0.087371076 container cleanup d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:28:27 compute-2 systemd[1]: libpod-conmon-d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c.scope: Deactivated successfully.
Nov 29 08:28:27 compute-2 NetworkManager[48993]: <info>  [1764404907.4909] manager: (tapba386159-20): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 podman[303929]: 2025-11-29 08:28:27.549567373 +0000 UTC m=+0.050673018 container remove d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.556 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f23b5ca7-9bc3-4ea4-88ed-db889edd494c]: (4, ('Sat Nov 29 08:28:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c)\nd4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c\nSat Nov 29 08:28:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (d4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c)\nd4b57da0dbc84a5f5b8a48dfbdba09a0edf3054dcc48f86827fe51cff48fea6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.557 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e607d725-db90-4205-8e7c-5ae3762d1f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.558 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 kernel: tapcf206693-b0: left promiscuous mode
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.583 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcbd08f-cb22-4086-97c2-29c22a286c3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.602 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9343911b-d669-4e04-8b01-93b7c35e5891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.604 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[23db230c-dbea-4ee9-bc66-475db39744d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.623 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a91efa3d-0ffe-478d-a453-d9f56817e3ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784189, 'reachable_time': 32679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303956, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.627 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:28:27 compute-2 systemd[1]: run-netns-ovnmeta\x2dcf206693\x2db177\x2d47ba\x2d9c63\x2d2ab4e51898ce.mount: Deactivated successfully.
Nov 29 08:28:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:27.627 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[77bfdb3c-fccc-4a07-8167-9ed422cb2471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.750 232432 DEBUG nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.751 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.751 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.752 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.752 232432 DEBUG nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.753 232432 WARNING nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state reboot_started.
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.754 232432 DEBUG nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.754 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.755 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.755 232432 DEBUG oslo_concurrency.lockutils [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.755 232432 DEBUG nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.756 232432 WARNING nova.compute.manager [req-97087dcc-a2c9-44ef-b553-a7951f638c05 req-e8487abd-9a8d-4f59-9a18-4d8fe714cc04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state reboot_started.
Nov 29 08:28:27 compute-2 ceph-mon[77138]: pgmap v2731: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 5.2 MiB/s wr, 68 op/s
Nov 29 08:28:27 compute-2 nova_compute[232428]: 2025-11-29 08:28:27.969 232432 INFO nova.virt.libvirt.driver [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance shutdown successfully.
Nov 29 08:28:28 compute-2 kernel: tapba386159-20: entered promiscuous mode
Nov 29 08:28:28 compute-2 systemd-udevd[303879]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.0271] manager: (tapba386159-20): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 08:28:28 compute-2 ovn_controller[134375]: 2025-11-29T08:28:28Z|00767|binding|INFO|Claiming lport ba386159-20fd-49b2-9e6a-783215282d96 for this chassis.
Nov 29 08:28:28 compute-2 ovn_controller[134375]: 2025-11-29T08:28:28Z|00768|binding|INFO|ba386159-20fd-49b2-9e6a-783215282d96: Claiming fa:16:3e:a2:b7:4a 10.100.0.7
Nov 29 08:28:28 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.029 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.0378] device (tapba386159-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.0386] device (tapba386159-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:28:28 compute-2 ovn_controller[134375]: 2025-11-29T08:28:28Z|00769|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 ovn-installed in OVS
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.049 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 systemd-machined[194747]: New machine qemu-81-instance-000000a4.
Nov 29 08:28:28 compute-2 systemd[1]: Started Virtual Machine qemu-81-instance-000000a4.
Nov 29 08:28:28 compute-2 ovn_controller[134375]: 2025-11-29T08:28:28Z|00770|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 up in Southbound
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.115 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:b7:4a 10.100.0.7'], port_security=['fa:16:3e:a2:b7:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21fbf4d2-7068-4308-a3fc-70637e7f52b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '6', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67 37101917-f07f-4849-ae27-7097b8e78597', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ba386159-20fd-49b2-9e6a-783215282d96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.116 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ba386159-20fd-49b2-9e6a-783215282d96 in datapath cf206693-b177-47ba-9c63-2ab4e51898ce bound to our chassis
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.118 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.232 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[feb357eb-9234-4fa8-a82a-a4304f80d59a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.233 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf206693-b1 in ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.235 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf206693-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.235 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f4760389-591b-4e0f-b7f6-fb4fecfef0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.236 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f19171-e6cb-4d67-a2c0-424fa9b0be0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.249 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf30de2-c7f7-490d-80c7-9866faa3f3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.275 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed2d232-9c45-4770-b17b-ab1ac58009a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.295 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.309 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[44eae807-2136-46fd-8f73-4a62c17486fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.3179] manager: (tapcf206693-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.319 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d1ab95-aafa-4eda-ab11-c795efb3ab40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.355 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[04dc2a50-d51c-49bf-b86b-d342d3662962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.359 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[41f8b698-9dad-4097-9486-39f9e800f860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.3884] device (tapcf206693-b0): carrier: link connected
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.397 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b581e797-c633-4bd6-ac2e-b61a6f394c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.419 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b27d39a1-908a-4e79-8584-d54b05a24333]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787465, 'reachable_time': 44968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304005, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.439 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[889d026f-346c-47ad-b5aa-cf91b470bb3c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:f810'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787465, 'tstamp': 787465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304006, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.461 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c26d4e04-9a12-4121-98b0-3f525518d197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787465, 'reachable_time': 44968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304007, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.502 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[901f76cc-f0f5-4173-92f7-0e81cfc6fb6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.580 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[af3d375e-067b-4eeb-b211-faa6a3f4feeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.581 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.582 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.582 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf206693-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:28 compute-2 kernel: tapcf206693-b0: entered promiscuous mode
Nov 29 08:28:28 compute-2 NetworkManager[48993]: <info>  [1764404908.5843] manager: (tapcf206693-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.583 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.587 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf206693-b0, col_values=(('external_ids', {'iface-id': '5116070e-bd28-42f7-aba2-689a78e19083'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:28 compute-2 ovn_controller[134375]: 2025-11-29T08:28:28Z|00771|binding|INFO|Releasing lport 5116070e-bd28-42f7-aba2-689a78e19083 from this chassis (sb_readonly=0)
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 nova_compute[232428]: 2025-11-29 08:28:28.603 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.605 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.605 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[669469d6-2cff-4063-b5f5-8516752919b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.607 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:28:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:28.607 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'env', 'PROCESS_TAG=haproxy-cf206693-b177-47ba-9c63-2ab4e51898ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf206693-b177-47ba-9c63-2ab4e51898ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:28:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 08:28:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:29 compute-2 podman[304070]: 2025-11-29 08:28:28.955530884 +0000 UTC m=+0.028302407 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:28:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3051919099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:28:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3051919099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:28:29 compute-2 ceph-mon[77138]: osdmap e363: 3 total, 3 up, 3 in
Nov 29 08:28:29 compute-2 podman[304070]: 2025-11-29 08:28:29.07963044 +0000 UTC m=+0.152401953 container create 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:28:29 compute-2 systemd[1]: Started libpod-conmon-454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474.scope.
Nov 29 08:28:29 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:28:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10104f805cb364f7561842cd9850878d9d4cf16ecf61f8895fb913705ca3992e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:28:29 compute-2 podman[304070]: 2025-11-29 08:28:29.150315953 +0000 UTC m=+0.223087476 container init 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:28:29 compute-2 podman[304070]: 2025-11-29 08:28:29.15758354 +0000 UTC m=+0.230355033 container start 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:28:29 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [NOTICE]   (304125) : New worker (304134) forked
Nov 29 08:28:29 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [NOTICE]   (304125) : Loading success.
Nov 29 08:28:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:29 compute-2 podman[304087]: 2025-11-29 08:28:29.216525835 +0000 UTC m=+0.101421506 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.282 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 21fbf4d2-7068-4308-a3fc-70637e7f52b7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.283 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404909.282343, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.283 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Resumed (Lifecycle Event)
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.288 232432 INFO nova.virt.libvirt.driver [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance running successfully.
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.288 232432 INFO nova.virt.libvirt.driver [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance soft rebooted successfully.
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.289 232432 DEBUG nova.compute.manager [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.318 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.321 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.351 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] During sync_power_state the instance has a pending task (reboot_started). Skip.
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.351 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404909.2834957, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.352 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Started (Lifecycle Event)
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.355 232432 DEBUG oslo_concurrency.lockutils [None req-3fa6e0f2-79fb-422f-9928-040313e48640 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.388 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.396 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.838 232432 DEBUG nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.839 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.839 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.840 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.840 232432 DEBUG nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.841 232432 WARNING nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state None.
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.841 232432 DEBUG nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.841 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.842 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.842 232432 DEBUG oslo_concurrency.lockutils [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.842 232432 DEBUG nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:29 compute-2 nova_compute[232428]: 2025-11-29 08:28:29.843 232432 WARNING nova.compute.manager [req-38bccdca-4495-4585-b437-5d699b4b1f16 req-1944eb00-9d8e-49a5-939e-df05f025b669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state None.
Nov 29 08:28:30 compute-2 ceph-mon[77138]: pgmap v2732: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Nov 29 08:28:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:31 compute-2 nova_compute[232428]: 2025-11-29 08:28:31.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:32 compute-2 ceph-mon[77138]: pgmap v2734: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.3 MiB/s wr, 224 op/s
Nov 29 08:28:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:33 compute-2 nova_compute[232428]: 2025-11-29 08:28:33.298 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:33 compute-2 ceph-mon[77138]: pgmap v2735: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 180 op/s
Nov 29 08:28:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:35 compute-2 ceph-mon[77138]: pgmap v2736: 305 pgs: 305 active+clean; 688 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 1.0 MiB/s wr, 284 op/s
Nov 29 08:28:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:36 compute-2 nova_compute[232428]: 2025-11-29 08:28:36.779 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:37 compute-2 ceph-mon[77138]: pgmap v2737: 305 pgs: 305 active+clean; 702 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 930 KiB/s wr, 269 op/s
Nov 29 08:28:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:38 compute-2 nova_compute[232428]: 2025-11-29 08:28:38.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.014 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.014 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.030 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.114 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.115 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.127 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.127 232432 INFO nova.compute.claims [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:28:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.515 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:39 compute-2 ceph-mon[77138]: pgmap v2738: 305 pgs: 305 active+clean; 776 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.2 MiB/s wr, 300 op/s
Nov 29 08:28:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:28:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1264169963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.948 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:39 compute-2 nova_compute[232428]: 2025-11-29 08:28:39.956 232432 DEBUG nova.compute.provider_tree [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:28:40 compute-2 nova_compute[232428]: 2025-11-29 08:28:40.123 232432 DEBUG nova.scheduler.client.report [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:28:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:40 compute-2 sudo[304183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:40 compute-2 sudo[304183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:40 compute-2 sudo[304183]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:40 compute-2 podman[304207]: 2025-11-29 08:28:40.897797297 +0000 UTC m=+0.076341240 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:28:40 compute-2 sudo[304214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:40 compute-2 sudo[304214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:40 compute-2 sudo[304214]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:40 compute-2 nova_compute[232428]: 2025-11-29 08:28:40.925 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:40 compute-2 nova_compute[232428]: 2025-11-29 08:28:40.926 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:28:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1264169963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:41.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:41 compute-2 nova_compute[232428]: 2025-11-29 08:28:41.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:41 compute-2 nova_compute[232428]: 2025-11-29 08:28:41.787 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:28:41 compute-2 nova_compute[232428]: 2025-11-29 08:28:41.787 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:28:42 compute-2 ceph-mon[77138]: pgmap v2739: 305 pgs: 305 active+clean; 814 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.9 MiB/s wr, 266 op/s
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.213 232432 INFO nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:28:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.286 232432 DEBUG nova.policy [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd039e57f31de4717a235fc96ebd56559', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '527c6a274d1e478eadfe67139e121185', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.456 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.605 232432 INFO nova.virt.block_device [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Booting with volume 2d68fb40-6374-433d-8236-b50acd6ca7f0 at /dev/vda
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.765 232432 DEBUG os_brick.utils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.767 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.781 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.782 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[82c0ae11-94b7-4e20-8dca-94c220813b4f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.783 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.792 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.792 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[0c28540f-0c8a-42d8-8a2d-ea645128e7eb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.795 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.810 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.810 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffe890b-1adf-4f50-8583-362c78cc814f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.812 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfe7111-1109-428b-9598-29de3c06b87d]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.812 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.851 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.854 232432 DEBUG os_brick.initiator.connectors.lightos [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.855 232432 DEBUG os_brick.initiator.connectors.lightos [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.855 232432 DEBUG os_brick.initiator.connectors.lightos [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.855 232432 DEBUG os_brick.utils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (89ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:28:42 compute-2 nova_compute[232428]: 2025-11-29 08:28:42.856 232432 DEBUG nova.virt.block_device [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating existing volume attachment record: 5e8c22c1-482f-464f-8b5d-9067afb4b6a5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:28:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.300 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Successfully created port: 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:43 compute-2 ovn_controller[134375]: 2025-11-29T08:28:43Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:b7:4a 10.100.0.7
Nov 29 08:28:43 compute-2 sudo[304261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:43 compute-2 sudo[304261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:43 compute-2 sudo[304261]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.903 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.905 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.905 232432 INFO nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Creating image(s)
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.906 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.906 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Ensure instance console log exists: /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.906 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.907 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:43 compute-2 nova_compute[232428]: 2025-11-29 08:28:43.907 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:43 compute-2 sudo[304286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:28:43 compute-2 sudo[304286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:43 compute-2 sudo[304286]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:44 compute-2 sudo[304311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:44 compute-2 sudo[304311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:44 compute-2 sudo[304311]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.055 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Successfully updated port: 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:28:44 compute-2 sudo[304336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:28:44 compute-2 sudo[304336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.089 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.089 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.089 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.184 232432 DEBUG nova.compute.manager [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.184 232432 DEBUG nova.compute.manager [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.185 232432 DEBUG oslo_concurrency.lockutils [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:28:44 compute-2 ceph-mon[77138]: pgmap v2740: 305 pgs: 305 active+clean; 814 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 207 op/s
Nov 29 08:28:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1539773764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4065275847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:44.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:44 compute-2 nova_compute[232428]: 2025-11-29 08:28:44.379 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:28:44 compute-2 sudo[304336]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.253 232432 DEBUG nova.network.neutron [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:28:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2674661049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.672 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.673 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Instance network_info: |[{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.673 232432 DEBUG oslo_concurrency.lockutils [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.674 232432 DEBUG nova.network.neutron [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.679 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Start _get_guest_xml network_info=[{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2d68fb40-6374-433d-8236-b50acd6ca7f0', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2d68fb40-6374-433d-8236-b50acd6ca7f0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8d30dc23-4d84-4468-94cd-9f1300767585', 'attached_at': '', 'detached_at': '', 'volume_id': '2d68fb40-6374-433d-8236-b50acd6ca7f0', 'serial': '2d68fb40-6374-433d-8236-b50acd6ca7f0'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '5e8c22c1-482f-464f-8b5d-9067afb4b6a5', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.685 232432 WARNING nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.691 232432 DEBUG nova.virt.libvirt.host [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.691 232432 DEBUG nova.virt.libvirt.host [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.695 232432 DEBUG nova.virt.libvirt.host [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.695 232432 DEBUG nova.virt.libvirt.host [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.696 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.697 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.697 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.697 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.698 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.698 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.698 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.698 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.699 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.699 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.699 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.700 232432 DEBUG nova.virt.hardware [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.730 232432 DEBUG nova.storage.rbd_utils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image 8d30dc23-4d84-4468-94cd-9f1300767585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:28:45 compute-2 nova_compute[232428]: 2025-11-29 08:28:45.734 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:28:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3029909002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.173 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:46 compute-2 podman[304432]: 2025-11-29 08:28:46.679854215 +0000 UTC m=+0.074309616 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.784 232432 DEBUG nova.virt.libvirt.vif [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1662779044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1662779044',id=166,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-zail0yj6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:42Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=8d30dc23-4d84-4468-94cd-9f1300767585,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.785 232432 DEBUG nova.network.os_vif_util [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.786 232432 DEBUG nova.network.os_vif_util [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.787 232432 DEBUG nova.objects.instance [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:28:46 compute-2 nova_compute[232428]: 2025-11-29 08:28:46.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:46 compute-2 ceph-mon[77138]: pgmap v2741: 305 pgs: 305 active+clean; 854 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.4 MiB/s wr, 291 op/s
Nov 29 08:28:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3029909002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.581 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <uuid>8d30dc23-4d84-4468-94cd-9f1300767585</uuid>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <name>instance-000000a6</name>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1662779044</nova:name>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:28:45</nova:creationTime>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:user uuid="d039e57f31de4717a235fc96ebd56559">tempest-TestInstancesWithCinderVolumes-663978016-project-member</nova:user>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:project uuid="527c6a274d1e478eadfe67139e121185">tempest-TestInstancesWithCinderVolumes-663978016</nova:project>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <nova:port uuid="0984e0d6-449f-45b6-bead-2a6a5cc37e11">
Nov 29 08:28:47 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <system>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="serial">8d30dc23-4d84-4468-94cd-9f1300767585</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="uuid">8d30dc23-4d84-4468-94cd-9f1300767585</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </system>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <os>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </os>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <features>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </features>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8d30dc23-4d84-4468-94cd-9f1300767585_disk.config">
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </source>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-2d68fb40-6374-433d-8236-b50acd6ca7f0">
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </source>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:28:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <serial>2d68fb40-6374-433d-8236-b50acd6ca7f0</serial>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:10:a6:9c"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <target dev="tap0984e0d6-44"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/console.log" append="off"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <video>
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </video>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:28:47 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:28:47 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:28:47 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:28:47 compute-2 nova_compute[232428]: </domain>
Nov 29 08:28:47 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.583 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Preparing to wait for external event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.583 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.583 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.583 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.584 232432 DEBUG nova.virt.libvirt.vif [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1662779044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1662779044',id=166,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-zail0yj6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:42Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=8d30dc23-4d84-4468-94cd-9f1300767585,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.584 232432 DEBUG nova.network.os_vif_util [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.585 232432 DEBUG nova.network.os_vif_util [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.585 232432 DEBUG os_vif [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.586 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.586 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.592 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0984e0d6-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.593 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0984e0d6-44, col_values=(('external_ids', {'iface-id': '0984e0d6-449f-45b6-bead-2a6a5cc37e11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:a6:9c', 'vm-uuid': '8d30dc23-4d84-4468-94cd-9f1300767585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.594 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:47 compute-2 NetworkManager[48993]: <info>  [1764404927.5961] manager: (tap0984e0d6-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.603 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:47 compute-2 nova_compute[232428]: 2025-11-29 08:28:47.604 232432 INFO os_vif [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44')
Nov 29 08:28:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:48 compute-2 nova_compute[232428]: 2025-11-29 08:28:48.447 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:28:48 compute-2 nova_compute[232428]: 2025-11-29 08:28:48.448 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:28:48 compute-2 nova_compute[232428]: 2025-11-29 08:28:48.449 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:10:a6:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:28:48 compute-2 nova_compute[232428]: 2025-11-29 08:28:48.450 232432 INFO nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Using config drive
Nov 29 08:28:48 compute-2 ceph-mon[77138]: pgmap v2742: 305 pgs: 305 active+clean; 860 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 7.5 MiB/s wr, 210 op/s
Nov 29 08:28:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1333799630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:28:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:49 compute-2 nova_compute[232428]: 2025-11-29 08:28:49.187 232432 DEBUG nova.storage.rbd_utils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image 8d30dc23-4d84-4468-94cd-9f1300767585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:28:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:50.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:50 compute-2 ceph-mon[77138]: pgmap v2743: 305 pgs: 305 active+clean; 860 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 952 KiB/s rd, 6.8 MiB/s wr, 209 op/s
Nov 29 08:28:50 compute-2 nova_compute[232428]: 2025-11-29 08:28:50.504 232432 DEBUG nova.network.neutron [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:28:50 compute-2 nova_compute[232428]: 2025-11-29 08:28:50.504 232432 DEBUG nova.network.neutron [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:28:50 compute-2 nova_compute[232428]: 2025-11-29 08:28:50.581 232432 DEBUG oslo_concurrency.lockutils [req-80cdc6a7-63e6-424b-9e35-03f297c0586f req-9a3d5f76-ec3a-4da9-946a-16df858f6e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:28:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:51.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.434 232432 INFO nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Creating config drive at /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.439 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmjdx_de_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.598 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmjdx_de_" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.945 232432 DEBUG nova.storage.rbd_utils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image 8d30dc23-4d84-4468-94cd-9f1300767585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.952 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config 8d30dc23-4d84-4468-94cd-9f1300767585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:28:51 compute-2 nova_compute[232428]: 2025-11-29 08:28:51.998 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:52 compute-2 sudo[304513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:28:52 compute-2 sudo[304513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:52 compute-2 sudo[304513]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:52 compute-2 sudo[304538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:28:52 compute-2 sudo[304538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:28:52 compute-2 sudo[304538]: pam_unix(sudo:session): session closed for user root
Nov 29 08:28:52 compute-2 ceph-mon[77138]: pgmap v2744: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 137 op/s
Nov 29 08:28:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:28:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:28:52 compute-2 nova_compute[232428]: 2025-11-29 08:28:52.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:53.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:53.910 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:53 compute-2 nova_compute[232428]: 2025-11-29 08:28:53.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:53.912 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:28:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:54.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:54 compute-2 ceph-mon[77138]: pgmap v2745: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Nov 29 08:28:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3833390503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:28:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:55.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:55 compute-2 nova_compute[232428]: 2025-11-29 08:28:55.237 232432 DEBUG oslo_concurrency.processutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config 8d30dc23-4d84-4468-94cd-9f1300767585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:28:55 compute-2 nova_compute[232428]: 2025-11-29 08:28:55.238 232432 INFO nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Deleting local config drive /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585/disk.config because it was imported into RBD.
Nov 29 08:28:55 compute-2 NetworkManager[48993]: <info>  [1764404935.3100] manager: (tap0984e0d6-44): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 08:28:55 compute-2 kernel: tap0984e0d6-44: entered promiscuous mode
Nov 29 08:28:55 compute-2 ovn_controller[134375]: 2025-11-29T08:28:55Z|00772|binding|INFO|Claiming lport 0984e0d6-449f-45b6-bead-2a6a5cc37e11 for this chassis.
Nov 29 08:28:55 compute-2 ovn_controller[134375]: 2025-11-29T08:28:55Z|00773|binding|INFO|0984e0d6-449f-45b6-bead-2a6a5cc37e11: Claiming fa:16:3e:10:a6:9c 10.100.0.9
Nov 29 08:28:55 compute-2 nova_compute[232428]: 2025-11-29 08:28:55.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:55 compute-2 systemd-udevd[304577]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:28:55 compute-2 ovn_controller[134375]: 2025-11-29T08:28:55Z|00774|binding|INFO|Setting lport 0984e0d6-449f-45b6-bead-2a6a5cc37e11 ovn-installed in OVS
Nov 29 08:28:55 compute-2 nova_compute[232428]: 2025-11-29 08:28:55.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:55 compute-2 nova_compute[232428]: 2025-11-29 08:28:55.350 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:55 compute-2 NetworkManager[48993]: <info>  [1764404935.3521] device (tap0984e0d6-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:28:55 compute-2 NetworkManager[48993]: <info>  [1764404935.3530] device (tap0984e0d6-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:28:55 compute-2 systemd-machined[194747]: New machine qemu-82-instance-000000a6.
Nov 29 08:28:55 compute-2 systemd[1]: Started Virtual Machine qemu-82-instance-000000a6.
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.691 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a6:9c 10.100.0.9'], port_security=['fa:16:3e:10:a6:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d30dc23-4d84-4468-94cd-9f1300767585', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0984e0d6-449f-45b6-bead-2a6a5cc37e11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:28:55 compute-2 ovn_controller[134375]: 2025-11-29T08:28:55Z|00775|binding|INFO|Setting lport 0984e0d6-449f-45b6-bead-2a6a5cc37e11 up in Southbound
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.693 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e bound to our chassis
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.695 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.709 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee637b8-f72d-4cca-964e-4bd4d52fd88a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.710 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap371b699e-01 in ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.712 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap371b699e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.712 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c73755c8-0d7b-474c-96e8-e963d2db7107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.713 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a206c0ea-cb9f-4c7b-adb1-8d65ffb96e12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.729 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b130c-4cd0-4b82-92fa-e4e01e577f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.755 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[64335c7e-8a69-4b63-a2b1-167c01c9cc9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.784 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0904ba92-c616-4c9a-aa60-dc180a4d52e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.789 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e132897d-6f5b-4e50-9a25-0f318acbacea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 NetworkManager[48993]: <info>  [1764404935.7908] manager: (tap371b699e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.829 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc02f8c-708d-49c5-a351-4da857ed78d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.833 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f8369247-9c4c-4d54-bf08-e5b5bff6b10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 NetworkManager[48993]: <info>  [1764404935.8630] device (tap371b699e-00): carrier: link connected
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.875 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d43412c3-09c1-43ed-9809-9e6cc64d5592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.895 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b26b2a7-d88c-4067-b514-0d2183e5b874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790213, 'reachable_time': 28065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304615, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.914 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.917 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5e45f017-4a87-420b-bd3f-726ba08050e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:80be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790213, 'tstamp': 790213}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304616, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ceph-mon[77138]: pgmap v2746: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 132 op/s
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.939 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[224f7a8a-7828-459c-919e-c751bfc2d1f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790213, 'reachable_time': 28065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304617, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:55.980 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dfad9cec-c8d8-4720-ba2f-824b004653d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.055 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9827e361-a4b9-4d36-8b63-cc692cbd104d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.056 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.057 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.057 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap371b699e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:56 compute-2 NetworkManager[48993]: <info>  [1764404936.0599] manager: (tap371b699e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 08:28:56 compute-2 kernel: tap371b699e-00: entered promiscuous mode
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.062 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap371b699e-00, col_values=(('external_ids', {'iface-id': 'bf759292-fede-4172-b0b8-efd6e3442b62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:28:56 compute-2 ovn_controller[134375]: 2025-11-29T08:28:56Z|00776|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.083 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.084 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.085 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f58e6253-1ddc-4fa7-b104-0b0f574359fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.086 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:28:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:28:56.088 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'env', 'PROCESS_TAG=haproxy-371b699e-06e1-407e-ac77-9768d9a0e76e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/371b699e-06e1-407e-ac77-9768d9a0e76e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:28:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:28:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:28:56 compute-2 podman[304656]: 2025-11-29 08:28:56.504168978 +0000 UTC m=+0.035228944 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.732 232432 DEBUG nova.compute.manager [req-a301e689-49dc-425d-9eae-1e68d4db1962 req-51237f35-9326-4a54-a0f5-562ec2083994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.733 232432 DEBUG oslo_concurrency.lockutils [req-a301e689-49dc-425d-9eae-1e68d4db1962 req-51237f35-9326-4a54-a0f5-562ec2083994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.733 232432 DEBUG oslo_concurrency.lockutils [req-a301e689-49dc-425d-9eae-1e68d4db1962 req-51237f35-9326-4a54-a0f5-562ec2083994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.733 232432 DEBUG oslo_concurrency.lockutils [req-a301e689-49dc-425d-9eae-1e68d4db1962 req-51237f35-9326-4a54-a0f5-562ec2083994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.733 232432 DEBUG nova.compute.manager [req-a301e689-49dc-425d-9eae-1e68d4db1962 req-51237f35-9326-4a54-a0f5-562ec2083994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Processing event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:28:56 compute-2 nova_compute[232428]: 2025-11-29 08:28:56.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:57.465393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937465525, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1693, "num_deletes": 255, "total_data_size": 3703389, "memory_usage": 3747136, "flush_reason": "Manual Compaction"}
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.598 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.718 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404937.7172709, 8d30dc23-4d84-4468-94cd-9f1300767585 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.718 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] VM Started (Lifecycle Event)
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.720 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.724 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.728 232432 INFO nova.virt.libvirt.driver [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Instance spawned successfully.
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.729 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:28:57 compute-2 nova_compute[232428]: 2025-11-29 08:28:57.895 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937986441, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 2429582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57499, "largest_seqno": 59187, "table_properties": {"data_size": 2422494, "index_size": 4095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15891, "raw_average_key_size": 20, "raw_value_size": 2407941, "raw_average_value_size": 3139, "num_data_blocks": 178, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404809, "oldest_key_time": 1764404809, "file_creation_time": 1764404937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 521113 microseconds, and 40208 cpu microseconds.
Nov 29 08:28:57 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:28:58 compute-2 podman[304656]: 2025-11-29 08:28:58.013728522 +0000 UTC m=+1.544788488 container create ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:57.986503) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 2429582 bytes OK
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:57.986541) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:58.014156) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:58.014220) EVENT_LOG_v1 {"time_micros": 1764404938014207, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:58.014264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3695579, prev total WAL file size 3741965, number of live WAL files 2.
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:58.015671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(2372KB)], [111(10MB)]
Nov 29 08:28:58 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404938015742, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 13197566, "oldest_snapshot_seqno": -1}
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.026 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.033 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.038 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.039 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.040 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.040 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.041 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.042 232432 DEBUG nova.virt.libvirt.driver [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.256 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.258 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404937.7175498, 8d30dc23-4d84-4468-94cd-9f1300767585 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.259 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] VM Paused (Lifecycle Event)
Nov 29 08:28:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:28:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.456 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.461 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764404937.723742, 8d30dc23-4d84-4468-94cd-9f1300767585 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.461 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] VM Resumed (Lifecycle Event)
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.896 232432 INFO nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Took 14.99 seconds to spawn the instance on the hypervisor.
Nov 29 08:28:58 compute-2 nova_compute[232428]: 2025-11-29 08:28:58.897 232432 DEBUG nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.178 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.186 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:28:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:28:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:28:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.452 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8879 keys, 11321064 bytes, temperature: kUnknown
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404939681383, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 11321064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11263668, "index_size": 34116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 231266, "raw_average_key_size": 26, "raw_value_size": 11107744, "raw_average_value_size": 1251, "num_data_blocks": 1322, "num_entries": 8879, "num_filter_entries": 8879, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764404938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:28:59 compute-2 systemd[1]: Started libpod-conmon-ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008.scope.
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.681888) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11321064 bytes
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.739238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 7.9 rd, 6.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 9405, records dropped: 526 output_compression: NoCompression
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.739290) EVENT_LOG_v1 {"time_micros": 1764404939739271, "job": 70, "event": "compaction_finished", "compaction_time_micros": 1665758, "compaction_time_cpu_micros": 36459, "output_level": 6, "num_output_files": 1, "total_output_size": 11321064, "num_input_records": 9405, "num_output_records": 8879, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404939740040, "job": 70, "event": "table_file_deletion", "file_number": 113}
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404939742731, "job": 70, "event": "table_file_deletion", "file_number": 111}
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:58.015569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.742786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.742793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.742794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.742796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:28:59.742797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:28:59 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:28:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2a7a38a557a983edb5f8303ef29388b3a2b644c63126b1979a19c6d71ff896/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.835 232432 DEBUG nova.compute.manager [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.837 232432 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.838 232432 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.838 232432 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.839 232432 DEBUG nova.compute.manager [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] No waiting events found dispatching network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.839 232432 WARNING nova.compute.manager [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received unexpected event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 for instance with vm_state building and task_state spawning.
Nov 29 08:28:59 compute-2 nova_compute[232428]: 2025-11-29 08:28:59.852 232432 INFO nova.compute.manager [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Took 20.77 seconds to build instance.
Nov 29 08:28:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:28:59 compute-2 podman[304656]: 2025-11-29 08:28:59.892951278 +0000 UTC m=+3.424011304 container init ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:28:59 compute-2 ceph-mon[77138]: pgmap v2747: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 55 op/s
Nov 29 08:28:59 compute-2 podman[304656]: 2025-11-29 08:28:59.904389706 +0000 UTC m=+3.435449672 container start ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:28:59 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [NOTICE]   (304738) : New worker (304740) forked
Nov 29 08:28:59 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [NOTICE]   (304738) : Loading success.
Nov 29 08:29:00 compute-2 nova_compute[232428]: 2025-11-29 08:29:00.168 232432 DEBUG oslo_concurrency.lockutils [None req-107c4ad4-e0b9-4f16-868c-f05ab83309a4 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:00 compute-2 podman[304705]: 2025-11-29 08:29:00.21598346 +0000 UTC m=+0.612719691 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:29:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:00.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:00 compute-2 sudo[304750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:00 compute-2 sudo[304750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:00 compute-2 sudo[304750]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:01 compute-2 sudo[304775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:01 compute-2 sudo[304775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:01 compute-2 sudo[304775]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:01 compute-2 nova_compute[232428]: 2025-11-29 08:29:01.193 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:01.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:01 compute-2 nova_compute[232428]: 2025-11-29 08:29:01.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/151542145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:01 compute-2 ceph-mon[77138]: pgmap v2748: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 27 op/s
Nov 29 08:29:02 compute-2 nova_compute[232428]: 2025-11-29 08:29:02.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:02 compute-2 nova_compute[232428]: 2025-11-29 08:29:02.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:29:02 compute-2 nova_compute[232428]: 2025-11-29 08:29:02.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:29:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:02 compute-2 nova_compute[232428]: 2025-11-29 08:29:02.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:02 compute-2 ceph-mon[77138]: pgmap v2749: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 47 KiB/s wr, 51 op/s
Nov 29 08:29:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:03.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:03.332 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:03.333 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:03.334 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:03 compute-2 nova_compute[232428]: 2025-11-29 08:29:03.883 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:03 compute-2 nova_compute[232428]: 2025-11-29 08:29:03.883 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:04 compute-2 nova_compute[232428]: 2025-11-29 08:29:04.566 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:04 compute-2 nova_compute[232428]: 2025-11-29 08:29:04.567 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:04 compute-2 nova_compute[232428]: 2025-11-29 08:29:04.567 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:29:04 compute-2 nova_compute[232428]: 2025-11-29 08:29:04.567 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d2af1c0-e1ed-48f9-beda-42cc37212de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2060121373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:04 compute-2 ceph-mon[77138]: pgmap v2750: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 35 KiB/s wr, 50 op/s
Nov 29 08:29:04 compute-2 nova_compute[232428]: 2025-11-29 08:29:04.776 232432 DEBUG nova.objects.instance [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:05 compute-2 ceph-mon[77138]: pgmap v2751: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 54 KiB/s wr, 102 op/s
Nov 29 08:29:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:06.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:06 compute-2 nova_compute[232428]: 2025-11-29 08:29:06.797 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.232 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:07.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/651573233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.602 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.848 232432 DEBUG nova.compute.manager [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.848 232432 DEBUG nova.compute.manager [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.849 232432 DEBUG oslo_concurrency.lockutils [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.849 232432 DEBUG oslo_concurrency.lockutils [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:07 compute-2 nova_compute[232428]: 2025-11-29 08:29:07.849 232432 DEBUG nova.network.neutron [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:08 compute-2 ceph-mon[77138]: pgmap v2752: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 100 op/s
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.028 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [{"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.159 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.160 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.161 232432 INFO nova.compute.manager [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attaching volume e7fdb130-fae5-40cb-aa10-2d9145713cd5 to /dev/vdb
Nov 29 08:29:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-5d2af1c0-e1ed-48f9-beda-42cc37212de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.242 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.243 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.244 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.244 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.352 232432 DEBUG os_brick.utils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.354 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.378 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.379 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[ca02e465-5a43-477a-8709-5798c38ef4a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.381 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.400 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.400 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[022d9245-2d10-4f7d-81e2-4a4124abe483]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:09 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.403 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.416 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.416 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[54f06908-e41b-483f-a034-a2fc3335f8ad]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.418 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[d4378914-f2c8-4b55-ad10-a3d4c5b203e7]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.419 232432 DEBUG oslo_concurrency.processutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.468 232432 DEBUG oslo_concurrency.processutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.471 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.472 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.472 232432 DEBUG os_brick.initiator.connectors.lightos [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.473 232432 DEBUG os_brick.utils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (119ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.474 232432 DEBUG nova.virt.block_device [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating existing volume attachment record: 35525a09-f03f-43df-9c47-0609f1b58f18 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:29:09 compute-2 ceph-mon[77138]: pgmap v2753: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 25 KiB/s wr, 155 op/s
Nov 29 08:29:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1094632520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.780 232432 DEBUG nova.network.neutron [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:09 compute-2 nova_compute[232428]: 2025-11-29 08:29:09.781 232432 DEBUG nova.network.neutron [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:10 compute-2 nova_compute[232428]: 2025-11-29 08:29:10.054 232432 DEBUG oslo_concurrency.lockutils [req-b54c9c09-1692-4d5b-bd71-36e236302eb9 req-378f3254-0423-4b3f-9164-d6ece069672c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:29:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172404978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:10 compute-2 nova_compute[232428]: 2025-11-29 08:29:10.542 232432 DEBUG nova.objects.instance [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:10 compute-2 nova_compute[232428]: 2025-11-29 08:29:10.813 232432 DEBUG nova.virt.libvirt.driver [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attempting to attach volume e7fdb130-fae5-40cb-aa10-2d9145713cd5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:29:10 compute-2 nova_compute[232428]: 2025-11-29 08:29:10.817 232432 DEBUG nova.virt.libvirt.guest [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:29:10 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e7fdb130-fae5-40cb-aa10-2d9145713cd5">
Nov 29 08:29:10 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:29:10 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:29:10 compute-2 nova_compute[232428]:   <serial>e7fdb130-fae5-40cb-aa10-2d9145713cd5</serial>
Nov 29 08:29:10 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:10 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:29:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2172404978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:11 compute-2 nova_compute[232428]: 2025-11-29 08:29:11.530 232432 DEBUG nova.virt.libvirt.driver [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:11 compute-2 nova_compute[232428]: 2025-11-29 08:29:11.531 232432 DEBUG nova.virt.libvirt.driver [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:11 compute-2 nova_compute[232428]: 2025-11-29 08:29:11.531 232432 DEBUG nova.virt.libvirt.driver [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:11 compute-2 nova_compute[232428]: 2025-11-29 08:29:11.531 232432 DEBUG nova.virt.libvirt.driver [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:10:a6:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:29:11 compute-2 podman[304832]: 2025-11-29 08:29:11.658772428 +0000 UTC m=+0.061202988 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:29:11 compute-2 ceph-osd[79833]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 08:29:11 compute-2 nova_compute[232428]: 2025-11-29 08:29:11.798 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:12 compute-2 nova_compute[232428]: 2025-11-29 08:29:12.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:12 compute-2 ceph-mon[77138]: pgmap v2754: 305 pgs: 305 active+clean; 873 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 337 KiB/s wr, 175 op/s
Nov 29 08:29:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3319369817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:12.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:12 compute-2 nova_compute[232428]: 2025-11-29 08:29:12.415 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:12 compute-2 nova_compute[232428]: 2025-11-29 08:29:12.605 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:12 compute-2 nova_compute[232428]: 2025-11-29 08:29:12.904 232432 DEBUG oslo_concurrency.lockutils [None req-2798f8b9-4b08-48c0-adbc-0e0fa02f5848 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:12 compute-2 ovn_controller[134375]: 2025-11-29T08:29:12Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:a6:9c 10.100.0.9
Nov 29 08:29:12 compute-2 ovn_controller[134375]: 2025-11-29T08:29:12Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:a6:9c 10.100.0.9
Nov 29 08:29:13 compute-2 nova_compute[232428]: 2025-11-29 08:29:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:13 compute-2 nova_compute[232428]: 2025-11-29 08:29:13.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:29:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:13.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:13 compute-2 nova_compute[232428]: 2025-11-29 08:29:13.771 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:13 compute-2 nova_compute[232428]: 2025-11-29 08:29:13.771 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:13 compute-2 ceph-mon[77138]: pgmap v2755: 305 pgs: 305 active+clean; 873 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 337 KiB/s wr, 148 op/s
Nov 29 08:29:13 compute-2 nova_compute[232428]: 2025-11-29 08:29:13.900 232432 DEBUG nova.objects.instance [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.311 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.598 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.598 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.599 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.599 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:29:14 compute-2 nova_compute[232428]: 2025-11-29 08:29:14.599 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:29:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2104648148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:15 compute-2 nova_compute[232428]: 2025-11-29 08:29:15.269 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4031100585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:16 compute-2 ceph-mon[77138]: pgmap v2756: 305 pgs: 305 active+clean; 929 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Nov 29 08:29:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2104648148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3632048410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.580 232432 DEBUG nova.compute.manager [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-changed-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.581 232432 DEBUG nova.compute.manager [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing instance network info cache due to event network-changed-ba386159-20fd-49b2-9e6a-783215282d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.582 232432 DEBUG oslo_concurrency.lockutils [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.583 232432 DEBUG oslo_concurrency.lockutils [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.584 232432 DEBUG nova.network.neutron [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Refreshing network info cache for port ba386159-20fd-49b2-9e6a-783215282d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.612 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.613 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.619 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.620 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.620 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.623 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.624 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.624 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.800 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.836 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.838 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3592MB free_disk=20.713916778564453GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.838 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:16 compute-2 nova_compute[232428]: 2025-11-29 08:29:16.839 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:17 compute-2 ceph-mon[77138]: pgmap v2757: 305 pgs: 305 active+clean; 936 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 226 op/s
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.515 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.517 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.518 232432 INFO nova.compute.manager [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attaching volume cf5c779b-adfb-4e81-aa81-44b14dc653ca to /dev/vdc
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.600 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.601 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.601 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 8d30dc23-4d84-4468-94cd-9f1300767585 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.601 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.601 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.619 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.639 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.639 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.651 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:29:17 compute-2 podman[304879]: 2025-11-29 08:29:17.671227518 +0000 UTC m=+0.068934908 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.673 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.688 232432 DEBUG os_brick.utils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.689 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.704 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.704 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[a0eb8228-ec1e-4f3e-add4-4b8898f0b7ab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.706 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.716 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.716 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[4744ef5b-6190-4874-aa16-01ee52db1760]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.719 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.731 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.732 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[d596b38c-1e38-48f5-b677-e86e5a99ccd5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.733 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[3a225bd5-51c2-497d-bbe8-b53ff39e8954]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.734 232432 DEBUG oslo_concurrency.processutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.773 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.813 232432 DEBUG oslo_concurrency.processutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.816 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.816 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.816 232432 DEBUG os_brick.initiator.connectors.lightos [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.817 232432 DEBUG os_brick.utils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (128ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:29:17 compute-2 nova_compute[232428]: 2025-11-29 08:29:17.817 232432 DEBUG nova.virt.block_device [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating existing volume attachment record: e31be471-29a0-458d-a6b0-cbbc8cc7e47c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:29:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:29:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3364173565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:18 compute-2 nova_compute[232428]: 2025-11-29 08:29:18.246 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:18 compute-2 nova_compute[232428]: 2025-11-29 08:29:18.254 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:29:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:18 compute-2 nova_compute[232428]: 2025-11-29 08:29:18.864 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:29:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3364173565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:20 compute-2 nova_compute[232428]: 2025-11-29 08:29:20.303 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:29:20 compute-2 nova_compute[232428]: 2025-11-29 08:29:20.304 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:21 compute-2 sudo[304930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:21 compute-2 sudo[304930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:21 compute-2 sudo[304930]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:21 compute-2 sudo[304955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:21 compute-2 sudo[304955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:21 compute-2 sudo[304955]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:21 compute-2 ceph-mon[77138]: pgmap v2758: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 233 op/s
Nov 29 08:29:21 compute-2 nova_compute[232428]: 2025-11-29 08:29:21.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.230 232432 DEBUG nova.objects.instance [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.333 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attempting to attach volume cf5c779b-adfb-4e81-aa81-44b14dc653ca with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.336 232432 DEBUG nova.virt.libvirt.guest [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:29:22 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-cf5c779b-adfb-4e81-aa81-44b14dc653ca">
Nov 29 08:29:22 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:29:22 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   <target dev="vdc" bus="virtio"/>
Nov 29 08:29:22 compute-2 nova_compute[232428]:   <serial>cf5c779b-adfb-4e81-aa81-44b14dc653ca</serial>
Nov 29 08:29:22 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:22 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:29:22 compute-2 ceph-mon[77138]: pgmap v2759: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Nov 29 08:29:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/358310699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.568 232432 DEBUG nova.network.neutron [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updated VIF entry in instance network info cache for port ba386159-20fd-49b2-9e6a-783215282d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.569 232432 DEBUG nova.network.neutron [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [{"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.609 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.775 232432 DEBUG oslo_concurrency.lockutils [req-2a5d57fb-ce19-45a4-87ef-79463d9dd2f8 req-7b941e0e-5a61-49a8-8865-54a2e1970ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-21fbf4d2-7068-4308-a3fc-70637e7f52b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.782 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.783 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.783 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.783 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.783 232432 DEBUG nova.virt.libvirt.driver [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:10:a6:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.866 232432 DEBUG oslo_concurrency.lockutils [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.866 232432 DEBUG oslo_concurrency.lockutils [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:22 compute-2 nova_compute[232428]: 2025-11-29 08:29:22.911 232432 INFO nova.compute.manager [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Detaching volume d82be9cd-deee-4312-bbbb-bb9d0726ae5c
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.074 232432 INFO nova.virt.block_device [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Attempting to driver detach volume d82be9cd-deee-4312-bbbb-bb9d0726ae5c from mountpoint /dev/vdb
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.084 232432 DEBUG nova.virt.libvirt.driver [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Attempting to detach device vdb from instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.085 232432 DEBUG nova.virt.libvirt.guest [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d82be9cd-deee-4312-bbbb-bb9d0726ae5c">
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <serial>d82be9cd-deee-4312-bbbb-bb9d0726ae5c</serial>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:23 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.201 232432 INFO nova.virt.libvirt.driver [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully detached device vdb from instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 from the persistent domain config.
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.201 232432 DEBUG nova.virt.libvirt.driver [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.203 232432 DEBUG nova.virt.libvirt.guest [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d82be9cd-deee-4312-bbbb-bb9d0726ae5c">
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <serial>d82be9cd-deee-4312-bbbb-bb9d0726ae5c</serial>
Nov 29 08:29:23 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:29:23 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:23 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.281 232432 DEBUG oslo_concurrency.lockutils [None req-d11015d4-1921-4a77-a755-c5ce3dd91444 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.286 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764404963.2861116, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.288 232432 DEBUG nova.virt.libvirt.driver [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.290 232432 INFO nova.virt.libvirt.driver [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully detached device vdb from instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7 from the live domain config.
Nov 29 08:29:23 compute-2 ceph-mon[77138]: pgmap v2760: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 146 op/s
Nov 29 08:29:23 compute-2 nova_compute[232428]: 2025-11-29 08:29:23.946 232432 DEBUG nova.objects.instance [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:24 compute-2 nova_compute[232428]: 2025-11-29 08:29:24.298 232432 DEBUG oslo_concurrency.lockutils [None req-bc5b076d-90af-4ca1-9438-3a2261be862a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:25.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3665741576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:26 compute-2 ceph-mon[77138]: pgmap v2761: 305 pgs: 305 active+clean; 975 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Nov 29 08:29:26 compute-2 nova_compute[232428]: 2025-11-29 08:29:26.808 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:29:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/161752213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:29:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/161752213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/161752213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/161752213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:27 compute-2 nova_compute[232428]: 2025-11-29 08:29:27.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:29:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/384126076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:29:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3127293080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:29:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3127293080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.050 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.051 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.051 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.051 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.051 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.052 232432 INFO nova.compute.manager [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Terminating instance
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.053 232432 DEBUG nova.compute.manager [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:29:28 compute-2 kernel: tapba386159-20 (unregistering): left promiscuous mode
Nov 29 08:29:28 compute-2 NetworkManager[48993]: <info>  [1764404968.1762] device (tapba386159-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:29:28 compute-2 ovn_controller[134375]: 2025-11-29T08:29:28Z|00777|binding|INFO|Releasing lport ba386159-20fd-49b2-9e6a-783215282d96 from this chassis (sb_readonly=0)
Nov 29 08:29:28 compute-2 ovn_controller[134375]: 2025-11-29T08:29:28Z|00778|binding|INFO|Setting lport ba386159-20fd-49b2-9e6a-783215282d96 down in Southbound
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.191 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 ovn_controller[134375]: 2025-11-29T08:29:28Z|00779|binding|INFO|Removing iface tapba386159-20 ovn-installed in OVS
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.200 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:b7:4a 10.100.0.7'], port_security=['fa:16:3e:a2:b7:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21fbf4d2-7068-4308-a3fc-70637e7f52b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '8', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ba386159-20fd-49b2-9e6a-783215282d96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.202 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ba386159-20fd-49b2-9e6a-783215282d96 in datapath cf206693-b177-47ba-9c63-2ab4e51898ce unbound from our chassis
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.204 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf206693-b177-47ba-9c63-2ab4e51898ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.205 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4afca66e-4924-4519-aad0-885253f3dbea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.205 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace which is not needed anymore
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.213 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Nov 29 08:29:28 compute-2 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a4.scope: Consumed 16.657s CPU time.
Nov 29 08:29:28 compute-2 systemd-machined[194747]: Machine qemu-81-instance-000000a4 terminated.
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.292 232432 INFO nova.virt.libvirt.driver [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Instance destroyed successfully.
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.293 232432 DEBUG nova.objects.instance [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'resources' on Instance uuid 21fbf4d2-7068-4308-a3fc-70637e7f52b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.316 232432 DEBUG nova.virt.libvirt.vif [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2108819744',display_name='tempest-TestMinimumBasicScenario-server-2108819744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2108819744',id=164,image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOH4aPpp8Txlyd8KsEts3qHu9394MaXMXIAHGOQ87/9IyEVfVwsUqqibD266w2tVmIG0iA5UFLFCmcOOGuKgAW7H/0vKZXHikjjni8gouN+3Z7UfOLVkIMOyjOHzfXmaoA==',key_name='tempest-TestMinimumBasicScenario-2074196759',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-qoy6pwyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aad237b6-caeb-4300-902b-ba8936a7053b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:28:29Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=21fbf4d2-7068-4308-a3fc-70637e7f52b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.317 232432 DEBUG nova.network.os_vif_util [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "ba386159-20fd-49b2-9e6a-783215282d96", "address": "fa:16:3e:a2:b7:4a", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba386159-20", "ovs_interfaceid": "ba386159-20fd-49b2-9e6a-783215282d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.318 232432 DEBUG nova.network.os_vif_util [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.318 232432 DEBUG os_vif [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.321 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.321 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba386159-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.329 232432 INFO os_vif [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:b7:4a,bridge_name='br-int',has_traffic_filtering=True,id=ba386159-20fd-49b2-9e6a-783215282d96,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba386159-20')
Nov 29 08:29:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:28 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [NOTICE]   (304125) : haproxy version is 2.8.14-c23fe91
Nov 29 08:29:28 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [NOTICE]   (304125) : path to executable is /usr/sbin/haproxy
Nov 29 08:29:28 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [WARNING]  (304125) : Exiting Master process...
Nov 29 08:29:28 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [ALERT]    (304125) : Current worker (304134) exited with code 143 (Terminated)
Nov 29 08:29:28 compute-2 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[304090]: [WARNING]  (304125) : All workers exited. Exiting... (0)
Nov 29 08:29:28 compute-2 systemd[1]: libpod-454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474.scope: Deactivated successfully.
Nov 29 08:29:28 compute-2 podman[305039]: 2025-11-29 08:29:28.374765225 +0000 UTC m=+0.054847559 container died 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:29:28 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474-userdata-shm.mount: Deactivated successfully.
Nov 29 08:29:28 compute-2 systemd[1]: var-lib-containers-storage-overlay-10104f805cb364f7561842cd9850878d9d4cf16ecf61f8895fb913705ca3992e-merged.mount: Deactivated successfully.
Nov 29 08:29:28 compute-2 podman[305039]: 2025-11-29 08:29:28.408235482 +0000 UTC m=+0.088317816 container cleanup 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:29:28 compute-2 systemd[1]: libpod-conmon-454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474.scope: Deactivated successfully.
Nov 29 08:29:28 compute-2 podman[305085]: 2025-11-29 08:29:28.479565515 +0000 UTC m=+0.046330551 container remove 454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.488 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd1bf58-524c-4b5d-a9c1-5b63b84725c1]: (4, ('Sat Nov 29 08:29:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474)\n454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474\nSat Nov 29 08:29:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474)\n454077ed67a75697019536944ce5981013d1e5eff0494406374d7bf282cfa474\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.492 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a20098-c76a-4061-864a-fcc0502ad56a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.494 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 kernel: tapcf206693-b0: left promiscuous mode
Nov 29 08:29:28 compute-2 nova_compute[232428]: 2025-11-29 08:29:28.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.517 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4611b34a-9393-4dab-88e2-a2659e857092]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ceph-mon[77138]: pgmap v2762: 305 pgs: 305 active+clean; 984 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 29 08:29:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/384126076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3127293080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3127293080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.537 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa066a7-7ceb-4fc4-a25e-6ee3198f9320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.539 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd6c189-62bb-4a5f-8adb-1a3381af56fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.556 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[099e63f1-3617-4556-99ff-c64e15481f9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787457, 'reachable_time': 19261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305100, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:28 compute-2 systemd[1]: run-netns-ovnmeta\x2dcf206693\x2db177\x2d47ba\x2d9c63\x2d2ab4e51898ce.mount: Deactivated successfully.
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.559 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:29:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:28.559 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8dcc9b-6b87-4ac7-a4fb-8f9b98f03b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.101 232432 DEBUG nova.compute.manager [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.102 232432 DEBUG oslo_concurrency.lockutils [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.103 232432 DEBUG oslo_concurrency.lockutils [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.103 232432 DEBUG oslo_concurrency.lockutils [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.104 232432 DEBUG nova.compute.manager [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.104 232432 DEBUG nova.compute.manager [req-107a8c43-0f22-46dc-bc4b-704856ed8c25 req-1920870f-b439-4a2c-802a-627b228597a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-unplugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:29:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:29.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.653 232432 INFO nova.virt.libvirt.driver [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Deleting instance files /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7_del
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.654 232432 INFO nova.virt.libvirt.driver [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Deletion of /var/lib/nova/instances/21fbf4d2-7068-4308-a3fc-70637e7f52b7_del complete
Nov 29 08:29:29 compute-2 ceph-mon[77138]: pgmap v2763: 305 pgs: 305 active+clean; 1005 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 4.4 MiB/s wr, 151 op/s
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.749 232432 INFO nova.compute.manager [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Took 1.70 seconds to destroy the instance on the hypervisor.
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.751 232432 DEBUG oslo.service.loopingcall [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.751 232432 DEBUG nova.compute.manager [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:29:29 compute-2 nova_compute[232428]: 2025-11-29 08:29:29.752 232432 DEBUG nova.network.neutron [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:29:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2784251390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:30 compute-2 podman[305103]: 2025-11-29 08:29:30.696871995 +0000 UTC m=+0.091614909 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.191 232432 DEBUG nova.compute.manager [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.192 232432 DEBUG nova.compute.manager [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.192 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.192 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.192 232432 DEBUG nova.network.neutron [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.747 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:31.747 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:29:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:31.748 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.780 232432 DEBUG nova.network.neutron [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.805 232432 INFO nova.compute.manager [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Took 2.05 seconds to deallocate network for instance.
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.810 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.858 232432 DEBUG nova.compute.manager [req-11b52ed7-d899-4a45-a14f-048c9ce92531 req-3cb200fb-e5b4-4f67-9c6d-637d3ae1aeea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-deleted-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.879 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:31 compute-2 nova_compute[232428]: 2025-11-29 08:29:31.880 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:31 compute-2 ceph-mon[77138]: pgmap v2764: 305 pgs: 305 active+clean; 985 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 601 KiB/s rd, 4.3 MiB/s wr, 156 op/s
Nov 29 08:29:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3126927634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.007 232432 DEBUG oslo_concurrency.processutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:29:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:29:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1873339066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.454 232432 DEBUG oslo_concurrency.processutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.461 232432 DEBUG nova.compute.provider_tree [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.477 232432 DEBUG nova.scheduler.client.report [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.501 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.540 232432 INFO nova.scheduler.client.report [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Deleted allocations for instance 21fbf4d2-7068-4308-a3fc-70637e7f52b7
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.604 232432 DEBUG oslo_concurrency.lockutils [None req-d471801f-4b6a-41c7-9fed-823d588eae5a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.930 232432 DEBUG nova.network.neutron [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:32 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.931 232432 DEBUG nova.network.neutron [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1873339066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:32.999 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.000 232432 DEBUG nova.compute.manager [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.001 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.001 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.001 232432 DEBUG oslo_concurrency.lockutils [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "21fbf4d2-7068-4308-a3fc-70637e7f52b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.002 232432 DEBUG nova.compute.manager [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] No waiting events found dispatching network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.002 232432 WARNING nova.compute.manager [req-83f045ae-1b50-43e2-91fd-772c4e373832 req-70068e53-b581-4db4-8929-c5e73fb5327d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Received unexpected event network-vif-plugged-ba386159-20fd-49b2-9e6a-783215282d96 for instance with vm_state active and task_state deleting.
Nov 29 08:29:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:29:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:33.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.451 232432 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.452 232432 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.452 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.452 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:33 compute-2 nova_compute[232428]: 2025-11-29 08:29:33.452 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:33 compute-2 ceph-mon[77138]: pgmap v2765: 305 pgs: 305 active+clean; 985 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 601 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Nov 29 08:29:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 08:29:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:29:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:29:34 compute-2 ceph-mon[77138]: osdmap e364: 3 total, 3 up, 3 in
Nov 29 08:29:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1620097015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:35 compute-2 ceph-mon[77138]: pgmap v2767: 305 pgs: 305 active+clean; 926 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 29 08:29:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:36.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:29:36.750 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.812 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.924 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.925 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.943 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.944 232432 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.944 232432 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.944 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.944 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:36 compute-2 nova_compute[232428]: 2025-11-29 08:29:36.945 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:38 compute-2 ceph-mon[77138]: pgmap v2768: 305 pgs: 305 active+clean; 918 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 182 op/s
Nov 29 08:29:38 compute-2 nova_compute[232428]: 2025-11-29 08:29:38.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:39.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:39 compute-2 nova_compute[232428]: 2025-11-29 08:29:39.735 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:39 compute-2 nova_compute[232428]: 2025-11-29 08:29:39.736 232432 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.016 232432 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:40 compute-2 ceph-mon[77138]: pgmap v2769: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 39 KiB/s wr, 218 op/s
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.857 232432 DEBUG nova.compute.manager [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.858 232432 DEBUG nova.compute.manager [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.859 232432 DEBUG oslo_concurrency.lockutils [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.859 232432 DEBUG oslo_concurrency.lockutils [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:40 compute-2 nova_compute[232428]: 2025-11-29 08:29:40.859 232432 DEBUG nova.network.neutron [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:41.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:41 compute-2 sudo[305159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:41 compute-2 sudo[305159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:41 compute-2 sudo[305159]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:41 compute-2 sudo[305184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:41 compute-2 sudo[305184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:41 compute-2 sudo[305184]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 08:29:41 compute-2 nova_compute[232428]: 2025-11-29 08:29:41.818 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:41 compute-2 ceph-mon[77138]: pgmap v2770: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 48 KiB/s wr, 214 op/s
Nov 29 08:29:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:42 compute-2 podman[305210]: 2025-11-29 08:29:42.663795902 +0000 UTC m=+0.062870880 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 08:29:42 compute-2 ovn_controller[134375]: 2025-11-29T08:29:42Z|00780|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:29:42 compute-2 ovn_controller[134375]: 2025-11-29T08:29:42Z|00781|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:29:42 compute-2 nova_compute[232428]: 2025-11-29 08:29:42.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:42 compute-2 nova_compute[232428]: 2025-11-29 08:29:42.986 232432 DEBUG nova.network.neutron [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:42 compute-2 nova_compute[232428]: 2025-11-29 08:29:42.986 232432 DEBUG nova.network.neutron [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:43 compute-2 nova_compute[232428]: 2025-11-29 08:29:43.020 232432 DEBUG oslo_concurrency.lockutils [req-0bcadece-b2be-4f75-9edd-8a21ff1319fc req-f033da6b-d738-4a0b-a671-aff9ff6b78fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:43 compute-2 ceph-mon[77138]: osdmap e365: 3 total, 3 up, 3 in
Nov 29 08:29:43 compute-2 nova_compute[232428]: 2025-11-29 08:29:43.289 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404968.28607, 21fbf4d2-7068-4308-a3fc-70637e7f52b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:29:43 compute-2 nova_compute[232428]: 2025-11-29 08:29:43.290 232432 INFO nova.compute.manager [-] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] VM Stopped (Lifecycle Event)
Nov 29 08:29:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:43.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:43 compute-2 nova_compute[232428]: 2025-11-29 08:29:43.308 232432 DEBUG nova.compute.manager [None req-571f8114-b739-42c1-bc1b-06e93b88d0de - - - - - -] [instance: 21fbf4d2-7068-4308-a3fc-70637e7f52b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:29:43 compute-2 nova_compute[232428]: 2025-11-29 08:29:43.331 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 08:29:44 compute-2 ceph-mon[77138]: pgmap v2772: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 20 KiB/s wr, 190 op/s
Nov 29 08:29:44 compute-2 ceph-mon[77138]: osdmap e366: 3 total, 3 up, 3 in
Nov 29 08:29:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:44.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.037 232432 DEBUG oslo_concurrency.lockutils [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.038 232432 DEBUG oslo_concurrency.lockutils [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.059 232432 INFO nova.compute.manager [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Detaching volume e7fdb130-fae5-40cb-aa10-2d9145713cd5
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.221 232432 INFO nova.virt.block_device [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attempting to driver detach volume e7fdb130-fae5-40cb-aa10-2d9145713cd5 from mountpoint /dev/vdb
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.237 232432 DEBUG nova.virt.libvirt.driver [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdb from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.238 232432 DEBUG nova.virt.libvirt.guest [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e7fdb130-fae5-40cb-aa10-2d9145713cd5">
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <serial>e7fdb130-fae5-40cb-aa10-2d9145713cd5</serial>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:45 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.253 232432 INFO nova.virt.libvirt.driver [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the persistent domain config.
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.253 232432 DEBUG nova.virt.libvirt.driver [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.254 232432 DEBUG nova.virt.libvirt.guest [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-e7fdb130-fae5-40cb-aa10-2d9145713cd5">
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <serial>e7fdb130-fae5-40cb-aa10-2d9145713cd5</serial>
Nov 29 08:29:45 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:29:45 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:45 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.538 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764404985.536744, 8d30dc23-4d84-4468-94cd-9f1300767585 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.541 232432 DEBUG nova.virt.libvirt.driver [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8d30dc23-4d84-4468-94cd-9f1300767585 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.544 232432 INFO nova.virt.libvirt.driver [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the live domain config.
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.772 232432 DEBUG nova.objects.instance [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:45 compute-2 nova_compute[232428]: 2025-11-29 08:29:45.820 232432 DEBUG oslo_concurrency.lockutils [None req-770e03a2-b219-406b-8a5d-adee3fb77b42 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:46 compute-2 ceph-mon[77138]: pgmap v2774: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 289 KiB/s wr, 173 op/s
Nov 29 08:29:46 compute-2 ovn_controller[134375]: 2025-11-29T08:29:46Z|00782|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:29:46 compute-2 ovn_controller[134375]: 2025-11-29T08:29:46Z|00783|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 08:29:46 compute-2 nova_compute[232428]: 2025-11-29 08:29:46.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:46 compute-2 nova_compute[232428]: 2025-11-29 08:29:46.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:47 compute-2 ceph-mon[77138]: pgmap v2775: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 724 KiB/s rd, 305 KiB/s wr, 64 op/s
Nov 29 08:29:48 compute-2 nova_compute[232428]: 2025-11-29 08:29:48.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:48 compute-2 podman[305234]: 2025-11-29 08:29:48.677151722 +0000 UTC m=+0.080055848 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:29:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3484673518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:29:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:49.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:50 compute-2 ceph-mon[77138]: pgmap v2776: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 706 KiB/s rd, 316 KiB/s wr, 113 op/s
Nov 29 08:29:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:29:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:29:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:29:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190929503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:29:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190929503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4190929503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4190929503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.611 232432 DEBUG oslo_concurrency.lockutils [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.612 232432 DEBUG oslo_concurrency.lockutils [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.641 232432 INFO nova.compute.manager [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Detaching volume cf5c779b-adfb-4e81-aa81-44b14dc653ca
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.824 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.829 232432 INFO nova.virt.block_device [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Attempting to driver detach volume cf5c779b-adfb-4e81-aa81-44b14dc653ca from mountpoint /dev/vdc
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.837 232432 DEBUG nova.virt.libvirt.driver [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdc from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.838 232432 DEBUG nova.virt.libvirt.guest [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-cf5c779b-adfb-4e81-aa81-44b14dc653ca">
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <target dev="vdc" bus="virtio"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <serial>cf5c779b-adfb-4e81-aa81-44b14dc653ca</serial>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:51 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.846 232432 INFO nova.virt.libvirt.driver [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the persistent domain config.
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.847 232432 DEBUG nova.virt.libvirt.driver [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:29:51 compute-2 nova_compute[232428]: 2025-11-29 08:29:51.848 232432 DEBUG nova.virt.libvirt.guest [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-cf5c779b-adfb-4e81-aa81-44b14dc653ca">
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   </source>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <target dev="vdc" bus="virtio"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <serial>cf5c779b-adfb-4e81-aa81-44b14dc653ca</serial>
Nov 29 08:29:51 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 08:29:51 compute-2 nova_compute[232428]: </disk>
Nov 29 08:29:51 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:29:52 compute-2 nova_compute[232428]: 2025-11-29 08:29:52.015 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764404992.0154552, 8d30dc23-4d84-4468-94cd-9f1300767585 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:29:52 compute-2 nova_compute[232428]: 2025-11-29 08:29:52.020 232432 DEBUG nova.virt.libvirt.driver [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 8d30dc23-4d84-4468-94cd-9f1300767585 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:29:52 compute-2 nova_compute[232428]: 2025-11-29 08:29:52.022 232432 INFO nova.virt.libvirt.driver [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance 8d30dc23-4d84-4468-94cd-9f1300767585 from the live domain config.
Nov 29 08:29:52 compute-2 nova_compute[232428]: 2025-11-29 08:29:52.258 232432 DEBUG nova.objects.instance [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:29:52 compute-2 nova_compute[232428]: 2025-11-29 08:29:52.298 232432 DEBUG oslo_concurrency.lockutils [None req-03e18bfd-348a-4638-8315-fa4e2db6413f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:29:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:52 compute-2 ceph-mon[77138]: pgmap v2777: 305 pgs: 305 active+clean; 909 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 814 KiB/s rd, 498 KiB/s wr, 128 op/s
Nov 29 08:29:52 compute-2 sudo[305258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:52 compute-2 sudo[305258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:52 compute-2 sudo[305258]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 08:29:52 compute-2 sudo[305283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:29:52 compute-2 sudo[305283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:52 compute-2 sudo[305283]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:52 compute-2 sudo[305308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:29:52 compute-2 sudo[305308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:52 compute-2 sudo[305308]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:52 compute-2 sudo[305333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:29:52 compute-2 sudo[305333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:29:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:53.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:53 compute-2 sudo[305333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:29:53 compute-2 ceph-mon[77138]: pgmap v2778: 305 pgs: 305 active+clean; 909 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 751 KiB/s rd, 460 KiB/s wr, 118 op/s
Nov 29 08:29:53 compute-2 ceph-mon[77138]: osdmap e367: 3 total, 3 up, 3 in
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:29:53 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.873 232432 DEBUG nova.compute.manager [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.873 232432 DEBUG nova.compute.manager [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.874 232432 DEBUG oslo_concurrency.lockutils [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.874 232432 DEBUG oslo_concurrency.lockutils [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:29:53 compute-2 nova_compute[232428]: 2025-11-29 08:29:53.874 232432 DEBUG nova.network.neutron [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:29:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:29:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760466735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:29:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760466735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 08:29:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3021398887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3021398887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1760466735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1760466735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:55.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:55 compute-2 ceph-mon[77138]: pgmap v2780: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 884 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 680 KiB/s rd, 251 KiB/s wr, 116 op/s
Nov 29 08:29:55 compute-2 ceph-mon[77138]: osdmap e368: 3 total, 3 up, 3 in
Nov 29 08:29:55 compute-2 nova_compute[232428]: 2025-11-29 08:29:55.869 232432 DEBUG nova.network.neutron [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:29:55 compute-2 nova_compute[232428]: 2025-11-29 08:29:55.869 232432 DEBUG nova.network.neutron [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:29:55 compute-2 nova_compute[232428]: 2025-11-29 08:29:55.886 232432 DEBUG oslo_concurrency.lockutils [req-7bf98621-22a1-4f4d-aea1-9bc4e8570eda req-9b4c8899-f9ff-41aa-8796-13f996ad73d5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:29:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:29:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2280375362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:29:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2280375362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:29:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2280375362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:29:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2280375362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:29:56 compute-2 nova_compute[232428]: 2025-11-29 08:29:56.826 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:57.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:29:57 compute-2 ceph-mon[77138]: pgmap v2782: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 873 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 285 KiB/s wr, 95 op/s
Nov 29 08:29:57 compute-2 nova_compute[232428]: 2025-11-29 08:29:57.943 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:58 compute-2 nova_compute[232428]: 2025-11-29 08:29:58.339 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:29:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:29:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:29:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 08:29:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:29:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1105925869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:29:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:29:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:29:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:59.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:00.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:00 compute-2 ceph-mon[77138]: pgmap v2783: 305 pgs: 305 active+clean; 808 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 36 KiB/s wr, 129 op/s
Nov 29 08:30:00 compute-2 ceph-mon[77138]: osdmap e369: 3 total, 3 up, 3 in
Nov 29 08:30:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1105925869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:00 compute-2 sudo[305394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:00 compute-2 sudo[305394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:00 compute-2 sudo[305394]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:00 compute-2 sudo[305425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:30:00 compute-2 sudo[305425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:00 compute-2 sudo[305425]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:00 compute-2 podman[305418]: 2025-11-29 08:30:00.984359886 +0000 UTC m=+0.116302302 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:30:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:01.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3756510675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:01 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:30:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:30:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:30:01 compute-2 ceph-mon[77138]: pgmap v2785: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 42 KiB/s wr, 150 op/s
Nov 29 08:30:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:01 compute-2 sudo[305469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:01 compute-2 sudo[305469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:01 compute-2 sudo[305469]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:01 compute-2 sudo[305494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:01 compute-2 sudo[305494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:01 compute-2 sudo[305494]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:01 compute-2 nova_compute[232428]: 2025-11-29 08:30:01.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:02 compute-2 nova_compute[232428]: 2025-11-29 08:30:02.304 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:02 compute-2 nova_compute[232428]: 2025-11-29 08:30:02.305 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:02 compute-2 nova_compute[232428]: 2025-11-29 08:30:02.435 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:30:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918492867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2918492867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2904235100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:30:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:03.333 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:03.334 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:03.334 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.342 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:03 compute-2 ceph-mon[77138]: pgmap v2786: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 34 KiB/s wr, 138 op/s
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.851 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.851 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:30:03 compute-2 nova_compute[232428]: 2025-11-29 08:30:03.852 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:30:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2851973504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2851973504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:05 compute-2 nova_compute[232428]: 2025-11-29 08:30:05.190 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:30:05 compute-2 nova_compute[232428]: 2025-11-29 08:30:05.216 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:30:05 compute-2 nova_compute[232428]: 2025-11-29 08:30:05.217 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:30:05 compute-2 nova_compute[232428]: 2025-11-29 08:30:05.217 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:05 compute-2 ceph-mon[77138]: pgmap v2787: 305 pgs: 305 active+clean; 729 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 30 KiB/s wr, 146 op/s
Nov 29 08:30:06 compute-2 nova_compute[232428]: 2025-11-29 08:30:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:06 compute-2 nova_compute[232428]: 2025-11-29 08:30:06.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:07 compute-2 nova_compute[232428]: 2025-11-29 08:30:07.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:07.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:07 compute-2 ceph-mon[77138]: pgmap v2788: 305 pgs: 305 active+clean; 668 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 17 KiB/s wr, 130 op/s
Nov 29 08:30:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/915833778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/709335468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:08 compute-2 nova_compute[232428]: 2025-11-29 08:30:08.101 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:08 compute-2 nova_compute[232428]: 2025-11-29 08:30:08.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3072380682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:09.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:09 compute-2 ceph-mon[77138]: pgmap v2789: 305 pgs: 305 active+clean; 589 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 82 KiB/s rd, 26 KiB/s wr, 114 op/s
Nov 29 08:30:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:10 compute-2 nova_compute[232428]: 2025-11-29 08:30:10.621 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/737361689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:11 compute-2 nova_compute[232428]: 2025-11-29 08:30:11.832 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:11 compute-2 ceph-mon[77138]: pgmap v2790: 305 pgs: 305 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 31 KiB/s wr, 103 op/s
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.182 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.183 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.183 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.184 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.184 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.185 232432 INFO nova.compute.manager [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Terminating instance
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.186 232432 DEBUG nova.compute.manager [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:12 compute-2 kernel: tapeff55416-ac (unregistering): left promiscuous mode
Nov 29 08:30:12 compute-2 NetworkManager[48993]: <info>  [1764405012.5041] device (tapeff55416-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:30:12 compute-2 ovn_controller[134375]: 2025-11-29T08:30:12Z|00784|binding|INFO|Releasing lport eff55416-acbb-4845-9fd5-369e04da8afd from this chassis (sb_readonly=0)
Nov 29 08:30:12 compute-2 ovn_controller[134375]: 2025-11-29T08:30:12Z|00785|binding|INFO|Setting lport eff55416-acbb-4845-9fd5-369e04da8afd down in Southbound
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 ovn_controller[134375]: 2025-11-29T08:30:12Z|00786|binding|INFO|Removing iface tapeff55416-ac ovn-installed in OVS
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.529 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:f4:28 10.100.0.14'], port_security=['fa:16:3e:ae:f4:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5d2af1c0-e1ed-48f9-beda-42cc37212de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=eff55416-acbb-4845-9fd5-369e04da8afd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.531 143801 INFO neutron.agent.ovn.metadata.agent [-] Port eff55416-acbb-4845-9fd5-369e04da8afd in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.533 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.534 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f66ae99c-9662-48ef-9eb0-72aa4535df7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.535 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace which is not needed anymore
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Nov 29 08:30:12 compute-2 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009d.scope: Consumed 28.856s CPU time.
Nov 29 08:30:12 compute-2 systemd-machined[194747]: Machine qemu-75-instance-0000009d terminated.
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.638 232432 INFO nova.virt.libvirt.driver [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Instance destroyed successfully.
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.639 232432 DEBUG nova.objects.instance [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 5d2af1c0-e1ed-48f9-beda-42cc37212de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.661 232432 DEBUG nova.virt.libvirt.vif [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1682933774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1682933774',id=157,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-p7c8uwd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:37Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=5d2af1c0-e1ed-48f9-beda-42cc37212de7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.661 232432 DEBUG nova.network.os_vif_util [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "eff55416-acbb-4845-9fd5-369e04da8afd", "address": "fa:16:3e:ae:f4:28", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff55416-ac", "ovs_interfaceid": "eff55416-acbb-4845-9fd5-369e04da8afd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.662 232432 DEBUG nova.network.os_vif_util [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.663 232432 DEBUG os_vif [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.666 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.666 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff55416-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.668 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.671 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.674 232432 INFO os_vif [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f4:28,bridge_name='br-int',has_traffic_filtering=True,id=eff55416-acbb-4845-9fd5-369e04da8afd,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff55416-ac')
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [NOTICE]   (299192) : haproxy version is 2.8.14-c23fe91
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [NOTICE]   (299192) : path to executable is /usr/sbin/haproxy
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [WARNING]  (299192) : Exiting Master process...
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [WARNING]  (299192) : Exiting Master process...
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [ALERT]    (299192) : Current worker (299194) exited with code 143 (Terminated)
Nov 29 08:30:12 compute-2 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[299188]: [WARNING]  (299192) : All workers exited. Exiting... (0)
Nov 29 08:30:12 compute-2 systemd[1]: libpod-539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc.scope: Deactivated successfully.
Nov 29 08:30:12 compute-2 podman[305558]: 2025-11-29 08:30:12.696580495 +0000 UTC m=+0.045303429 container died 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:30:12 compute-2 systemd[1]: var-lib-containers-storage-overlay-33ed13964f2f846764fcac123f714b2aa91c259c5ce23c4beb5b019a38e8ccbd-merged.mount: Deactivated successfully.
Nov 29 08:30:12 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc-userdata-shm.mount: Deactivated successfully.
Nov 29 08:30:12 compute-2 podman[305558]: 2025-11-29 08:30:12.741224183 +0000 UTC m=+0.089947107 container cleanup 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:30:12 compute-2 systemd[1]: libpod-conmon-539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc.scope: Deactivated successfully.
Nov 29 08:30:12 compute-2 podman[305572]: 2025-11-29 08:30:12.782045931 +0000 UTC m=+0.058798891 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:30:12 compute-2 podman[305597]: 2025-11-29 08:30:12.802932675 +0000 UTC m=+0.041040596 container remove 539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.809 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[418a7b99-04ca-4a24-b0ad-da86a84c0012]: (4, ('Sat Nov 29 08:30:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc)\n539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc\nSat Nov 29 08:30:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc)\n539ceb00612ca4c46cafa958bd371696e224512888dc5ac299a672d2027736dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.811 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d22940d-0750-4b59-a44f-067c6c9ed154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.812 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.814 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 kernel: tapabbc8daa-d0: left promiscuous mode
Nov 29 08:30:12 compute-2 nova_compute[232428]: 2025-11-29 08:30:12.829 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.833 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb23dc1e-611e-487a-b8c3-df01332bbeb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.851 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d37edbc5-2418-4a70-8ec8-f53cb4f65f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.852 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[34fcd03d-9fed-4f75-bd8f-8857829e7213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.872 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e53b507-a00c-4957-944f-c5f226e7b314]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764313, 'reachable_time': 37135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305622, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:12 compute-2 systemd[1]: run-netns-ovnmeta\x2dabbc8daa\x2dd665\x2d4e2f\x2dbf74\x2d9e57db481441.mount: Deactivated successfully.
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.875 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:30:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:12.876 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d6ca29-7bed-47a1-965a-3c14c377034b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.087 232432 INFO nova.virt.libvirt.driver [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Deleting instance files /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7_del
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.088 232432 INFO nova.virt.libvirt.driver [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Deletion of /var/lib/nova/instances/5d2af1c0-e1ed-48f9-beda-42cc37212de7_del complete
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.162 232432 INFO nova.compute.manager [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.164 232432 DEBUG oslo.service.loopingcall [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.165 232432 DEBUG nova.compute.manager [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:30:13 compute-2 nova_compute[232428]: 2025-11-29 08:30:13.165 232432 DEBUG nova.network.neutron [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:30:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:13.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:13 compute-2 ceph-mon[77138]: pgmap v2791: 305 pgs: 305 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 89 op/s
Nov 29 08:30:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:14 compute-2 nova_compute[232428]: 2025-11-29 08:30:14.914 232432 DEBUG nova.network.neutron [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:30:14 compute-2 nova_compute[232428]: 2025-11-29 08:30:14.961 232432 INFO nova.compute.manager [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Took 1.80 seconds to deallocate network for instance.
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.001 232432 DEBUG nova.compute.manager [req-2f55b82c-4c44-45ba-b120-61c46bf248dd req-cdbb3f17-e4fd-4b4b-8e66-22bd67603e52 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Received event network-vif-deleted-eff55416-acbb-4845-9fd5-369e04da8afd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.165 232432 INFO nova.compute.manager [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.217 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.218 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.324 232432 DEBUG oslo_concurrency.processutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:15.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:30:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1555248201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.793 232432 DEBUG oslo_concurrency.processutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.799 232432 DEBUG nova.compute.provider_tree [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.820 232432 DEBUG nova.scheduler.client.report [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.850 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:15 compute-2 nova_compute[232428]: 2025-11-29 08:30:15.891 232432 INFO nova.scheduler.client.report [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance 5d2af1c0-e1ed-48f9-beda-42cc37212de7
Nov 29 08:30:15 compute-2 ceph-mon[77138]: pgmap v2792: 305 pgs: 305 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 199 KiB/s wr, 118 op/s
Nov 29 08:30:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/484854823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3417512963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1555248201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1639437657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.026 232432 DEBUG oslo_concurrency.lockutils [None req-f5d2fb07-4c4d-4b27-acf6-e83c5f9f1a93 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5d2af1c0-e1ed-48f9-beda-42cc37212de7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.231 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.231 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:30:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3344825438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.686 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.791 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.792 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:30:16 compute-2 nova_compute[232428]: 2025-11-29 08:30:16.836 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.012 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.013 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4055MB free_disk=20.94214630126953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.014 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.014 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:30:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024691413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:30:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024691413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3344825438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.194 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 8d30dc23-4d84-4468-94cd-9f1300767585 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.194 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.195 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.252 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:30:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204379754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:30:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204379754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:17.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.669 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:30:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4116127769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.725 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.741 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.766 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.795 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:30:17 compute-2 nova_compute[232428]: 2025-11-29 08:30:17.796 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:18 compute-2 ceph-mon[77138]: pgmap v2793: 305 pgs: 305 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 196 KiB/s wr, 92 op/s
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3144057685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1024691413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1024691413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4204379754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4204379754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4116127769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:18.673 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:30:18 compute-2 nova_compute[232428]: 2025-11-29 08:30:18.674 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:18.677 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:30:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:19.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.597 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.597 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.623 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:30:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:19.679 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.690 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.690 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.699 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.700 232432 INFO nova.compute.claims [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:30:19 compute-2 podman[305712]: 2025-11-29 08:30:19.709575575 +0000 UTC m=+0.092990362 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:30:19 compute-2 nova_compute[232428]: 2025-11-29 08:30:19.839 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:20 compute-2 ceph-mon[77138]: pgmap v2794: 305 pgs: 305 active+clean; 487 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 394 KiB/s wr, 164 op/s
Nov 29 08:30:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:30:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800778740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:30:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800778740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:30:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1822204355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.423 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.430 232432 DEBUG nova.compute.provider_tree [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.452 232432 DEBUG nova.scheduler.client.report [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.480 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.482 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.537 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.537 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.553 232432 INFO nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.576 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.676 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.677 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.677 232432 INFO nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Creating image(s)
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.714 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.749 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.780 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.783 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.827 232432 DEBUG nova.policy [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd45f9a4a44664af3884c15ce0f5697e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7e8e7407a7c44208a503e8225c1cf518', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.881 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.882 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.883 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.883 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:20 compute-2 ovn_controller[134375]: 2025-11-29T08:30:20Z|00787|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.920 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.925 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc8140a9-7bef-42f8-867c-13e29f022673_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:20 compute-2 nova_compute[232428]: 2025-11-29 08:30:20.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:21 compute-2 ovn_controller[134375]: 2025-11-29T08:30:21Z|00788|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.227 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/800778740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/800778740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1822204355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.609 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc8140a9-7bef-42f8-867c-13e29f022673_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.692 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] resizing rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:30:21 compute-2 sudo[305870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:21 compute-2 sudo[305870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:21 compute-2 sudo[305870]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:21 compute-2 sudo[305931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:21 compute-2 sudo[305931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:21 compute-2 sudo[305931]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.805 232432 DEBUG nova.objects.instance [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'migration_context' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.819 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.819 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Ensure instance console log exists: /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.820 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.820 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.820 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.837 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:21 compute-2 nova_compute[232428]: 2025-11-29 08:30:21.950 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Successfully created port: 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:30:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:22 compute-2 ceph-mon[77138]: pgmap v2795: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 381 KiB/s wr, 152 op/s
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.794 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Successfully updated port: 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.847 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.848 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquired lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.848 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.875 232432 DEBUG nova.compute.manager [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.875 232432 DEBUG nova.compute.manager [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing instance network info cache due to event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:30:22 compute-2 nova_compute[232428]: 2025-11-29 08:30:22.876 232432 DEBUG oslo_concurrency.lockutils [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:30:23 compute-2 nova_compute[232428]: 2025-11-29 08:30:23.015 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:30:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:23 compute-2 ceph-mon[77138]: pgmap v2796: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 372 KiB/s wr, 144 op/s
Nov 29 08:30:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.285 232432 DEBUG nova.network.neutron [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:30:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:25.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.374 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Releasing lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.374 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Instance network_info: |[{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.375 232432 DEBUG oslo_concurrency.lockutils [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.376 232432 DEBUG nova.network.neutron [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.379 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Start _get_guest_xml network_info=[{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.388 232432 WARNING nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.396 232432 DEBUG nova.virt.libvirt.host [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.397 232432 DEBUG nova.virt.libvirt.host [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.400 232432 DEBUG nova.virt.libvirt.host [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.401 232432 DEBUG nova.virt.libvirt.host [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.402 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.403 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.403 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.403 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.404 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.405 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.405 232432 DEBUG nova.virt.hardware [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.407 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:30:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3115389592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:25 compute-2 ceph-mon[77138]: pgmap v2797: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 160 op/s
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.909 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.942 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:25 compute-2 nova_compute[232428]: 2025-11-29 08:30:25.949 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:30:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3243707930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.419 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.421 232432 DEBUG nova.virt.libvirt.vif [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1520519822',display_name='tempest-TestStampPattern-server-1520519822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1520519822',id=170,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-y414pyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:20Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=dc8140a9-7bef-42f8-867c-13e29f022673,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.421 232432 DEBUG nova.network.os_vif_util [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.423 232432 DEBUG nova.network.os_vif_util [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.424 232432 DEBUG nova.objects.instance [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.497 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <uuid>dc8140a9-7bef-42f8-867c-13e29f022673</uuid>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <name>instance-000000aa</name>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:name>tempest-TestStampPattern-server-1520519822</nova:name>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:30:25</nova:creationTime>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:user uuid="d45f9a4a44664af3884c15ce0f5697e0">tempest-TestStampPattern-1730119083-project-member</nova:user>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:project uuid="7e8e7407a7c44208a503e8225c1cf518">tempest-TestStampPattern-1730119083</nova:project>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <nova:port uuid="024fe302-6cb7-4c8c-9d08-bcd0c8c51da0">
Nov 29 08:30:26 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <system>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="serial">dc8140a9-7bef-42f8-867c-13e29f022673</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="uuid">dc8140a9-7bef-42f8-867c-13e29f022673</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </system>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <os>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </os>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <features>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </features>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dc8140a9-7bef-42f8-867c-13e29f022673_disk">
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </source>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/dc8140a9-7bef-42f8-867c-13e29f022673_disk.config">
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </source>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:30:26 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:02:42:3c"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <target dev="tap024fe302-6c"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/console.log" append="off"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <video>
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </video>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:30:26 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:30:26 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:30:26 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:30:26 compute-2 nova_compute[232428]: </domain>
Nov 29 08:30:26 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.498 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Preparing to wait for external event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.498 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.498 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.498 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.499 232432 DEBUG nova.virt.libvirt.vif [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1520519822',display_name='tempest-TestStampPattern-server-1520519822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1520519822',id=170,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-y414pyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:20Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=dc8140a9-7bef-42f8-867c-13e29f022673,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.499 232432 DEBUG nova.network.os_vif_util [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.500 232432 DEBUG nova.network.os_vif_util [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.500 232432 DEBUG os_vif [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.501 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.502 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.504 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024fe302-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.504 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap024fe302-6c, col_values=(('external_ids', {'iface-id': '024fe302-6cb7-4c8c-9d08-bcd0c8c51da0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:42:3c', 'vm-uuid': 'dc8140a9-7bef-42f8-867c-13e29f022673'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:26 compute-2 NetworkManager[48993]: <info>  [1764405026.5070] manager: (tap024fe302-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.512 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.513 232432 INFO os_vif [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c')
Nov 29 08:30:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.846 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.847 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.847 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No VIF found with MAC fa:16:3e:02:42:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.848 232432 INFO nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Using config drive
Nov 29 08:30:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3115389592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3243707930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:26 compute-2 nova_compute[232428]: 2025-11-29 08:30:26.887 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.442 232432 INFO nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Creating config drive at /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.448 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnucdgz0n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.594 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnucdgz0n" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.625 232432 DEBUG nova.storage.rbd_utils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image dc8140a9-7bef-42f8-867c-13e29f022673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.628 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config dc8140a9-7bef-42f8-867c-13e29f022673_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.674 232432 DEBUG nova.network.neutron [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updated VIF entry in instance network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.675 232432 DEBUG nova.network.neutron [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.677 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405012.635439, 5d2af1c0-e1ed-48f9-beda-42cc37212de7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.677 232432 INFO nova.compute.manager [-] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] VM Stopped (Lifecycle Event)
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.719 232432 DEBUG oslo_concurrency.lockutils [req-3ee11359-071c-45e4-b92b-fa36685c9a08 req-0ceadda4-e6fe-42b5-a3b6-7053c7a8afe7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:30:27 compute-2 nova_compute[232428]: 2025-11-29 08:30:27.728 232432 DEBUG nova.compute.manager [None req-5e7312d3-f41e-4f3e-aa0e-751fb6bdb2da - - - - - -] [instance: 5d2af1c0-e1ed-48f9-beda-42cc37212de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.121 232432 DEBUG oslo_concurrency.processutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config dc8140a9-7bef-42f8-867c-13e29f022673_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.122 232432 INFO nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Deleting local config drive /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673/disk.config because it was imported into RBD.
Nov 29 08:30:28 compute-2 kernel: tap024fe302-6c: entered promiscuous mode
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.1805] manager: (tap024fe302-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 08:30:28 compute-2 ovn_controller[134375]: 2025-11-29T08:30:28Z|00789|binding|INFO|Claiming lport 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 for this chassis.
Nov 29 08:30:28 compute-2 ovn_controller[134375]: 2025-11-29T08:30:28Z|00790|binding|INFO|024fe302-6cb7-4c8c-9d08-bcd0c8c51da0: Claiming fa:16:3e:02:42:3c 10.100.0.4
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.180 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 systemd-udevd[306112]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:30:28 compute-2 systemd-machined[194747]: New machine qemu-83-instance-000000aa.
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.2249] device (tap024fe302-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.2261] device (tap024fe302-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:30:28 compute-2 systemd[1]: Started Virtual Machine qemu-83-instance-000000aa.
Nov 29 08:30:28 compute-2 ceph-mon[77138]: pgmap v2798: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 153 op/s
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.253 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.254 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:42:3c 10.100.0.4'], port_security=['fa:16:3e:02:42:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dc8140a9-7bef-42f8-867c-13e29f022673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e8e7407a7c44208a503e8225c1cf518', 'neutron:revision_number': '2', 'neutron:security_group_ids': '056d3a24-7b10-4a45-884a-1b8e5def99f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a117267-2677-4e97-b3d9-4edd30f1b375, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.256 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 in datapath 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 bound to our chassis
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.257 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6
Nov 29 08:30:28 compute-2 ovn_controller[134375]: 2025-11-29T08:30:28Z|00791|binding|INFO|Setting lport 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 ovn-installed in OVS
Nov 29 08:30:28 compute-2 ovn_controller[134375]: 2025-11-29T08:30:28Z|00792|binding|INFO|Setting lport 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 up in Southbound
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.268 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.274 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[52b539db-a7d0-40ab-9f6d-3c23853ead2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.275 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bbeaef7-11 in ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:30:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:30:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759947072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:30:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759947072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.277 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bbeaef7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.278 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d94a1786-ccba-40e5-b81d-fabeab12848f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.278 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[51ab4ffc-f845-4937-958e-6751a8a4e276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.296 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[104c0310-6830-400b-93b3-77c327863893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.315 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[505e8fc1-68cd-4fc3-80c4-2af3fea85a2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.356 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8028410c-d9d3-4400-9ba1-178f3c985444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.3667] manager: (tap9bbeaef7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.366 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[60dcf1d7-7ba1-4114-8f54-d34ae03c3995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.410 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8a9b0b-b9ae-4c83-bab5-0a49b9b81da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.415 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb75a94-1bcf-4ffa-b6d5-ecde2240056b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.4456] device (tap9bbeaef7-10): carrier: link connected
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.453 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[946a83ca-c153-48aa-9d98-63663d478dc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.478 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8379909b-fa65-4453-81aa-86f1ba080ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bbeaef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:b9:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799471, 'reachable_time': 22945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306146, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.502 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[630f0688-a483-469c-afb0-66aff20332ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:b929'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799471, 'tstamp': 799471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306147, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.526 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[88d8fed0-1b5e-4fcf-82df-234deb87b770]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bbeaef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:b9:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799471, 'reachable_time': 22945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306163, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.567 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[08bc2e73-3613-450f-ba73-eb12ddb808f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.637 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a36d8638-44aa-41c1-b2e7-2a01d80d0c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.639 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bbeaef7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.639 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.639 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bbeaef7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:28 compute-2 NetworkManager[48993]: <info>  [1764405028.6421] manager: (tap9bbeaef7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 kernel: tap9bbeaef7-10: entered promiscuous mode
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.648 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bbeaef7-10, col_values=(('external_ids', {'iface-id': '3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:30:28 compute-2 ovn_controller[134375]: 2025-11-29T08:30:28Z|00793|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.664 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.665 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.666 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bce04d8a-e270-46af-b327-0702772a825b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.667 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:30:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:28.668 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'env', 'PROCESS_TAG=haproxy-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.745 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405028.7450454, dc8140a9-7bef-42f8-867c-13e29f022673 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.746 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] VM Started (Lifecycle Event)
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.793 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.798 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405028.746113, dc8140a9-7bef-42f8-867c-13e29f022673 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.798 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] VM Paused (Lifecycle Event)
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.822 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.825 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.920 232432 DEBUG nova.compute.manager [req-ec6aa7a2-dfe2-41a0-ad77-cab4d930bfcb req-a43daf54-2a4d-487d-a44d-1cbe4eefb25b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.921 232432 DEBUG oslo_concurrency.lockutils [req-ec6aa7a2-dfe2-41a0-ad77-cab4d930bfcb req-a43daf54-2a4d-487d-a44d-1cbe4eefb25b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.921 232432 DEBUG oslo_concurrency.lockutils [req-ec6aa7a2-dfe2-41a0-ad77-cab4d930bfcb req-a43daf54-2a4d-487d-a44d-1cbe4eefb25b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.921 232432 DEBUG oslo_concurrency.lockutils [req-ec6aa7a2-dfe2-41a0-ad77-cab4d930bfcb req-a43daf54-2a4d-487d-a44d-1cbe4eefb25b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.921 232432 DEBUG nova.compute.manager [req-ec6aa7a2-dfe2-41a0-ad77-cab4d930bfcb req-a43daf54-2a4d-487d-a44d-1cbe4eefb25b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Processing event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.922 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.927 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.931 232432 INFO nova.virt.libvirt.driver [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Instance spawned successfully.
Nov 29 08:30:28 compute-2 nova_compute[232428]: 2025-11-29 08:30:28.931 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:30:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:30:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673728373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:29 compute-2 podman[306221]: 2025-11-29 08:30:29.020363733 +0000 UTC m=+0.027102819 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.267 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.268 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405028.9267747, dc8140a9-7bef-42f8-867c-13e29f022673 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.268 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] VM Resumed (Lifecycle Event)
Nov 29 08:30:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.402 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.402 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.403 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.403 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.403 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.404 232432 DEBUG nova.virt.libvirt.driver [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.569 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:30:29 compute-2 nova_compute[232428]: 2025-11-29 08:30:29.575 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:30:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2759947072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2759947072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2673728373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:30 compute-2 nova_compute[232428]: 2025-11-29 08:30:30.449 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:30:30 compute-2 nova_compute[232428]: 2025-11-29 08:30:30.791 232432 INFO nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Took 10.11 seconds to spawn the instance on the hypervisor.
Nov 29 08:30:30 compute-2 nova_compute[232428]: 2025-11-29 08:30:30.791 232432 DEBUG nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:30:30 compute-2 podman[306221]: 2025-11-29 08:30:30.888640607 +0000 UTC m=+1.895379673 container create 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:30:30 compute-2 ceph-mon[77138]: pgmap v2799: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 155 op/s
Nov 29 08:30:30 compute-2 systemd[1]: Started libpod-conmon-5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9.scope.
Nov 29 08:30:30 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:30:30 compute-2 nova_compute[232428]: 2025-11-29 08:30:30.973 232432 INFO nova.compute.manager [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Took 11.30 seconds to build instance.
Nov 29 08:30:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8f318b5b114e802fe080a8ff968c5541511d5859e70766449b088a2ed2730f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:30:30 compute-2 nova_compute[232428]: 2025-11-29 08:30:30.991 232432 DEBUG oslo_concurrency.lockutils [None req-5fb6bec9-ba12-4ab2-a224-4b696e25f97a d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:30 compute-2 podman[306221]: 2025-11-29 08:30:30.99417941 +0000 UTC m=+2.000918506 container init 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:30:31 compute-2 podman[306221]: 2025-11-29 08:30:31.006068453 +0000 UTC m=+2.012807519 container start 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:30:31 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [NOTICE]   (306241) : New worker (306243) forked
Nov 29 08:30:31 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [NOTICE]   (306241) : Loading success.
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.034 232432 DEBUG nova.compute.manager [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.034 232432 DEBUG oslo_concurrency.lockutils [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.035 232432 DEBUG oslo_concurrency.lockutils [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.035 232432 DEBUG oslo_concurrency.lockutils [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.035 232432 DEBUG nova.compute.manager [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] No waiting events found dispatching network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.035 232432 WARNING nova.compute.manager [req-7950590e-2b01-4ef8-a0d5-af1520216086 req-2571f421-2111-4c59-98fc-afa839772f7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received unexpected event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 for instance with vm_state active and task_state None.
Nov 29 08:30:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:31.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:31 compute-2 podman[306252]: 2025-11-29 08:30:31.699754987 +0000 UTC m=+0.101971562 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:30:31 compute-2 nova_compute[232428]: 2025-11-29 08:30:31.842 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:31 compute-2 ceph-mon[77138]: pgmap v2800: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 29 08:30:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000064s ======
Nov 29 08:30:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:32.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Nov 29 08:30:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:30:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3123896156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:33 compute-2 ceph-mon[77138]: pgmap v2801: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 29 08:30:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3123896156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:35.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:35 compute-2 NetworkManager[48993]: <info>  [1764405035.4685] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.469 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:35 compute-2 NetworkManager[48993]: <info>  [1764405035.4750] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:35 compute-2 ovn_controller[134375]: 2025-11-29T08:30:35Z|00794|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 08:30:35 compute-2 ovn_controller[134375]: 2025-11-29T08:30:35Z|00795|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.713 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:35 compute-2 sshd-session[306280]: Invalid user sol from 45.148.10.240 port 59736
Nov 29 08:30:35 compute-2 ceph-mon[77138]: pgmap v2802: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.982 232432 DEBUG nova.compute.manager [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.982 232432 DEBUG nova.compute.manager [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing instance network info cache due to event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.983 232432 DEBUG oslo_concurrency.lockutils [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.983 232432 DEBUG oslo_concurrency.lockutils [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:30:35 compute-2 nova_compute[232428]: 2025-11-29 08:30:35.983 232432 DEBUG nova.network.neutron [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:30:36 compute-2 sshd-session[306280]: Connection closed by invalid user sol 45.148.10.240 port 59736 [preauth]
Nov 29 08:30:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:36 compute-2 nova_compute[232428]: 2025-11-29 08:30:36.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:36 compute-2 nova_compute[232428]: 2025-11-29 08:30:36.844 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:38 compute-2 ceph-mon[77138]: pgmap v2803: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 902 KiB/s wr, 148 op/s
Nov 29 08:30:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:38 compute-2 nova_compute[232428]: 2025-11-29 08:30:38.752 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:38 compute-2 nova_compute[232428]: 2025-11-29 08:30:38.782 232432 DEBUG nova.network.neutron [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updated VIF entry in instance network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:30:38 compute-2 nova_compute[232428]: 2025-11-29 08:30:38.783 232432 DEBUG nova.network.neutron [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:30:38 compute-2 nova_compute[232428]: 2025-11-29 08:30:38.805 232432 DEBUG oslo_concurrency.lockutils [req-51cbd09d-c5d4-46f7-bcb1-1c9c2b1edcbc req-13bb1de1-4183-4fec-8e8a-6f9c574e6cc3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:30:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:39.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:40 compute-2 ceph-mon[77138]: pgmap v2804: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 125 op/s
Nov 29 08:30:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:41 compute-2 nova_compute[232428]: 2025-11-29 08:30:41.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:41 compute-2 ovn_controller[134375]: 2025-11-29T08:30:41Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:42:3c 10.100.0.4
Nov 29 08:30:41 compute-2 ovn_controller[134375]: 2025-11-29T08:30:41Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:42:3c 10.100.0.4
Nov 29 08:30:41 compute-2 nova_compute[232428]: 2025-11-29 08:30:41.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:41 compute-2 sudo[306287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:41 compute-2 sudo[306287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:41 compute-2 sudo[306287]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:41 compute-2 sudo[306312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:30:41 compute-2 sudo[306312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:30:41 compute-2 sudo[306312]: pam_unix(sudo:session): session closed for user root
Nov 29 08:30:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:43 compute-2 ceph-mon[77138]: pgmap v2805: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 120 op/s
Nov 29 08:30:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:43.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:43 compute-2 podman[306337]: 2025-11-29 08:30:43.67173771 +0000 UTC m=+0.068080093 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:30:44 compute-2 ceph-mon[77138]: pgmap v2806: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 60 op/s
Nov 29 08:30:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:45.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:46 compute-2 nova_compute[232428]: 2025-11-29 08:30:46.513 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:46 compute-2 nova_compute[232428]: 2025-11-29 08:30:46.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:47 compute-2 ceph-mon[77138]: pgmap v2807: 305 pgs: 305 active+clean; 524 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Nov 29 08:30:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1973454107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:48 compute-2 ceph-mon[77138]: pgmap v2808: 305 pgs: 305 active+clean; 537 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 29 08:30:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:50 compute-2 podman[306361]: 2025-11-29 08:30:50.724473934 +0000 UTC m=+0.109433856 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd)
Nov 29 08:30:51 compute-2 ceph-mon[77138]: pgmap v2809: 305 pgs: 305 active+clean; 560 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 3.0 MiB/s wr, 82 op/s
Nov 29 08:30:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2030902154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:30:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.733 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.734 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.751 232432 DEBUG nova.objects.instance [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.791 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:51 compute-2 nova_compute[232428]: 2025-11-29 08:30:51.852 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/216055025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:52 compute-2 ceph-mon[77138]: pgmap v2810: 305 pgs: 305 active+clean; 588 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.096 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.097 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.099 232432 INFO nova.compute.manager [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Attaching volume d5b932e2-653b-46a6-9b2c-1d5019441709 to /dev/vdb
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.291 232432 DEBUG os_brick.utils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.295 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.321 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.322 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[60109c9d-c8d1-4f39-978b-e94cd8dd40b5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.324 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.336 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.337 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[376f0c2c-db18-43e1-80aa-68c68a2cade7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.342 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.353 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.354 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bef925-a241-40e2-a642-77df3a2a791d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.356 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[d0af45a5-eb7e-4590-aea2-927cb759abb7]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.357 232432 DEBUG oslo_concurrency.processutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.412 232432 DEBUG oslo_concurrency.processutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "nvme version" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.417 232432 DEBUG os_brick.initiator.connectors.lightos [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.418 232432 DEBUG os_brick.initiator.connectors.lightos [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.418 232432 DEBUG os_brick.initiator.connectors.lightos [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.419 232432 DEBUG os_brick.utils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] <== get_connector_properties: return (126ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:30:52 compute-2 nova_compute[232428]: 2025-11-29 08:30:52.420 232432 DEBUG nova.virt.block_device [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating existing volume attachment record: f2a15aac-a5f1-4d6c-92db-07db274f47ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:30:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:53.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/152890148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/152890148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:53 compute-2 nova_compute[232428]: 2025-11-29 08:30:53.834 232432 DEBUG nova.objects.instance [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:53 compute-2 nova_compute[232428]: 2025-11-29 08:30:53.874 232432 DEBUG nova.virt.libvirt.driver [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Attempting to attach volume d5b932e2-653b-46a6-9b2c-1d5019441709 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:30:53 compute-2 nova_compute[232428]: 2025-11-29 08:30:53.880 232432 DEBUG nova.virt.libvirt.guest [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:30:53 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d5b932e2-653b-46a6-9b2c-1d5019441709">
Nov 29 08:30:53 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   </source>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   <auth username="openstack">
Nov 29 08:30:53 compute-2 nova_compute[232428]:     <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   </auth>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:30:53 compute-2 nova_compute[232428]:   <serial>d5b932e2-653b-46a6-9b2c-1d5019441709</serial>
Nov 29 08:30:53 compute-2 nova_compute[232428]: </disk>
Nov 29 08:30:53 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:30:54 compute-2 nova_compute[232428]: 2025-11-29 08:30:54.058 232432 DEBUG nova.virt.libvirt.driver [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:30:54 compute-2 nova_compute[232428]: 2025-11-29 08:30:54.059 232432 DEBUG nova.virt.libvirt.driver [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:30:54 compute-2 nova_compute[232428]: 2025-11-29 08:30:54.059 232432 DEBUG nova.virt.libvirt.driver [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:30:54 compute-2 nova_compute[232428]: 2025-11-29 08:30:54.059 232432 DEBUG nova.virt.libvirt.driver [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No VIF found with MAC fa:16:3e:02:42:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:30:54 compute-2 nova_compute[232428]: 2025-11-29 08:30:54.297 232432 DEBUG oslo_concurrency.lockutils [None req-64c8b426-5b6a-4046-be76-8f0b507128de d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:30:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:55.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:55 compute-2 ceph-mon[77138]: pgmap v2811: 305 pgs: 305 active+clean; 588 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Nov 29 08:30:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/841218850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:30:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:56 compute-2 ceph-mon[77138]: pgmap v2812: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Nov 29 08:30:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2115914059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:30:56 compute-2 nova_compute[232428]: 2025-11-29 08:30:56.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:30:56 compute-2 nova_compute[232428]: 2025-11-29 08:30:56.855 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:30:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:57.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:30:57 compute-2 ceph-mon[77138]: pgmap v2813: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Nov 29 08:30:57 compute-2 nova_compute[232428]: 2025-11-29 08:30:57.760 232432 DEBUG oslo_concurrency.lockutils [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:30:57 compute-2 nova_compute[232428]: 2025-11-29 08:30:57.761 232432 DEBUG oslo_concurrency.lockutils [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:30:57 compute-2 nova_compute[232428]: 2025-11-29 08:30:57.797 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:30:57 compute-2 nova_compute[232428]: 2025-11-29 08:30:57.859 232432 INFO nova.compute.manager [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Detaching volume d5b932e2-653b-46a6-9b2c-1d5019441709
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.053 232432 INFO nova.virt.block_device [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Attempting to driver detach volume d5b932e2-653b-46a6-9b2c-1d5019441709 from mountpoint /dev/vdb
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.067 232432 DEBUG nova.virt.libvirt.driver [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Attempting to detach device vdb from instance dc8140a9-7bef-42f8-867c-13e29f022673 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.068 232432 DEBUG nova.virt.libvirt.guest [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d5b932e2-653b-46a6-9b2c-1d5019441709">
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   </source>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <serial>d5b932e2-653b-46a6-9b2c-1d5019441709</serial>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]: </disk>
Nov 29 08:30:58 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.149 232432 INFO nova.virt.libvirt.driver [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully detached device vdb from instance dc8140a9-7bef-42f8-867c-13e29f022673 from the persistent domain config.
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.149 232432 DEBUG nova.virt.libvirt.driver [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance dc8140a9-7bef-42f8-867c-13e29f022673 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.150 232432 DEBUG nova.virt.libvirt.guest [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-d5b932e2-653b-46a6-9b2c-1d5019441709">
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   </source>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <serial>d5b932e2-653b-46a6-9b2c-1d5019441709</serial>
Nov 29 08:30:58 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:30:58 compute-2 nova_compute[232428]: </disk>
Nov 29 08:30:58 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:30:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:58 compute-2 nova_compute[232428]: 2025-11-29 08:30:58.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:30:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:58.641 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:30:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:30:58.644 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:30:59 compute-2 nova_compute[232428]: 2025-11-29 08:30:59.107 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764405059.107146, dc8140a9-7bef-42f8-867c-13e29f022673 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:30:59 compute-2 nova_compute[232428]: 2025-11-29 08:30:59.110 232432 DEBUG nova.virt.libvirt.driver [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance dc8140a9-7bef-42f8-867c-13e29f022673 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:30:59 compute-2 nova_compute[232428]: 2025-11-29 08:30:59.113 232432 INFO nova.virt.libvirt.driver [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully detached device vdb from instance dc8140a9-7bef-42f8-867c-13e29f022673 from the live domain config.
Nov 29 08:30:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:30:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3035695155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:30:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:30:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3035695155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:30:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:30:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:30:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:59.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:30:59 compute-2 nova_compute[232428]: 2025-11-29 08:30:59.534 232432 DEBUG nova.objects.instance [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:30:59 compute-2 nova_compute[232428]: 2025-11-29 08:30:59.704 232432 DEBUG oslo_concurrency.lockutils [None req-6341fd56-a8ca-4884-abec-549c7281e7cf d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:00 compute-2 ceph-mon[77138]: pgmap v2814: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 29 08:31:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3035695155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3035695155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:00.646 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:31:01 compute-2 sudo[306417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:01 compute-2 sudo[306417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:01 compute-2 sudo[306417]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 08:31:01 compute-2 sudo[306442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:31:01 compute-2 sudo[306442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:01 compute-2 sudo[306442]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:01 compute-2 sudo[306467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:01 compute-2 sudo[306467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:01 compute-2 sudo[306467]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:01 compute-2 sudo[306492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:31:01 compute-2 sudo[306492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:01 compute-2 nova_compute[232428]: 2025-11-29 08:31:01.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:01 compute-2 sudo[306492]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:01 compute-2 nova_compute[232428]: 2025-11-29 08:31:01.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:02 compute-2 sudo[306549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:02 compute-2 sudo[306549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:02 compute-2 sudo[306549]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:02 compute-2 sudo[306580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:02 compute-2 sudo[306580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:02 compute-2 sudo[306580]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:02 compute-2 podman[306573]: 2025-11-29 08:31:02.134267438 +0000 UTC m=+0.089505883 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:31:02 compute-2 nova_compute[232428]: 2025-11-29 08:31:02.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:02 compute-2 ceph-mon[77138]: pgmap v2815: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 164 op/s
Nov 29 08:31:02 compute-2 ceph-mon[77138]: osdmap e370: 3 total, 3 up, 3 in
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:03.334 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:03.335 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:03.335 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:03.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.530 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.531 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.533 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.533 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.667 232432 DEBUG nova.compute.manager [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:31:03 compute-2 ceph-mon[77138]: pgmap v2817: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 467 KiB/s wr, 180 op/s
Nov 29 08:31:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:31:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:31:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:31:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:31:03 compute-2 nova_compute[232428]: 2025-11-29 08:31:03.875 232432 INFO nova.compute.manager [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] instance snapshotting
Nov 29 08:31:04 compute-2 nova_compute[232428]: 2025-11-29 08:31:04.185 232432 INFO nova.virt.libvirt.driver [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Beginning live snapshot process
Nov 29 08:31:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:04.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:04 compute-2 nova_compute[232428]: 2025-11-29 08:31:04.731 232432 DEBUG nova.virt.libvirt.imagebackend [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 08:31:05 compute-2 nova_compute[232428]: 2025-11-29 08:31:05.154 232432 DEBUG nova.storage.rbd_utils [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] creating snapshot(f6c62de627364ff2b0e931b7b014988d) on rbd image(dc8140a9-7bef-42f8-867c-13e29f022673_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:31:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:31:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:31:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:31:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:31:05 compute-2 nova_compute[232428]: 2025-11-29 08:31:05.879 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:31:05 compute-2 nova_compute[232428]: 2025-11-29 08:31:05.904 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:31:05 compute-2 nova_compute[232428]: 2025-11-29 08:31:05.905 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:31:05 compute-2 nova_compute[232428]: 2025-11-29 08:31:05.905 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:06 compute-2 nova_compute[232428]: 2025-11-29 08:31:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:06 compute-2 nova_compute[232428]: 2025-11-29 08:31:06.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 08:31:06 compute-2 ceph-mon[77138]: pgmap v2818: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 252 KiB/s wr, 147 op/s
Nov 29 08:31:06 compute-2 nova_compute[232428]: 2025-11-29 08:31:06.661 232432 DEBUG nova.storage.rbd_utils [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] cloning vms/dc8140a9-7bef-42f8-867c-13e29f022673_disk@f6c62de627364ff2b0e931b7b014988d to images/29125612-6fcb-47b2-8690-67e6e3459b96 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:31:06 compute-2 nova_compute[232428]: 2025-11-29 08:31:06.784 232432 DEBUG nova.storage.rbd_utils [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] flattening images/29125612-6fcb-47b2-8690-67e6e3459b96 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:31:06 compute-2 nova_compute[232428]: 2025-11-29 08:31:06.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:07 compute-2 nova_compute[232428]: 2025-11-29 08:31:07.231 232432 DEBUG nova.storage.rbd_utils [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] removing snapshot(f6c62de627364ff2b0e931b7b014988d) on rbd image(dc8140a9-7bef-42f8-867c-13e29f022673_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:31:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:07.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:07 compute-2 ceph-mon[77138]: osdmap e371: 3 total, 3 up, 3 in
Nov 29 08:31:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3782315917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:07 compute-2 ceph-mon[77138]: pgmap v2820: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 273 KiB/s wr, 69 op/s
Nov 29 08:31:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 08:31:07 compute-2 nova_compute[232428]: 2025-11-29 08:31:07.975 232432 DEBUG nova.storage.rbd_utils [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] creating snapshot(snap) on rbd image(29125612-6fcb-47b2-8690-67e6e3459b96) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:31:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:08.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.821508) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068821625, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1749, "num_deletes": 258, "total_data_size": 3733318, "memory_usage": 3785064, "flush_reason": "Manual Compaction"}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068841776, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 2449043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59192, "largest_seqno": 60936, "table_properties": {"data_size": 2441815, "index_size": 4171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16166, "raw_average_key_size": 20, "raw_value_size": 2426842, "raw_average_value_size": 3064, "num_data_blocks": 183, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404937, "oldest_key_time": 1764404937, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 20345 microseconds, and 11638 cpu microseconds.
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.841854) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 2449043 bytes OK
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.841886) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.843202) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.843219) EVENT_LOG_v1 {"time_micros": 1764405068843214, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.843240) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3725269, prev total WAL file size 3725269, number of live WAL files 2.
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.844597) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303135' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(2391KB)], [114(10MB)]
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068844713, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13770107, "oldest_snapshot_seqno": -1}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9139 keys, 13628378 bytes, temperature: kUnknown
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068946298, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 13628378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13566726, "index_size": 37732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 237944, "raw_average_key_size": 26, "raw_value_size": 13403624, "raw_average_value_size": 1466, "num_data_blocks": 1473, "num_entries": 9139, "num_filter_entries": 9139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.946988) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13628378 bytes
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.948775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.2 rd, 133.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 9671, records dropped: 532 output_compression: NoCompression
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.948813) EVENT_LOG_v1 {"time_micros": 1764405068948796, "job": 72, "event": "compaction_finished", "compaction_time_micros": 101876, "compaction_time_cpu_micros": 38841, "output_level": 6, "num_output_files": 1, "total_output_size": 13628378, "num_input_records": 9671, "num_output_records": 9139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068949832, "job": 72, "event": "table_file_deletion", "file_number": 116}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068954931, "job": 72, "event": "table_file_deletion", "file_number": 114}
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.844441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.955021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.955028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.955030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.955033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:08 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:31:08.955035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.000 232432 DEBUG nova.compute.manager [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.001 232432 DEBUG nova.compute.manager [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing instance network info cache due to event network-changed-0984e0d6-449f-45b6-bead-2a6a5cc37e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.001 232432 DEBUG oslo_concurrency.lockutils [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.001 232432 DEBUG oslo_concurrency.lockutils [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.002 232432 DEBUG nova.network.neutron [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Refreshing network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:31:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 08:31:09 compute-2 ceph-mon[77138]: osdmap e372: 3 total, 3 up, 3 in
Nov 29 08:31:09 compute-2 nova_compute[232428]: 2025-11-29 08:31:09.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:09.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:10.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:10 compute-2 nova_compute[232428]: 2025-11-29 08:31:10.608 232432 DEBUG nova.network.neutron [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updated VIF entry in instance network info cache for port 0984e0d6-449f-45b6-bead-2a6a5cc37e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:31:10 compute-2 nova_compute[232428]: 2025-11-29 08:31:10.609 232432 DEBUG nova.network.neutron [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [{"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:31:10 compute-2 nova_compute[232428]: 2025-11-29 08:31:10.639 232432 DEBUG oslo_concurrency.lockutils [req-05586f15-92bb-49ff-a45a-ad770ed8a002 req-482eaa95-2624-4b5b-ac7e-27963159470f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8d30dc23-4d84-4468-94cd-9f1300767585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:31:10 compute-2 ceph-mon[77138]: pgmap v2822: 305 pgs: 305 active+clean; 602 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.3 MiB/s wr, 96 op/s
Nov 29 08:31:10 compute-2 ceph-mon[77138]: osdmap e373: 3 total, 3 up, 3 in
Nov 29 08:31:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1857578055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3248946213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:11 compute-2 nova_compute[232428]: 2025-11-29 08:31:11.056 232432 INFO nova.virt.libvirt.driver [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Snapshot image upload complete
Nov 29 08:31:11 compute-2 nova_compute[232428]: 2025-11-29 08:31:11.057 232432 INFO nova.compute.manager [None req-f912856a-57e9-4698-b9a6-9314c33e24a3 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Took 7.18 seconds to snapshot the instance on the hypervisor.
Nov 29 08:31:11 compute-2 nova_compute[232428]: 2025-11-29 08:31:11.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:11 compute-2 nova_compute[232428]: 2025-11-29 08:31:11.528 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:11 compute-2 nova_compute[232428]: 2025-11-29 08:31:11.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:12.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:12 compute-2 ceph-mon[77138]: pgmap v2824: 305 pgs: 305 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 15 MiB/s wr, 339 op/s
Nov 29 08:31:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1382191317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/105282154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:14 compute-2 nova_compute[232428]: 2025-11-29 08:31:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:14 compute-2 ceph-mon[77138]: pgmap v2825: 305 pgs: 305 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 317 op/s
Nov 29 08:31:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:31:14 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:31:14 compute-2 sudo[306771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:14 compute-2 sudo[306771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:14 compute-2 sudo[306771]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:14 compute-2 podman[306795]: 2025-11-29 08:31:14.409704146 +0000 UTC m=+0.056682305 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:31:14 compute-2 sudo[306802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:31:14 compute-2 sudo[306802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:14 compute-2 sudo[306802]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3226655288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:15.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:16 compute-2 nova_compute[232428]: 2025-11-29 08:31:16.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:16 compute-2 nova_compute[232428]: 2025-11-29 08:31:16.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:31:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:16 compute-2 nova_compute[232428]: 2025-11-29 08:31:16.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:16 compute-2 ceph-mon[77138]: pgmap v2826: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 259 op/s
Nov 29 08:31:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:16 compute-2 nova_compute[232428]: 2025-11-29 08:31:16.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.258 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.259 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.259 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.259 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.260 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:17.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:31:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2403948232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:17 compute-2 nova_compute[232428]: 2025-11-29 08:31:17.733 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:17 compute-2 ceph-mon[77138]: pgmap v2827: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 8.1 MiB/s wr, 290 op/s
Nov 29 08:31:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4027604990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.250 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.250 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.255 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.256 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.436 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.438 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3759MB free_disk=20.876049041748047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.438 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:18 compute-2 nova_compute[232428]: 2025-11-29 08:31:18.439 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 08:31:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2403948232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3842349862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:19 compute-2 ceph-mon[77138]: osdmap e374: 3 total, 3 up, 3 in
Nov 29 08:31:19 compute-2 nova_compute[232428]: 2025-11-29 08:31:19.256 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 8d30dc23-4d84-4468-94cd-9f1300767585 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:31:19 compute-2 nova_compute[232428]: 2025-11-29 08:31:19.256 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance dc8140a9-7bef-42f8-867c-13e29f022673 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:31:19 compute-2 nova_compute[232428]: 2025-11-29 08:31:19.256 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:31:19 compute-2 nova_compute[232428]: 2025-11-29 08:31:19.257 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:31:19 compute-2 nova_compute[232428]: 2025-11-29 08:31:19.340 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:31:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3419619429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:20 compute-2 nova_compute[232428]: 2025-11-29 08:31:20.115 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:20 compute-2 nova_compute[232428]: 2025-11-29 08:31:20.124 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:31:20 compute-2 nova_compute[232428]: 2025-11-29 08:31:20.251 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:31:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.005000158s ======
Nov 29 08:31:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000158s
Nov 29 08:31:21 compute-2 ceph-mon[77138]: pgmap v2828: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 274 op/s
Nov 29 08:31:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2437782084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:21 compute-2 nova_compute[232428]: 2025-11-29 08:31:21.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:21 compute-2 podman[306888]: 2025-11-29 08:31:21.677507463 +0000 UTC m=+0.082363729 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:31:21 compute-2 nova_compute[232428]: 2025-11-29 08:31:21.848 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:31:21 compute-2 nova_compute[232428]: 2025-11-29 08:31:21.849 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:21 compute-2 nova_compute[232428]: 2025-11-29 08:31:21.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:22 compute-2 sudo[306909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:22 compute-2 sudo[306909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:22 compute-2 sudo[306909]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:22 compute-2 sudo[306934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:22 compute-2 sudo[306934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:22 compute-2 sudo[306934]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3419619429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:22 compute-2 ceph-mon[77138]: pgmap v2830: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 132 KiB/s wr, 176 op/s
Nov 29 08:31:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:23.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 08:31:24 compute-2 ceph-mon[77138]: pgmap v2831: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 132 KiB/s wr, 176 op/s
Nov 29 08:31:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:26 compute-2 nova_compute[232428]: 2025-11-29 08:31:26.534 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/279430381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:26 compute-2 ceph-mon[77138]: osdmap e375: 3 total, 3 up, 3 in
Nov 29 08:31:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1614106452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:26 compute-2 ceph-mon[77138]: pgmap v2833: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 48 KiB/s wr, 137 op/s
Nov 29 08:31:26 compute-2 nova_compute[232428]: 2025-11-29 08:31:26.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:28 compute-2 ceph-mon[77138]: pgmap v2834: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 435 KiB/s rd, 17 KiB/s wr, 62 op/s
Nov 29 08:31:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:28.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2106926969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2106926969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3899069321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 08:31:30 compute-2 ceph-mon[77138]: pgmap v2835: 305 pgs: 305 active+clean; 701 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 29 08:31:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:31 compute-2 nova_compute[232428]: 2025-11-29 08:31:31.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:31 compute-2 nova_compute[232428]: 2025-11-29 08:31:31.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.170 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.171 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.171 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.172 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.172 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:32 compute-2 ceph-mon[77138]: pgmap v2836: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 MiB/s wr, 99 op/s
Nov 29 08:31:32 compute-2 ceph-mon[77138]: osdmap e376: 3 total, 3 up, 3 in
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.173 232432 INFO nova.compute.manager [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Terminating instance
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.174 232432 DEBUG nova.compute.manager [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:31:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:31:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 12K writes, 61K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1691 writes, 8332 keys, 1691 commit groups, 1.0 writes per commit group, ingest: 16.46 MB, 0.03 MB/s
                                           Interval WAL: 1691 writes, 1691 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.8      1.38              0.35        36    0.038       0      0       0.0       0.0
                                             L6      1/0   13.00 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     83.4     70.9      4.86              1.22        35    0.139    240K    19K       0.0       0.0
                                            Sum      1/0   13.00 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     65.0     67.1      6.24              1.57        71    0.088    240K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     29.0     30.3      2.77              0.30        12    0.231     55K   3154       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     83.4     70.9      4.86              1.22        35    0.139    240K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.3      1.29              0.35        35    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.072, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.41 GB write, 0.09 MB/s write, 0.40 GB read, 0.08 MB/s read, 6.2 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 2.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 47.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000278 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2592,45.70 MB,15.033%) FilterBlock(71,680.55 KB,0.218617%) IndexBlock(71,1.10 MB,0.361101%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:31:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:32 compute-2 kernel: tap0984e0d6-44 (unregistering): left promiscuous mode
Nov 29 08:31:32 compute-2 NetworkManager[48993]: <info>  [1764405092.6637] device (tap0984e0d6-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.673 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 ovn_controller[134375]: 2025-11-29T08:31:32Z|00796|binding|INFO|Releasing lport 0984e0d6-449f-45b6-bead-2a6a5cc37e11 from this chassis (sb_readonly=0)
Nov 29 08:31:32 compute-2 ovn_controller[134375]: 2025-11-29T08:31:32Z|00797|binding|INFO|Setting lport 0984e0d6-449f-45b6-bead-2a6a5cc37e11 down in Southbound
Nov 29 08:31:32 compute-2 ovn_controller[134375]: 2025-11-29T08:31:32Z|00798|binding|INFO|Removing iface tap0984e0d6-44 ovn-installed in OVS
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.681 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:32.685 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a6:9c 10.100.0.9'], port_security=['fa:16:3e:10:a6:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d30dc23-4d84-4468-94cd-9f1300767585', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=0984e0d6-449f-45b6-bead-2a6a5cc37e11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:31:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:32.687 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 0984e0d6-449f-45b6-bead-2a6a5cc37e11 in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e unbound from our chassis
Nov 29 08:31:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:32.690 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 371b699e-06e1-407e-ac77-9768d9a0e76e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:31:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:32.693 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[33124729-2ea4-4008-8fdc-410a42817df9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:32 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:32.694 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace which is not needed anymore
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.695 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Nov 29 08:31:32 compute-2 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a6.scope: Consumed 23.140s CPU time.
Nov 29 08:31:32 compute-2 systemd-machined[194747]: Machine qemu-82-instance-000000a6 terminated.
Nov 29 08:31:32 compute-2 podman[306965]: 2025-11-29 08:31:32.744468733 +0000 UTC m=+0.145647190 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.820 232432 INFO nova.virt.libvirt.driver [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Instance destroyed successfully.
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.821 232432 DEBUG nova.objects.instance [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'resources' on Instance uuid 8d30dc23-4d84-4468-94cd-9f1300767585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.844 232432 DEBUG nova.virt.libvirt.vif [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1662779044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1662779044',id=166,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-zail0yj6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:28:59Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=8d30dc23-4d84-4468-94cd-9f1300767585,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.845 232432 DEBUG nova.network.os_vif_util [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "address": "fa:16:3e:10:a6:9c", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0984e0d6-44", "ovs_interfaceid": "0984e0d6-449f-45b6-bead-2a6a5cc37e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.846 232432 DEBUG nova.network.os_vif_util [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.846 232432 DEBUG os_vif [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.849 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0984e0d6-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.852 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:32 compute-2 nova_compute[232428]: 2025-11-29 08:31:32.856 232432 INFO os_vif [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a6:9c,bridge_name='br-int',has_traffic_filtering=True,id=0984e0d6-449f-45b6-bead-2a6a5cc37e11,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0984e0d6-44')
Nov 29 08:31:32 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [NOTICE]   (304738) : haproxy version is 2.8.14-c23fe91
Nov 29 08:31:32 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [NOTICE]   (304738) : path to executable is /usr/sbin/haproxy
Nov 29 08:31:32 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [WARNING]  (304738) : Exiting Master process...
Nov 29 08:31:32 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [ALERT]    (304738) : Current worker (304740) exited with code 143 (Terminated)
Nov 29 08:31:32 compute-2 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[304724]: [WARNING]  (304738) : All workers exited. Exiting... (0)
Nov 29 08:31:32 compute-2 systemd[1]: libpod-ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008.scope: Deactivated successfully.
Nov 29 08:31:32 compute-2 podman[307025]: 2025-11-29 08:31:32.914658561 +0000 UTC m=+0.080770370 container died ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:31:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008-userdata-shm.mount: Deactivated successfully.
Nov 29 08:31:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-4d2a7a38a557a983edb5f8303ef29388b3a2b644c63126b1979a19c6d71ff896-merged.mount: Deactivated successfully.
Nov 29 08:31:33 compute-2 podman[307025]: 2025-11-29 08:31:33.246914222 +0000 UTC m=+0.413026021 container cleanup ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:31:33 compute-2 ceph-mon[77138]: osdmap e377: 3 total, 3 up, 3 in
Nov 29 08:31:33 compute-2 systemd[1]: libpod-conmon-ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008.scope: Deactivated successfully.
Nov 29 08:31:33 compute-2 podman[307069]: 2025-11-29 08:31:33.450622278 +0000 UTC m=+0.174764892 container remove ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.457 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c195bcb4-5df8-4f32-988c-df060682aa07]: (4, ('Sat Nov 29 08:31:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008)\nba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008\nSat Nov 29 08:31:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (ba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008)\nba4f7f89fef6e171c84577fff4a91b8c8802d74b2f66173c4d2af842e7a8d008\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.460 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36415cdb-1b34-4cea-8b30-80f9c10abec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.462 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:31:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.464 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:33 compute-2 kernel: tap371b699e-00: left promiscuous mode
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.479 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.483 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e81e4ea-d1c5-4203-ab35-18f7ea7fcdf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.498 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd7ea0c-3f28-4c49-95ca-f83cacef7cd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.499 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7bc3fb-543e-42b9-b404-7f6f6619a6d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.523 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dab3a4c5-91d5-4a07-b9a7-83cc32e83368]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790204, 'reachable_time': 21834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307088, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 systemd[1]: run-netns-ovnmeta\x2d371b699e\x2d06e1\x2d407e\x2dac77\x2d9768d9a0e76e.mount: Deactivated successfully.
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.530 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:31:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:33.531 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c581d731-bbb3-4343-993d-df591cd5a0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.533 232432 DEBUG nova.compute.manager [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-unplugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.533 232432 DEBUG oslo_concurrency.lockutils [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.533 232432 DEBUG oslo_concurrency.lockutils [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.534 232432 DEBUG oslo_concurrency.lockutils [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.534 232432 DEBUG nova.compute.manager [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] No waiting events found dispatching network-vif-unplugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:31:33 compute-2 nova_compute[232428]: 2025-11-29 08:31:33.534 232432 DEBUG nova.compute.manager [req-9d99c80f-6abf-4355-9f3f-9f98ffb9ccea req-46b269d7-2f01-40d6-912e-1bdcda40eb84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-unplugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:31:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:34.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:35 compute-2 ceph-mon[77138]: pgmap v2839: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.9 MiB/s wr, 104 op/s
Nov 29 08:31:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.619 232432 DEBUG nova.compute.manager [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.620 232432 DEBUG oslo_concurrency.lockutils [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.620 232432 DEBUG oslo_concurrency.lockutils [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.621 232432 DEBUG oslo_concurrency.lockutils [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.621 232432 DEBUG nova.compute.manager [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] No waiting events found dispatching network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.622 232432 WARNING nova.compute.manager [req-518c7b41-5c43-4929-bd82-0ca05ef4a13b req-7d6f3bdc-f751-4c48-aa48-63291f7f34b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received unexpected event network-vif-plugged-0984e0d6-449f-45b6-bead-2a6a5cc37e11 for instance with vm_state active and task_state deleting.
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.631 232432 INFO nova.virt.libvirt.driver [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Deleting instance files /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585_del
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.632 232432 INFO nova.virt.libvirt.driver [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Deletion of /var/lib/nova/instances/8d30dc23-4d84-4468-94cd-9f1300767585_del complete
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.685 232432 INFO nova.compute.manager [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Took 3.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.686 232432 DEBUG oslo.service.loopingcall [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.686 232432 DEBUG nova.compute.manager [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:31:35 compute-2 nova_compute[232428]: 2025-11-29 08:31:35.686 232432 DEBUG nova.network.neutron [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:31:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:36.321 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.321 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:36.322 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.412 232432 DEBUG nova.network.neutron [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.442 232432 INFO nova.compute.manager [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Took 0.76 seconds to deallocate network for instance.
Nov 29 08:31:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:36.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.660 232432 INFO nova.compute.manager [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Took 0.22 seconds to detach 1 volumes for instance.
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.730 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.731 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.797 232432 DEBUG oslo_concurrency.processutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:36 compute-2 nova_compute[232428]: 2025-11-29 08:31:36.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:36 compute-2 ceph-mon[77138]: pgmap v2840: 305 pgs: 305 active+clean; 762 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.6 MiB/s wr, 202 op/s
Nov 29 08:31:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:31:37.324 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:31:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:31:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/623293125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.366 232432 DEBUG oslo_concurrency.processutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.374 232432 DEBUG nova.compute.provider_tree [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.399 232432 DEBUG nova.scheduler.client.report [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.437 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.466 232432 INFO nova.scheduler.client.report [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Deleted allocations for instance 8d30dc23-4d84-4468-94cd-9f1300767585
Nov 29 08:31:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.527 232432 DEBUG oslo_concurrency.lockutils [None req-10d90fb4-196d-448f-bdda-f6fcae598436 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "8d30dc23-4d84-4468-94cd-9f1300767585" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.710 232432 DEBUG nova.compute.manager [req-f252c5ed-180e-4883-8595-c7324e1b50e3 req-e755587a-b4ac-40aa-b237-eb6968b4b472 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Received event network-vif-deleted-0984e0d6-449f-45b6-bead-2a6a5cc37e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:31:37 compute-2 nova_compute[232428]: 2025-11-29 08:31:37.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:38 compute-2 ceph-mon[77138]: pgmap v2841: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Nov 29 08:31:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/623293125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:38.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 08:31:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:39.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:39 compute-2 ceph-mon[77138]: pgmap v2842: 305 pgs: 305 active+clean; 777 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Nov 29 08:31:39 compute-2 ceph-mon[77138]: osdmap e378: 3 total, 3 up, 3 in
Nov 29 08:31:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1181121692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1181121692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:31:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:40.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:31:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:41 compute-2 nova_compute[232428]: 2025-11-29 08:31:41.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:42 compute-2 sudo[307117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:42 compute-2 sudo[307117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:42 compute-2 sudo[307117]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:42 compute-2 ceph-mon[77138]: pgmap v2844: 305 pgs: 305 active+clean; 757 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.8 MiB/s wr, 240 op/s
Nov 29 08:31:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2775002244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2775002244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:31:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:42.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:31:42 compute-2 sudo[307142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:31:42 compute-2 sudo[307142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:31:42 compute-2 sudo[307142]: pam_unix(sudo:session): session closed for user root
Nov 29 08:31:42 compute-2 nova_compute[232428]: 2025-11-29 08:31:42.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:31:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393742774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:31:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393742774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:43 compute-2 ceph-mon[77138]: pgmap v2845: 305 pgs: 305 active+clean; 757 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 212 op/s
Nov 29 08:31:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1393742774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:31:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1393742774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:31:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2603678167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:31:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:44 compute-2 podman[307168]: 2025-11-29 08:31:44.70993552 +0000 UTC m=+0.103044277 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:31:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:45.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:45 compute-2 ceph-mon[77138]: pgmap v2846: 305 pgs: 305 active+clean; 535 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 417 KiB/s wr, 207 op/s
Nov 29 08:31:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 08:31:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:46 compute-2 ceph-mon[77138]: osdmap e379: 3 total, 3 up, 3 in
Nov 29 08:31:46 compute-2 nova_compute[232428]: 2025-11-29 08:31:46.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:31:47 compute-2 nova_compute[232428]: 2025-11-29 08:31:47.820 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405092.8179998, 8d30dc23-4d84-4468-94cd-9f1300767585 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:31:47 compute-2 nova_compute[232428]: 2025-11-29 08:31:47.820 232432 INFO nova.compute.manager [-] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] VM Stopped (Lifecycle Event)
Nov 29 08:31:47 compute-2 nova_compute[232428]: 2025-11-29 08:31:47.856 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:47 compute-2 ceph-mon[77138]: pgmap v2848: 305 pgs: 305 active+clean; 464 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 415 KiB/s wr, 134 op/s
Nov 29 08:31:48 compute-2 nova_compute[232428]: 2025-11-29 08:31:48.491 232432 DEBUG nova.compute.manager [None req-fb0b72aa-2df1-42e6-827b-ac7a63a08dcc - - - - - -] [instance: 8d30dc23-4d84-4468-94cd-9f1300767585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:31:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:50 compute-2 ceph-mon[77138]: pgmap v2849: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 628 KiB/s wr, 162 op/s
Nov 29 08:31:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:51.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:51 compute-2 nova_compute[232428]: 2025-11-29 08:31:51.860 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:31:51 compute-2 nova_compute[232428]: 2025-11-29 08:31:51.860 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:31:51 compute-2 nova_compute[232428]: 2025-11-29 08:31:51.861 232432 INFO nova.compute.manager [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Unshelving
Nov 29 08:31:51 compute-2 nova_compute[232428]: 2025-11-29 08:31:51.887 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:52 compute-2 nova_compute[232428]: 2025-11-29 08:31:52.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:52.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:52 compute-2 podman[307191]: 2025-11-29 08:31:52.681170656 +0000 UTC m=+0.073317346 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 08:31:52 compute-2 ceph-mon[77138]: pgmap v2850: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 174 op/s
Nov 29 08:31:52 compute-2 nova_compute[232428]: 2025-11-29 08:31:52.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:53.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:53 compute-2 ceph-mon[77138]: pgmap v2851: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 174 op/s
Nov 29 08:31:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.435 232432 INFO nova.virt.block_device [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Booting with volume 40fdec3a-4544-45a5-9bce-a1d84a8f5b1b at /dev/vdc
Nov 29 08:31:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.622 232432 DEBUG os_brick.utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.624 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.639 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.639 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[6150c01e-b979-4f45-9a5f-2c02d9daa25f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.641 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.653 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.653 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[9b109fe4-524f-4fd8-88f7-d710a8d95a37]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.656 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.668 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.669 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[b9948330-0aaa-4a39-ba58-05d149f8e136]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.671 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[e7987930-2819-4feb-9e66-311726c15fdf]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.672 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.720 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "nvme version" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.724 232432 DEBUG os_brick.initiator.connectors.lightos [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.724 232432 DEBUG os_brick.initiator.connectors.lightos [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.725 232432 DEBUG os_brick.initiator.connectors.lightos [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.725 232432 DEBUG os_brick.utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:31:54 compute-2 nova_compute[232428]: 2025-11-29 08:31:54.726 232432 DEBUG nova.virt.block_device [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating existing volume attachment record: 5f84cb9a-07e0-4365-ac0c-cc6b4bd3e717 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:31:55 compute-2 ceph-mon[77138]: osdmap e380: 3 total, 3 up, 3 in
Nov 29 08:31:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:56.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:56 compute-2 ceph-mon[77138]: pgmap v2853: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 337 KiB/s wr, 79 op/s
Nov 29 08:31:56 compute-2 nova_compute[232428]: 2025-11-29 08:31:56.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:31:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:57 compute-2 nova_compute[232428]: 2025-11-29 08:31:57.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:31:58 compute-2 nova_compute[232428]: 2025-11-29 08:31:58.142 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:31:58 compute-2 ceph-mon[77138]: pgmap v2854: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 347 KiB/s wr, 73 op/s
Nov 29 08:31:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:31:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:58.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:31:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:31:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965475671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3965475671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:31:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:31:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:31:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:00.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:00 compute-2 ceph-mon[77138]: pgmap v2855: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 62 KiB/s wr, 21 op/s
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.277 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.280 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.293 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.359 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:01 compute-2 ceph-mon[77138]: pgmap v2856: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 62 KiB/s wr, 4 op/s
Nov 29 08:32:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1357944057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.764 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.764 232432 INFO nova.compute.claims [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:32:01 compute-2 nova_compute[232428]: 2025-11-29 08:32:01.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:02 compute-2 ovn_controller[134375]: 2025-11-29T08:32:02Z|00799|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 08:32:02 compute-2 nova_compute[232428]: 2025-11-29 08:32:02.136 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:02 compute-2 nova_compute[232428]: 2025-11-29 08:32:02.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:02 compute-2 nova_compute[232428]: 2025-11-29 08:32:02.460 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:02.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:02 compute-2 sudo[307224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:02 compute-2 sudo[307224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:02 compute-2 sudo[307224]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:02 compute-2 sudo[307268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:02 compute-2 sudo[307268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:02 compute-2 sudo[307268]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:02 compute-2 nova_compute[232428]: 2025-11-29 08:32:02.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:32:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2239336583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:03 compute-2 nova_compute[232428]: 2025-11-29 08:32:03.033 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:03 compute-2 nova_compute[232428]: 2025-11-29 08:32:03.041 232432 DEBUG nova.compute.provider_tree [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:32:03 compute-2 nova_compute[232428]: 2025-11-29 08:32:03.157 232432 DEBUG nova.scheduler.client.report [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:03.335 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:03.336 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:03.336 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:03 compute-2 nova_compute[232428]: 2025-11-29 08:32:03.553 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:03 compute-2 nova_compute[232428]: 2025-11-29 08:32:03.739 232432 INFO nova.network.neutron [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 08:32:03 compute-2 podman[307295]: 2025-11-29 08:32:03.744178724 +0000 UTC m=+0.129791544 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:32:03 compute-2 ceph-mon[77138]: pgmap v2857: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 62 KiB/s wr, 4 op/s
Nov 29 08:32:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2239336583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:04 compute-2 nova_compute[232428]: 2025-11-29 08:32:04.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:04 compute-2 nova_compute[232428]: 2025-11-29 08:32:04.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:32:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:05 compute-2 nova_compute[232428]: 2025-11-29 08:32:05.114 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:32:05 compute-2 nova_compute[232428]: 2025-11-29 08:32:05.114 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:32:05 compute-2 nova_compute[232428]: 2025-11-29 08:32:05.115 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:32:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:06 compute-2 ceph-mon[77138]: pgmap v2858: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 50 KiB/s wr, 5 op/s
Nov 29 08:32:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:06.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:06 compute-2 nova_compute[232428]: 2025-11-29 08:32:06.808 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:32:06 compute-2 nova_compute[232428]: 2025-11-29 08:32:06.809 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:32:06 compute-2 nova_compute[232428]: 2025-11-29 08:32:06.809 232432 DEBUG nova.network.neutron [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:32:06 compute-2 nova_compute[232428]: 2025-11-29 08:32:06.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:07 compute-2 nova_compute[232428]: 2025-11-29 08:32:07.123 232432 DEBUG nova.compute.manager [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:07 compute-2 nova_compute[232428]: 2025-11-29 08:32:07.124 232432 DEBUG nova.compute.manager [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing instance network info cache due to event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:32:07 compute-2 nova_compute[232428]: 2025-11-29 08:32:07.124 232432 DEBUG oslo_concurrency.lockutils [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:32:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:07 compute-2 nova_compute[232428]: 2025-11-29 08:32:07.867 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:07 compute-2 nova_compute[232428]: 2025-11-29 08:32:07.933 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:08 compute-2 ceph-mon[77138]: pgmap v2859: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 48 KiB/s wr, 5 op/s
Nov 29 08:32:08 compute-2 nova_compute[232428]: 2025-11-29 08:32:08.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:32:08 compute-2 nova_compute[232428]: 2025-11-29 08:32:08.235 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:32:08 compute-2 nova_compute[232428]: 2025-11-29 08:32:08.236 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:08 compute-2 nova_compute[232428]: 2025-11-29 08:32:08.237 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:08 compute-2 nova_compute[232428]: 2025-11-29 08:32:08.973 232432 DEBUG nova.network.neutron [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.500 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.503 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.503 232432 INFO nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating image(s)
Nov 29 08:32:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.537 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.541 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.543 232432 DEBUG oslo_concurrency.lockutils [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.543 232432 DEBUG nova.network.neutron [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.753 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.781 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.784 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f407d3a28a0870b657164dec19827b62e3220ca9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:09 compute-2 nova_compute[232428]: 2025-11-29 08:32:09.785 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f407d3a28a0870b657164dec19827b62e3220ca9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:10 compute-2 nova_compute[232428]: 2025-11-29 08:32:10.111 232432 DEBUG nova.virt.libvirt.imagebackend [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/fde91722-ea74-45a9-b57b-7fc9203f0965/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/fde91722-ea74-45a9-b57b-7fc9203f0965/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:32:10 compute-2 ceph-mon[77138]: pgmap v2860: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 9.2 KiB/s wr, 4 op/s
Nov 29 08:32:10 compute-2 nova_compute[232428]: 2025-11-29 08:32:10.334 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:10 compute-2 nova_compute[232428]: 2025-11-29 08:32:10.345 232432 DEBUG nova.virt.libvirt.imagebackend [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/fde91722-ea74-45a9-b57b-7fc9203f0965/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 08:32:10 compute-2 nova_compute[232428]: 2025-11-29 08:32:10.348 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] cloning images/fde91722-ea74-45a9-b57b-7fc9203f0965@snap to None/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:32:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.008 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f407d3a28a0870b657164dec19827b62e3220ca9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.169 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.250 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] flattening vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:32:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/508529758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:11 compute-2 ovn_controller[134375]: 2025-11-29T08:32:11Z|00800|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.905 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Image rbd:vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.907 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.907 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Ensure instance console log exists: /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.908 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.908 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.909 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.912 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start _get_guest_xml network_info=[{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:31:15Z,direct_url=<?>,disk_format='raw',id=fde91722-ea74-45a9-b57b-7fc9203f0965,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-935562196-shelved',owner='37972b49ddde4c519c6523d2ea1569b5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:31:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-40fdec3a-4544-45a5-9bce-a1d84a8f5b1b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '40fdec3a-4544-45a5-9bce-a1d84a8f5b1b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attached', 'instance': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56', 'attached_at': '', 'detached_at': '', 'volume_id': '40fdec3a-4544-45a5-9bce-a1d84a8f5b1b', 'serial': '40fdec3a-4544-45a5-9bce-a1d84a8f5b1b'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': None, 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'delete_on_termination': False, 'attachment_id': '5f84cb9a-07e0-4365-ac0c-cc6b4bd3e717', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.917 232432 WARNING nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.921 232432 DEBUG nova.virt.libvirt.host [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.922 232432 DEBUG nova.virt.libvirt.host [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.924 232432 DEBUG nova.virt.libvirt.host [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.924 232432 DEBUG nova.virt.libvirt.host [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.925 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.926 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:31:15Z,direct_url=<?>,disk_format='raw',id=fde91722-ea74-45a9-b57b-7fc9203f0965,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-935562196-shelved',owner='37972b49ddde4c519c6523d2ea1569b5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:31:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.926 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.926 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.926 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.927 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.927 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.927 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.927 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.927 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.928 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.928 232432 DEBUG nova.virt.hardware [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.928 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:11 compute-2 nova_compute[232428]: 2025-11-29 08:32:11.944 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:32:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678906998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:32:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3920409863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.374 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:12 compute-2 ceph-mon[77138]: pgmap v2861: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 6.1 KiB/s wr, 4 op/s
Nov 29 08:32:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4214025956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2678906998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1043514117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3920409863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.411 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.415 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.461 232432 DEBUG nova.network.neutron [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updated VIF entry in instance network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.463 232432 DEBUG nova.network.neutron [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.478 232432 DEBUG oslo_concurrency.lockutils [req-0e6b0dcc-d2c6-4fdd-9d4b-5f8f0d7cd4cc req-1e20f928-c107-47b2-ae91-40a7763d5c71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:32:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.869 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:32:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2246505227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.892 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.929 232432 DEBUG nova.virt.libvirt.vif [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='fde91722-ea74-45a9-b57b-7fc9203f0965',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1565500821',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:31:35.871379',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fde91722-ea74-45a9-b57b-7fc9203f0965'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.930 232432 DEBUG nova.network.os_vif_util [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.931 232432 DEBUG nova.network.os_vif_util [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.932 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.953 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <uuid>4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</uuid>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <name>instance-000000ab</name>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-935562196</nova:name>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:32:11</nova:creationTime>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:user uuid="e6de0587a3794e30acefc687f435d388">tempest-AttachVolumeShelveTestJSON-1751768432-project-member</nova:user>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:project uuid="37972b49ddde4c519c6523d2ea1569b5">tempest-AttachVolumeShelveTestJSON-1751768432</nova:project>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="fde91722-ea74-45a9-b57b-7fc9203f0965"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <nova:port uuid="1576b647-a0ba-45ac-afa5-c62b909bb7e9">
Nov 29 08:32:12 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <system>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="serial">4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="uuid">4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </system>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <os>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </os>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <features>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </features>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-40fdec3a-4544-45a5-9bce-a1d84a8f5b1b">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </source>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:32:12 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <target dev="vdc" bus="virtio"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <serial>40fdec3a-4544-45a5-9bce-a1d84a8f5b1b</serial>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:53:4e:1f"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <target dev="tap1576b647-a0"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/console.log" append="off"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <video>
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </video>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:32:12 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:32:12 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:32:12 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:32:12 compute-2 nova_compute[232428]: </domain>
Nov 29 08:32:12 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.954 232432 DEBUG nova.compute.manager [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Preparing to wait for external event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.955 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.955 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.956 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.956 232432 DEBUG nova.virt.libvirt.vif [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='fde91722-ea74-45a9-b57b-7fc9203f0965',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1565500821',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:31:35.871379',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fde91722-ea74-45a9-b57b-7fc9203f0965'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.957 232432 DEBUG nova.network.os_vif_util [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.957 232432 DEBUG nova.network.os_vif_util [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.958 232432 DEBUG os_vif [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.958 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.959 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.959 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.964 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1576b647-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.964 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1576b647-a0, col_values=(('external_ids', {'iface-id': '1576b647-a0ba-45ac-afa5-c62b909bb7e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:4e:1f', 'vm-uuid': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:12 compute-2 NetworkManager[48993]: <info>  [1764405132.9669] manager: (tap1576b647-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.969 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.974 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:12 compute-2 nova_compute[232428]: 2025-11-29 08:32:12.976 232432 INFO os_vif [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0')
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.031 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.032 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.033 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.033 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No VIF found with MAC fa:16:3e:53:4e:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.033 232432 INFO nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Using config drive
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.061 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.084 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:13 compute-2 nova_compute[232428]: 2025-11-29 08:32:13.137 232432 DEBUG nova.objects.instance [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'keypairs' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2246505227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.486739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133486808, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1005, "num_deletes": 254, "total_data_size": 1942117, "memory_usage": 1965088, "flush_reason": "Manual Compaction"}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133497633, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 1279833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60941, "largest_seqno": 61941, "table_properties": {"data_size": 1275156, "index_size": 2265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10750, "raw_average_key_size": 20, "raw_value_size": 1265604, "raw_average_value_size": 2396, "num_data_blocks": 99, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405069, "oldest_key_time": 1764405069, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 10942 microseconds, and 6151 cpu microseconds.
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.497677) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 1279833 bytes OK
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.497703) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499218) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499235) EVENT_LOG_v1 {"time_micros": 1764405133499229, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1937072, prev total WAL file size 1937072, number of live WAL files 2.
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(1249KB)], [117(12MB)]
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133500015, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14908211, "oldest_snapshot_seqno": -1}
Nov 29 08:32:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9142 keys, 12980935 bytes, temperature: kUnknown
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133599236, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12980935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12919663, "index_size": 37288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 238846, "raw_average_key_size": 26, "raw_value_size": 12757062, "raw_average_value_size": 1395, "num_data_blocks": 1447, "num_entries": 9142, "num_filter_entries": 9142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.599622) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12980935 bytes
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.601098) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.1 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(21.8) write-amplify(10.1) OK, records in: 9667, records dropped: 525 output_compression: NoCompression
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.601128) EVENT_LOG_v1 {"time_micros": 1764405133601113, "job": 74, "event": "compaction_finished", "compaction_time_micros": 99354, "compaction_time_cpu_micros": 41847, "output_level": 6, "num_output_files": 1, "total_output_size": 12980935, "num_input_records": 9667, "num_output_records": 9142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133601705, "job": 74, "event": "table_file_deletion", "file_number": 119}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133605556, "job": 74, "event": "table_file_deletion", "file_number": 117}
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.605636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.605642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.605645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.605646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:13 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:32:13.605648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:32:14 compute-2 nova_compute[232428]: 2025-11-29 08:32:14.459 232432 INFO nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating config drive at /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config
Nov 29 08:32:14 compute-2 nova_compute[232428]: 2025-11-29 08:32:14.475 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7htqwm4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:14 compute-2 ceph-mon[77138]: pgmap v2862: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 6.1 KiB/s wr, 4 op/s
Nov 29 08:32:14 compute-2 sudo[307623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:14 compute-2 sudo[307623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:14 compute-2 sudo[307623]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:14 compute-2 sudo[307649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:32:14 compute-2 sudo[307649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:14 compute-2 sudo[307649]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:14 compute-2 nova_compute[232428]: 2025-11-29 08:32:14.636 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7htqwm4" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:14 compute-2 sudo[307676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:14 compute-2 nova_compute[232428]: 2025-11-29 08:32:14.688 232432 DEBUG nova.storage.rbd_utils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:32:14 compute-2 sudo[307676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:14 compute-2 sudo[307676]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:14 compute-2 nova_compute[232428]: 2025-11-29 08:32:14.694 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:14 compute-2 sudo[307719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:32:14 compute-2 sudo[307719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:14 compute-2 podman[307744]: 2025-11-29 08:32:14.847248466 +0000 UTC m=+0.056939694 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:32:15 compute-2 sudo[307719]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:15.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:32:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:16 compute-2 ceph-mon[77138]: pgmap v2863: 305 pgs: 305 active+clean; 492 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 73 op/s
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.736 232432 DEBUG oslo_concurrency.processutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.737 232432 INFO nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deleting local config drive /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config because it was imported into RBD.
Nov 29 08:32:16 compute-2 kernel: tap1576b647-a0: entered promiscuous mode
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.816 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:16 compute-2 ovn_controller[134375]: 2025-11-29T08:32:16Z|00801|binding|INFO|Claiming lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 for this chassis.
Nov 29 08:32:16 compute-2 ovn_controller[134375]: 2025-11-29T08:32:16Z|00802|binding|INFO|1576b647-a0ba-45ac-afa5-c62b909bb7e9: Claiming fa:16:3e:53:4e:1f 10.100.0.4
Nov 29 08:32:16 compute-2 NetworkManager[48993]: <info>  [1764405136.8180] manager: (tap1576b647-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.831 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:4e:1f 10.100.0.4'], port_security=['fa:16:3e:53:4e:1f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '496c1f15-8168-427c-a8c0-5ed474644583', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=1576b647-a0ba-45ac-afa5-c62b909bb7e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.833 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 in datapath 4c541784-a3aa-4c55-a753-a31504941937 bound to our chassis
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.835 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:32:16 compute-2 ovn_controller[134375]: 2025-11-29T08:32:16Z|00803|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 ovn-installed in OVS
Nov 29 08:32:16 compute-2 ovn_controller[134375]: 2025-11-29T08:32:16Z|00804|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 up in Southbound
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.852 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5241e48f-df1a-4fe3-9861-1a8504071732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.853 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c541784-a1 in ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:32:16 compute-2 systemd-udevd[307826]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.855 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c541784-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.855 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[05599130-c8bc-4ada-8ecb-9c9acfd07ff3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.856 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e57053de-5f2a-4ff0-86e8-275f75ac0754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 systemd-machined[194747]: New machine qemu-84-instance-000000ab.
Nov 29 08:32:16 compute-2 NetworkManager[48993]: <info>  [1764405136.8704] device (tap1576b647-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:32:16 compute-2 NetworkManager[48993]: <info>  [1764405136.8715] device (tap1576b647-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.873 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[e84ca422-74ba-4e96-aca9-1e969bcb16b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 systemd[1]: Started Virtual Machine qemu-84-instance-000000ab.
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c368af-e347-48b7-bb14-79807ca3d79e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 nova_compute[232428]: 2025-11-29 08:32:16.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.922 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[90a69134-db20-4420-97d3-344994214566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.927 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[44dd4a65-9eca-417d-8d23-b9d73a761990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 systemd-udevd[307829]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:32:16 compute-2 NetworkManager[48993]: <info>  [1764405136.9285] manager: (tap4c541784-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.961 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9998672b-ebba-4cd7-9028-bda01adc9154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.964 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a3021c03-93e5-4451-ad64-ff8375f50ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:16 compute-2 NetworkManager[48993]: <info>  [1764405136.9892] device (tap4c541784-a0): carrier: link connected
Nov 29 08:32:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:16.997 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6b3684-8aee-4b80-9018-b06617b5a436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.020 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4dac4039-bda9-405f-9bad-70c0cae899d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810325, 'reachable_time': 37255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307858, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.039 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[850bb5d8-da7b-4782-9eee-2918da4af369]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:9545'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810325, 'tstamp': 810325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307859, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.059 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[06de4495-920d-476d-90d2-34a7bcc4b9f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810325, 'reachable_time': 37255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307860, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.096 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[70376754-e00a-4c12-aff9-e01f0ec98756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.183 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4bd929-a1dd-4588-97ee-a810065bd2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.185 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.185 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.186 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c541784-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:17 compute-2 NetworkManager[48993]: <info>  [1764405137.1889] manager: (tap4c541784-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 08:32:17 compute-2 kernel: tap4c541784-a0: entered promiscuous mode
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.191 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c541784-a0, col_values=(('external_ids', {'iface-id': '7f1f6d69-4406-4e27-a503-d839c5cccd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.188 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:17 compute-2 ovn_controller[134375]: 2025-11-29T08:32:17Z|00805|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.195 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.196 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d89688-6f7f-4185-a577-9e63fe2e4ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.198 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:32:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:17.200 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'env', 'PROCESS_TAG=haproxy-4c541784-a3aa-4c55-a753-a31504941937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c541784-a3aa-4c55-a753-a31504941937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.211 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.371 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405137.3702154, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.371 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Started (Lifecycle Event)
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.401 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.406 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405137.3707902, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.406 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Paused (Lifecycle Event)
Nov 29 08:32:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.604 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.609 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:32:17 compute-2 podman[307952]: 2025-11-29 08:32:17.563470653 +0000 UTC m=+0.028186483 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.672 232432 DEBUG nova.compute.manager [req-961ee746-2d53-47bd-bd79-52588ec01c97 req-10aaa609-b07a-44e2-9411-4d95c1a04737 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.672 232432 DEBUG oslo_concurrency.lockutils [req-961ee746-2d53-47bd-bd79-52588ec01c97 req-10aaa609-b07a-44e2-9411-4d95c1a04737 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.673 232432 DEBUG oslo_concurrency.lockutils [req-961ee746-2d53-47bd-bd79-52588ec01c97 req-10aaa609-b07a-44e2-9411-4d95c1a04737 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.673 232432 DEBUG oslo_concurrency.lockutils [req-961ee746-2d53-47bd-bd79-52588ec01c97 req-10aaa609-b07a-44e2-9411-4d95c1a04737 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.674 232432 DEBUG nova.compute.manager [req-961ee746-2d53-47bd-bd79-52588ec01c97 req-10aaa609-b07a-44e2-9411-4d95c1a04737 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Processing event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.675 232432 DEBUG nova.compute.manager [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.679 232432 DEBUG nova.virt.libvirt.driver [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.684 232432 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance spawned successfully.
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.748 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.749 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405137.678443, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.749 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Resumed (Lifecycle Event)
Nov 29 08:32:17 compute-2 nova_compute[232428]: 2025-11-29 08:32:17.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.032 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.037 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:32:18 compute-2 ceph-mon[77138]: pgmap v2864: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 08:32:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:32:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:32:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:32:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.385 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.405 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.407 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.407 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.408 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.408 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:18 compute-2 podman[307952]: 2025-11-29 08:32:18.41121408 +0000 UTC m=+0.875929880 container create 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:32:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:18 compute-2 systemd[1]: Started libpod-conmon-7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc.scope.
Nov 29 08:32:18 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:32:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934045e02c3ada40231b611535485f0cae194a0615255141f4ca28e38b88dc18/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:32:18 compute-2 podman[307952]: 2025-11-29 08:32:18.688140369 +0000 UTC m=+1.152856179 container init 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:32:18 compute-2 podman[307952]: 2025-11-29 08:32:18.698087531 +0000 UTC m=+1.162803321 container start 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:32:18 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [NOTICE]   (307992) : New worker (307994) forked
Nov 29 08:32:18 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [NOTICE]   (307992) : Loading success.
Nov 29 08:32:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:32:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 62K writes, 243K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.05 MB/s
                                           Cumulative WAL: 62K writes, 22K syncs, 2.71 writes per sync, written: 0.24 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9937 writes, 36K keys, 9937 commit groups, 1.0 writes per commit group, ingest: 36.37 MB, 0.06 MB/s
                                           Interval WAL: 9937 writes, 3897 syncs, 2.55 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:32:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:32:18 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2130446304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.901 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.981 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.982 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.982 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.985 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:32:18 compute-2 nova_compute[232428]: 2025-11-29 08:32:18.986 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.188 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.190 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3832MB free_disk=20.84532928466797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.190 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.191 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:32:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:32:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:32:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2130446304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.281 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance dc8140a9-7bef-42f8-867c-13e29f022673 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.282 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.282 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.282 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:32:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:19.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.636 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.957 232432 DEBUG nova.compute.manager [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.958 232432 DEBUG oslo_concurrency.lockutils [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.959 232432 DEBUG oslo_concurrency.lockutils [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.959 232432 DEBUG oslo_concurrency.lockutils [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.959 232432 DEBUG nova.compute.manager [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:32:19 compute-2 nova_compute[232428]: 2025-11-29 08:32:19.959 232432 WARNING nova.compute.manager [req-36f74b25-3b76-4708-856a-a9a26e8dc0a9 req-9094578f-aa96-4f73-8b93-e9ed81826c08 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received unexpected event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with vm_state shelved_offloaded and task_state spawning.
Nov 29 08:32:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:32:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/648196895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.295 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.302 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:32:20 compute-2 ceph-mon[77138]: pgmap v2865: 305 pgs: 305 active+clean; 538 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.1 MiB/s wr, 110 op/s
Nov 29 08:32:20 compute-2 ceph-mon[77138]: osdmap e381: 3 total, 3 up, 3 in
Nov 29 08:32:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2750205141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.383 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.434 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.435 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:20 compute-2 nova_compute[232428]: 2025-11-29 08:32:20.716 232432 DEBUG nova.compute.manager [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:21 compute-2 nova_compute[232428]: 2025-11-29 08:32:21.065 232432 DEBUG oslo_concurrency.lockutils [None req-3261ecd4-b69c-44a2-9281-30212824d464 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 29.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/648196895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:21 compute-2 nova_compute[232428]: 2025-11-29 08:32:21.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:22 compute-2 ceph-mon[77138]: pgmap v2867: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.1 MiB/s wr, 197 op/s
Nov 29 08:32:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3905745017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:22 compute-2 sudo[308030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:22 compute-2 sudo[308030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:22 compute-2 sudo[308030]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:22 compute-2 sudo[308056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:22 compute-2 sudo[308056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:22 compute-2 nova_compute[232428]: 2025-11-29 08:32:22.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:22 compute-2 sudo[308056]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:23 compute-2 podman[308054]: 2025-11-29 08:32:23.003506965 +0000 UTC m=+0.087526301 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:32:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:23 compute-2 ceph-mon[77138]: pgmap v2868: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.1 MiB/s wr, 197 op/s
Nov 29 08:32:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:25 compute-2 sudo[308101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:25 compute-2 sudo[308101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:25 compute-2 sudo[308101]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:25 compute-2 sudo[308126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:32:25 compute-2 sudo[308126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:25 compute-2 sudo[308126]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:32:25 compute-2 ceph-mon[77138]: pgmap v2869: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 180 op/s
Nov 29 08:32:25 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:32:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3052037109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:26 compute-2 nova_compute[232428]: 2025-11-29 08:32:26.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2042810630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:27 compute-2 nova_compute[232428]: 2025-11-29 08:32:27.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:32:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455453273' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:32:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455453273' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 ceph-mon[77138]: pgmap v2870: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 457 KiB/s wr, 192 op/s
Nov 29 08:32:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1455453273' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1455453273' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:32:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236428780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:32:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236428780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:28.679 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:32:28 compute-2 nova_compute[232428]: 2025-11-29 08:32:28.680 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:28.682 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:32:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 08:32:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2236428780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2236428780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:29 compute-2 ceph-mon[77138]: osdmap e382: 3 total, 3 up, 3 in
Nov 29 08:32:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:29.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 08:32:30 compute-2 ceph-mon[77138]: pgmap v2871: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 247 KiB/s wr, 176 op/s
Nov 29 08:32:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:30.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:31 compute-2 ovn_controller[134375]: 2025-11-29T08:32:31Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:4e:1f 10.100.0.4
Nov 29 08:32:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:31.685 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:31 compute-2 ceph-mon[77138]: osdmap e383: 3 total, 3 up, 3 in
Nov 29 08:32:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/405876529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/405876529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:31 compute-2 ceph-mon[77138]: pgmap v2874: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 868 KiB/s rd, 21 KiB/s wr, 193 op/s
Nov 29 08:32:31 compute-2 nova_compute[232428]: 2025-11-29 08:32:31.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:32 compute-2 nova_compute[232428]: 2025-11-29 08:32:32.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:32 compute-2 nova_compute[232428]: 2025-11-29 08:32:32.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:32:32 compute-2 nova_compute[232428]: 2025-11-29 08:32:32.237 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:32:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:32.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:32 compute-2 nova_compute[232428]: 2025-11-29 08:32:32.973 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:33.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:34 compute-2 nova_compute[232428]: 2025-11-29 08:32:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:34 compute-2 nova_compute[232428]: 2025-11-29 08:32:34.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:32:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 08:32:34 compute-2 ceph-mon[77138]: pgmap v2875: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 145 KiB/s rd, 17 KiB/s wr, 112 op/s
Nov 29 08:32:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1848767357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1848767357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:34 compute-2 podman[308156]: 2025-11-29 08:32:34.685281701 +0000 UTC m=+0.086377899 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:32:35 compute-2 ceph-mon[77138]: osdmap e384: 3 total, 3 up, 3 in
Nov 29 08:32:35 compute-2 ceph-mon[77138]: pgmap v2877: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 311 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 26 KiB/s wr, 155 op/s
Nov 29 08:32:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:35.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:32:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/652086645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:32:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/652086645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:36.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/652086645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/652086645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:36 compute-2 nova_compute[232428]: 2025-11-29 08:32:36.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.682 232432 DEBUG nova.compute.manager [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.683 232432 DEBUG nova.compute.manager [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing instance network info cache due to event network-changed-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.684 232432 DEBUG oslo_concurrency.lockutils [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.684 232432 DEBUG oslo_concurrency.lockutils [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.685 232432 DEBUG nova.network.neutron [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Refreshing network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:32:37 compute-2 ceph-mon[77138]: pgmap v2878: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 897 KiB/s rd, 22 KiB/s wr, 181 op/s
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.875 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.876 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.876 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.877 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.877 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.879 232432 INFO nova.compute.manager [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Terminating instance
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.880 232432 DEBUG nova.compute.manager [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:32:37 compute-2 nova_compute[232428]: 2025-11-29 08:32:37.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:37 compute-2 kernel: tap024fe302-6c (unregistering): left promiscuous mode
Nov 29 08:32:37 compute-2 NetworkManager[48993]: <info>  [1764405157.9934] device (tap024fe302-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 ovn_controller[134375]: 2025-11-29T08:32:38Z|00806|binding|INFO|Releasing lport 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 from this chassis (sb_readonly=0)
Nov 29 08:32:38 compute-2 ovn_controller[134375]: 2025-11-29T08:32:38Z|00807|binding|INFO|Setting lport 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 down in Southbound
Nov 29 08:32:38 compute-2 ovn_controller[134375]: 2025-11-29T08:32:38Z|00808|binding|INFO|Removing iface tap024fe302-6c ovn-installed in OVS
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.020 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Nov 29 08:32:38 compute-2 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000aa.scope: Consumed 19.747s CPU time.
Nov 29 08:32:38 compute-2 systemd-machined[194747]: Machine qemu-83-instance-000000aa terminated.
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.059 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:42:3c 10.100.0.4'], port_security=['fa:16:3e:02:42:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dc8140a9-7bef-42f8-867c-13e29f022673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e8e7407a7c44208a503e8225c1cf518', 'neutron:revision_number': '4', 'neutron:security_group_ids': '056d3a24-7b10-4a45-884a-1b8e5def99f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a117267-2677-4e97-b3d9-4edd30f1b375, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.061 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 in datapath 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 unbound from our chassis
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.062 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.064 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[551c1304-d226-4440-b1a1-fb9e6430e99d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.065 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 namespace which is not needed anymore
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.112 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.127 232432 INFO nova.virt.libvirt.driver [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Instance destroyed successfully.
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.128 232432 DEBUG nova.objects.instance [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'resources' on Instance uuid dc8140a9-7bef-42f8-867c-13e29f022673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.235 232432 DEBUG nova.virt.libvirt.vif [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1520519822',display_name='tempest-TestStampPattern-server-1520519822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1520519822',id=170,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-y414pyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:11Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=dc8140a9-7bef-42f8-867c-13e29f022673,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.236 232432 DEBUG nova.network.os_vif_util [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:32:38 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [NOTICE]   (306241) : haproxy version is 2.8.14-c23fe91
Nov 29 08:32:38 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [NOTICE]   (306241) : path to executable is /usr/sbin/haproxy
Nov 29 08:32:38 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [WARNING]  (306241) : Exiting Master process...
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.238 232432 DEBUG nova.network.os_vif_util [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:32:38 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [ALERT]    (306241) : Current worker (306243) exited with code 143 (Terminated)
Nov 29 08:32:38 compute-2 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[306237]: [WARNING]  (306241) : All workers exited. Exiting... (0)
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.239 232432 DEBUG os_vif [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:32:38 compute-2 systemd[1]: libpod-5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9.scope: Deactivated successfully.
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.245 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024fe302-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 podman[308217]: 2025-11-29 08:32:38.249415513 +0000 UTC m=+0.071632970 container died 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.250 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.257 232432 INFO os_vif [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:42:3c,bridge_name='br-int',has_traffic_filtering=True,id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap024fe302-6c')
Nov 29 08:32:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9-userdata-shm.mount: Deactivated successfully.
Nov 29 08:32:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-4e8f318b5b114e802fe080a8ff968c5541511d5859e70766449b088a2ed2730f-merged.mount: Deactivated successfully.
Nov 29 08:32:38 compute-2 podman[308217]: 2025-11-29 08:32:38.527268861 +0000 UTC m=+0.349486308 container cleanup 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:32:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:38 compute-2 podman[308263]: 2025-11-29 08:32:38.70207532 +0000 UTC m=+0.148638757 container remove 5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.710 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d4668b21-af58-4ff1-9e5f-32db70e098c3]: (4, ('Sat Nov 29 08:32:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 (5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9)\n5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9\nSat Nov 29 08:32:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 (5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9)\n5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.713 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[faeb7aa8-29e2-4da7-b44b-b3c5088a7d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.714 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bbeaef7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.716 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 kernel: tap9bbeaef7-10: left promiscuous mode
Nov 29 08:32:38 compute-2 systemd[1]: libpod-conmon-5e3c021c376f8e74db63ef28c03ed3e90967dc04c5ec0fb8a486a3ecd9b919b9.scope: Deactivated successfully.
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.724 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b258b9-d95b-4dfb-a815-e12b13cca2a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 nova_compute[232428]: 2025-11-29 08:32:38.734 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.737 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2edc9121-cbd7-4b5e-b19a-81a2d174aef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.738 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c836198-7b41-46c1-9964-b2d6740b5cdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.759 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[342e746e-2e63-4dce-b0a7-ed6df3406d4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799461, 'reachable_time': 41472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308276, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 systemd[1]: run-netns-ovnmeta\x2d9bbeaef7\x2d1d9b\x2d48d5\x2db82f\x2da3c3a4c84cd6.mount: Deactivated successfully.
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.763 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:32:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:38.763 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[61c2a186-7b8a-4c66-95b2-1770ee0a02d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.486 232432 INFO nova.virt.libvirt.driver [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Deleting instance files /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673_del
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.487 232432 INFO nova.virt.libvirt.driver [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Deletion of /var/lib/nova/instances/dc8140a9-7bef-42f8-867c-13e29f022673_del complete
Nov 29 08:32:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.595 232432 INFO nova.compute.manager [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Took 1.71 seconds to destroy the instance on the hypervisor.
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.596 232432 DEBUG oslo.service.loopingcall [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.596 232432 DEBUG nova.compute.manager [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:32:39 compute-2 nova_compute[232428]: 2025-11-29 08:32:39.597 232432 DEBUG nova.network.neutron [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:32:40 compute-2 ceph-mon[77138]: pgmap v2879: 305 pgs: 305 active+clean; 283 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 37 KiB/s wr, 191 op/s
Nov 29 08:32:40 compute-2 ceph-mon[77138]: osdmap e385: 3 total, 3 up, 3 in
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.141 232432 DEBUG nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-unplugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.142 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.142 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.142 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.143 232432 DEBUG nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] No waiting events found dispatching network-vif-unplugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.143 232432 DEBUG nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-unplugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.143 232432 DEBUG nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.143 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.144 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.144 232432 DEBUG oslo_concurrency.lockutils [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.145 232432 DEBUG nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] No waiting events found dispatching network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.145 232432 WARNING nova.compute.manager [req-613bfe52-954a-47bd-88fc-fc0ee1d02a75 req-fb260e5f-10f3-431d-afd0-d1e0c574bea4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received unexpected event network-vif-plugged-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 for instance with vm_state active and task_state deleting.
Nov 29 08:32:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.746 232432 DEBUG oslo_concurrency.lockutils [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.747 232432 DEBUG oslo_concurrency.lockutils [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.816 232432 INFO nova.compute.manager [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Detaching volume 40fdec3a-4544-45a5-9bce-a1d84a8f5b1b
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.952 232432 INFO nova.virt.block_device [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Attempting to driver detach volume 40fdec3a-4544-45a5-9bce-a1d84a8f5b1b from mountpoint /dev/vdc
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.967 232432 DEBUG nova.virt.libvirt.driver [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Attempting to detach device vdc from instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:32:40 compute-2 nova_compute[232428]: 2025-11-29 08:32:40.969 232432 DEBUG nova.virt.libvirt.guest [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:32:40 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-40fdec3a-4544-45a5-9bce-a1d84a8f5b1b">
Nov 29 08:32:40 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]:   </source>
Nov 29 08:32:40 compute-2 nova_compute[232428]:   <target dev="vdc" bus="virtio"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]:   <serial>40fdec3a-4544-45a5-9bce-a1d84a8f5b1b</serial>
Nov 29 08:32:40 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 08:32:40 compute-2 nova_compute[232428]: </disk>
Nov 29 08:32:40 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.045 232432 INFO nova.virt.libvirt.driver [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully detached device vdc from instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 from the persistent domain config.
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.046 232432 DEBUG nova.virt.libvirt.driver [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.047 232432 DEBUG nova.virt.libvirt.guest [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:32:41 compute-2 nova_compute[232428]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]:   <source protocol="rbd" name="volumes/volume-40fdec3a-4544-45a5-9bce-a1d84a8f5b1b">
Nov 29 08:32:41 compute-2 nova_compute[232428]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]:     <host name="192.168.122.102" port="6789"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]:     <host name="192.168.122.101" port="6789"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]:   </source>
Nov 29 08:32:41 compute-2 nova_compute[232428]:   <target dev="vdc" bus="virtio"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]:   <serial>40fdec3a-4544-45a5-9bce-a1d84a8f5b1b</serial>
Nov 29 08:32:41 compute-2 nova_compute[232428]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 08:32:41 compute-2 nova_compute[232428]: </disk>
Nov 29 08:32:41 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.105 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764405161.104612, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.106 232432 DEBUG nova.virt.libvirt.driver [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.109 232432 INFO nova.virt.libvirt.driver [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully detached device vdc from instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 from the live domain config.
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.364 232432 DEBUG nova.network.neutron [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updated VIF entry in instance network info cache for port 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.365 232432 DEBUG nova.network.neutron [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [{"id": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "address": "fa:16:3e:02:42:3c", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap024fe302-6c", "ovs_interfaceid": "024fe302-6cb7-4c8c-9d08-bcd0c8c51da0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.418 232432 DEBUG nova.compute.manager [req-369e884c-6473-4000-ba2b-8ca359bd51d5 req-fc498972-24dd-4b07-9abe-7714a279e2d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Received event network-vif-deleted-024fe302-6cb7-4c8c-9d08-bcd0c8c51da0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.419 232432 INFO nova.compute.manager [req-369e884c-6473-4000-ba2b-8ca359bd51d5 req-fc498972-24dd-4b07-9abe-7714a279e2d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Neutron deleted interface 024fe302-6cb7-4c8c-9d08-bcd0c8c51da0; detaching it from the instance and deleting it from the info cache
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.419 232432 DEBUG nova.network.neutron [req-369e884c-6473-4000-ba2b-8ca359bd51d5 req-fc498972-24dd-4b07-9abe-7714a279e2d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.484 232432 DEBUG nova.network.neutron [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.486 232432 DEBUG oslo_concurrency.lockutils [req-24749549-fac3-49a7-9dc9-9f208dbdf0b2 req-40f58f73-c8de-4a9c-9bf6-360fe317cd19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dc8140a9-7bef-42f8-867c-13e29f022673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.504 232432 DEBUG nova.objects.instance [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'flavor' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.679 232432 INFO nova.compute.manager [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Took 2.08 seconds to deallocate network for instance.
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.685 232432 DEBUG nova.compute.manager [req-369e884c-6473-4000-ba2b-8ca359bd51d5 req-fc498972-24dd-4b07-9abe-7714a279e2d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Detach interface failed, port_id=024fe302-6cb7-4c8c-9d08-bcd0c8c51da0, reason: Instance dc8140a9-7bef-42f8-867c-13e29f022673 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:32:41 compute-2 nova_compute[232428]: 2025-11-29 08:32:41.911 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.017 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.017 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.034 232432 DEBUG oslo_concurrency.lockutils [None req-0cea8c9f-0255-48fa-a5dd-acf92bd053cc e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.163 232432 DEBUG oslo_concurrency.processutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:42 compute-2 ceph-mon[77138]: pgmap v2881: 305 pgs: 305 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 802 KiB/s rd, 24 KiB/s wr, 163 op/s
Nov 29 08:32:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:42.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:42 compute-2 ovn_controller[134375]: 2025-11-29T08:32:42Z|00809|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.643 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:32:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2763537911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.685 232432 DEBUG oslo_concurrency.processutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.691 232432 DEBUG nova.compute.provider_tree [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.884 232432 DEBUG nova.scheduler.client.report [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.910 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:42 compute-2 nova_compute[232428]: 2025-11-29 08:32:42.946 232432 INFO nova.scheduler.client.report [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Deleted allocations for instance dc8140a9-7bef-42f8-867c-13e29f022673
Nov 29 08:32:43 compute-2 sudo[308305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:43 compute-2 sudo[308305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:43 compute-2 sudo[308305]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:43 compute-2 sudo[308330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:32:43 compute-2 sudo[308330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:32:43 compute-2 sudo[308330]: pam_unix(sudo:session): session closed for user root
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.162 232432 DEBUG oslo_concurrency.lockutils [None req-1a47b2c6-8659-448d-8658-a3947d9a9b4c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "dc8140a9-7bef-42f8-867c-13e29f022673" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.249 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2763537911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.328 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.328 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.329 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.329 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.330 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.331 232432 INFO nova.compute.manager [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Terminating instance
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.333 232432 DEBUG nova.compute.manager [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:32:43 compute-2 kernel: tap1576b647-a0 (unregistering): left promiscuous mode
Nov 29 08:32:43 compute-2 NetworkManager[48993]: <info>  [1764405163.4689] device (tap1576b647-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:32:43 compute-2 ovn_controller[134375]: 2025-11-29T08:32:43Z|00810|binding|INFO|Releasing lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 from this chassis (sb_readonly=0)
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.481 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 ovn_controller[134375]: 2025-11-29T08:32:43Z|00811|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 down in Southbound
Nov 29 08:32:43 compute-2 ovn_controller[134375]: 2025-11-29T08:32:43Z|00812|binding|INFO|Removing iface tap1576b647-a0 ovn-installed in OVS
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.483 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:43.489 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:4e:1f 10.100.0.4'], port_security=['fa:16:3e:53:4e:1f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '496c1f15-8168-427c-a8c0-5ed474644583', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=1576b647-a0ba-45ac-afa5-c62b909bb7e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:32:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:43.491 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 in datapath 4c541784-a3aa-4c55-a753-a31504941937 unbound from our chassis
Nov 29 08:32:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:43.493 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c541784-a3aa-4c55-a753-a31504941937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:32:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:43.494 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0aa794-9985-4dae-a223-a72309bf857c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:43.495 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace which is not needed anymore
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Nov 29 08:32:43 compute-2 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ab.scope: Consumed 15.255s CPU time.
Nov 29 08:32:43 compute-2 systemd-machined[194747]: Machine qemu-84-instance-000000ab terminated.
Nov 29 08:32:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.556 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.573 232432 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance destroyed successfully.
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.574 232432 DEBUG nova.objects.instance [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'resources' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.618 232432 DEBUG nova.virt.libvirt.vif [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/QbFQHfxyoj1W/t5pawyERQGZRClAr1DxU8gg8udDNKRDAgSRqjviYC9CV8DByogltybpLGJLh5e67lMbhPKRIYOrGJnVOyLrNIthayQV7k/8lr+xvE29t9ygQsTGfcQ==',key_name='tempest-keypair-1565500821',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:32:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.619 232432 DEBUG nova.network.os_vif_util [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.620 232432 DEBUG nova.network.os_vif_util [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.620 232432 DEBUG os_vif [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.622 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1576b647-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.627 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.629 232432 INFO os_vif [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0')
Nov 29 08:32:43 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [NOTICE]   (307992) : haproxy version is 2.8.14-c23fe91
Nov 29 08:32:43 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [NOTICE]   (307992) : path to executable is /usr/sbin/haproxy
Nov 29 08:32:43 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [WARNING]  (307992) : Exiting Master process...
Nov 29 08:32:43 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [ALERT]    (307992) : Current worker (307994) exited with code 143 (Terminated)
Nov 29 08:32:43 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[307988]: [WARNING]  (307992) : All workers exited. Exiting... (0)
Nov 29 08:32:43 compute-2 systemd[1]: libpod-7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc.scope: Deactivated successfully.
Nov 29 08:32:43 compute-2 podman[308390]: 2025-11-29 08:32:43.677547837 +0000 UTC m=+0.061413122 container died 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:32:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc-userdata-shm.mount: Deactivated successfully.
Nov 29 08:32:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-934045e02c3ada40231b611535485f0cae194a0615255141f4ca28e38b88dc18-merged.mount: Deactivated successfully.
Nov 29 08:32:43 compute-2 podman[308390]: 2025-11-29 08:32:43.728441881 +0000 UTC m=+0.112307146 container cleanup 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:32:43 compute-2 systemd[1]: libpod-conmon-7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc.scope: Deactivated successfully.
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.942 232432 DEBUG nova.compute.manager [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.943 232432 DEBUG oslo_concurrency.lockutils [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.943 232432 DEBUG oslo_concurrency.lockutils [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.943 232432 DEBUG oslo_concurrency.lockutils [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.943 232432 DEBUG nova.compute.manager [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:32:43 compute-2 nova_compute[232428]: 2025-11-29 08:32:43.944 232432 DEBUG nova.compute.manager [req-6b8fcf0d-3a4f-4a28-98ae-cf5593800e7c req-e93e7fc6-2f02-44fb-aad5-072d03f4291e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:32:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 08:32:43 compute-2 podman[308436]: 2025-11-29 08:32:43.995186163 +0000 UTC m=+0.243005205 container remove 7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.002 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d15928fd-470d-40fc-a2e9-6e6d7cb62056]: (4, ('Sat Nov 29 08:32:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc)\n7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc\nSat Nov 29 08:32:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc)\n7b308bce7e8e1e12cee017a5689fddacdf76c9872def251f5aa371a127ca62fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.005 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[525bfe77-f27b-4ada-bdcf-9a18eafac55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.006 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.008 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:44 compute-2 kernel: tap4c541784-a0: left promiscuous mode
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.024 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.026 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0237bd7a-245a-4ddb-9782-dda6fd1a8e69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.044 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f75d5037-967a-4c3e-95ac-53db993a4d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.045 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[703eb878-bc75-4195-a565-7027f7428f88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.066 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d30b77-3360-4a24-8697-66537ebf6b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810318, 'reachable_time': 23222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308452, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.068 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:32:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:32:44.069 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f40cc5b5-d101-489b-93c0-dbc5d7fa5154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:32:44 compute-2 systemd[1]: run-netns-ovnmeta\x2d4c541784\x2da3aa\x2d4c55\x2da753\x2da31504941937.mount: Deactivated successfully.
Nov 29 08:32:44 compute-2 ceph-mon[77138]: pgmap v2882: 305 pgs: 305 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 19 KiB/s wr, 98 op/s
Nov 29 08:32:44 compute-2 ceph-mon[77138]: osdmap e386: 3 total, 3 up, 3 in
Nov 29 08:32:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:44.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:44 compute-2 sshd-session[308454]: Invalid user user from 45.148.10.240 port 34020
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.844 232432 INFO nova.virt.libvirt.driver [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deleting instance files /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_del
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.845 232432 INFO nova.virt.libvirt.driver [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deletion of /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_del complete
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.904 232432 INFO nova.compute.manager [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Took 1.57 seconds to destroy the instance on the hypervisor.
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.905 232432 DEBUG oslo.service.loopingcall [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.905 232432 DEBUG nova.compute.manager [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:32:44 compute-2 nova_compute[232428]: 2025-11-29 08:32:44.905 232432 DEBUG nova.network.neutron [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:32:44 compute-2 sshd-session[308454]: Connection closed by invalid user user 45.148.10.240 port 34020 [preauth]
Nov 29 08:32:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1974619220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1974619220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:45 compute-2 podman[308456]: 2025-11-29 08:32:45.679528942 +0000 UTC m=+0.074385647 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.096 232432 DEBUG nova.network.neutron [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.115 232432 INFO nova.compute.manager [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Took 1.21 seconds to deallocate network for instance.
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.160 232432 DEBUG nova.compute.manager [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.160 232432 DEBUG oslo_concurrency.lockutils [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.161 232432 DEBUG oslo_concurrency.lockutils [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.162 232432 DEBUG oslo_concurrency.lockutils [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.162 232432 DEBUG nova.compute.manager [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.163 232432 WARNING nova.compute.manager [req-43b00860-20e3-4bca-ae15-e055768068a3 req-1df3c852-b8bd-472b-b6aa-bc79458b5021 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received unexpected event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with vm_state active and task_state deleting.
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.178 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.178 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.229 232432 DEBUG oslo_concurrency.processutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.302 232432 DEBUG nova.compute.manager [req-c0f9f7f3-d67c-4315-ac49-3133b2a00d8b req-db8a7db1-99f4-46e2-af31-b652bde91e82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-deleted-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:32:46 compute-2 ceph-mon[77138]: pgmap v2884: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 19 KiB/s wr, 65 op/s
Nov 29 08:32:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:46.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:32:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3748187535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.747 232432 DEBUG oslo_concurrency.processutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.759 232432 DEBUG nova.compute.provider_tree [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.778 232432 DEBUG nova.scheduler.client.report [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.804 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.916 232432 INFO nova.scheduler.client.report [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Deleted allocations for instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56
Nov 29 08:32:46 compute-2 nova_compute[232428]: 2025-11-29 08:32:46.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:47 compute-2 nova_compute[232428]: 2025-11-29 08:32:47.035 232432 DEBUG oslo_concurrency.lockutils [None req-999765c8-e981-446d-b56e-5eb8cc87cf7a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:32:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3748187535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:32:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:48 compute-2 nova_compute[232428]: 2025-11-29 08:32:48.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:48 compute-2 nova_compute[232428]: 2025-11-29 08:32:48.367 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:48 compute-2 ceph-mon[77138]: pgmap v2885: 305 pgs: 305 active+clean; 158 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.7 KiB/s wr, 73 op/s
Nov 29 08:32:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:48 compute-2 nova_compute[232428]: 2025-11-29 08:32:48.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:32:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611268167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:32:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611268167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:49 compute-2 ceph-mon[77138]: pgmap v2886: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 91 op/s
Nov 29 08:32:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1611268167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:32:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1611268167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:32:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:51 compute-2 ceph-mon[77138]: pgmap v2887: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 29 08:32:51 compute-2 nova_compute[232428]: 2025-11-29 08:32:51.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:52 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 08:32:53 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 08:32:53 compute-2 nova_compute[232428]: 2025-11-29 08:32:53.126 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405158.1242454, dc8140a9-7bef-42f8-867c-13e29f022673 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:32:53 compute-2 nova_compute[232428]: 2025-11-29 08:32:53.126 232432 INFO nova.compute.manager [-] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] VM Stopped (Lifecycle Event)
Nov 29 08:32:53 compute-2 radosgw[83394]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 08:32:53 compute-2 nova_compute[232428]: 2025-11-29 08:32:53.237 232432 DEBUG nova.compute.manager [None req-e56279f9-be8a-4182-bd94-515b479e67ae - - - - - -] [instance: dc8140a9-7bef-42f8-867c-13e29f022673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:53.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:53 compute-2 nova_compute[232428]: 2025-11-29 08:32:53.626 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:53 compute-2 podman[308502]: 2025-11-29 08:32:53.678747652 +0000 UTC m=+0.075653015 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:32:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:54.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:55 compute-2 ceph-mon[77138]: pgmap v2888: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 29 08:32:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:56 compute-2 ceph-mon[77138]: pgmap v2889: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.6 KiB/s wr, 78 op/s
Nov 29 08:32:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:32:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:56.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:32:56 compute-2 nova_compute[232428]: 2025-11-29 08:32:56.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:57 compute-2 nova_compute[232428]: 2025-11-29 08:32:57.235 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:32:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:32:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:57.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:32:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:32:57 compute-2 ceph-mon[77138]: pgmap v2890: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.3 KiB/s wr, 86 op/s
Nov 29 08:32:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:58.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:58 compute-2 nova_compute[232428]: 2025-11-29 08:32:58.571 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405163.5705762, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:32:58 compute-2 nova_compute[232428]: 2025-11-29 08:32:58.572 232432 INFO nova.compute.manager [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Stopped (Lifecycle Event)
Nov 29 08:32:58 compute-2 nova_compute[232428]: 2025-11-29 08:32:58.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:32:58 compute-2 nova_compute[232428]: 2025-11-29 08:32:58.751 232432 DEBUG nova.compute.manager [None req-08a42ff1-c7d1-44bf-8d04-b2819dafed01 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:32:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:32:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:32:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:59.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:32:59 compute-2 nova_compute[232428]: 2025-11-29 08:32:59.883 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:32:59 compute-2 nova_compute[232428]: 2025-11-29 08:32:59.884 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:32:59 compute-2 ceph-mon[77138]: pgmap v2891: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 2.3 KiB/s wr, 116 op/s
Nov 29 08:32:59 compute-2 nova_compute[232428]: 2025-11-29 08:32:59.952 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:33:00 compute-2 nova_compute[232428]: 2025-11-29 08:33:00.063 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:00 compute-2 nova_compute[232428]: 2025-11-29 08:33:00.063 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:00 compute-2 nova_compute[232428]: 2025-11-29 08:33:00.076 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:33:00 compute-2 nova_compute[232428]: 2025-11-29 08:33:00.077 232432 INFO nova.compute.claims [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:33:00 compute-2 nova_compute[232428]: 2025-11-29 08:33:00.492 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:33:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2574267808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.001 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.008 232432 DEBUG nova.compute.provider_tree [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.390 232432 DEBUG nova.scheduler.client.report [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.452 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.453 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.535 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.536 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.552 232432 INFO nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.565 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:33:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:01.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.658 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.660 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.660 232432 INFO nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating image(s)
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.693 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.721 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.751 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.755 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.831 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.832 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.833 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.833 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.862 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.867 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f7256761-4dda-41d4-bd20-f34c7a8478ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.906 232432 DEBUG nova.policy [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6de0587a3794e30acefc687f435d388', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '37972b49ddde4c519c6523d2ea1569b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:33:01 compute-2 nova_compute[232428]: 2025-11-29 08:33:01.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:02 compute-2 ceph-mon[77138]: pgmap v2892: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 426 B/s wr, 110 op/s
Nov 29 08:33:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2574267808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:02 compute-2 nova_compute[232428]: 2025-11-29 08:33:02.490 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f7256761-4dda-41d4-bd20-f34c7a8478ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:02 compute-2 nova_compute[232428]: 2025-11-29 08:33:02.600 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] resizing rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.020 232432 DEBUG nova.objects.instance [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'migration_context' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.034 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.034 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Ensure instance console log exists: /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.035 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.035 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.035 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:03 compute-2 sudo[308718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:03 compute-2 sudo[308718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:03 compute-2 sudo[308718]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:03 compute-2 sudo[308743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:03 compute-2 sudo[308743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:03 compute-2 sudo[308743]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:03.336 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:03.337 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:03.337 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.345 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Successfully created port: 3da501a9-b467-445e-8d0c-b03956d7a1b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:33:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:03.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:03 compute-2 nova_compute[232428]: 2025-11-29 08:33:03.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:04 compute-2 ceph-mon[77138]: pgmap v2893: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 255 B/s wr, 107 op/s
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.200 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.200 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.475 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:33:04 compute-2 nova_compute[232428]: 2025-11-29 08:33:04.475 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:33:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.089 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Successfully updated port: 3da501a9-b467-445e-8d0c-b03956d7a1b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.116 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.116 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.117 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.256 232432 DEBUG nova.compute.manager [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.256 232432 DEBUG nova.compute.manager [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing instance network info cache due to event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.256 232432 DEBUG oslo_concurrency.lockutils [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:33:05 compute-2 nova_compute[232428]: 2025-11-29 08:33:05.336 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:33:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:05.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:05 compute-2 podman[308769]: 2025-11-29 08:33:05.726199559 +0000 UTC m=+0.117911751 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 29 08:33:06 compute-2 ceph-mon[77138]: pgmap v2894: 305 pgs: 305 active+clean; 136 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 684 KiB/s wr, 166 op/s
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.268 232432 DEBUG nova.network.neutron [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.289 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.290 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance network_info: |[{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.291 232432 DEBUG oslo_concurrency.lockutils [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.292 232432 DEBUG nova.network.neutron [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.297 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start _get_guest_xml network_info=[{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.305 232432 WARNING nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.316 232432 DEBUG nova.virt.libvirt.host [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.317 232432 DEBUG nova.virt.libvirt.host [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.322 232432 DEBUG nova.virt.libvirt.host [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.323 232432 DEBUG nova.virt.libvirt.host [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.325 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.325 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.326 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.327 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.327 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.328 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.328 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.329 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.329 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.330 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.330 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.331 232432 DEBUG nova.virt.hardware [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.336 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:06.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:33:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3847347626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.888 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.922 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.926 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:06 compute-2 nova_compute[232428]: 2025-11-29 08:33:06.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3847347626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:33:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3549475830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.379 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.382 232432 DEBUG nova.virt.libvirt.vif [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLhBaWnE1HT/mfoEXjDGlphQpqM+jzqDgGTCm5uAntITZ58l1wGQewG1RN4NYQpvce0WyCRcwFUsBq8uNucz7UAquvABOF3BuO6/PuBr//qzuFWP1XXEpnTWT0qE2Cj+A==',key_name='tempest-keypair-1730006032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.383 232432 DEBUG nova.network.os_vif_util [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.384 232432 DEBUG nova.network.os_vif_util [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.387 232432 DEBUG nova.objects.instance [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.429 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <uuid>f7256761-4dda-41d4-bd20-f34c7a8478ef</uuid>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <name>instance-000000ae</name>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-206995635</nova:name>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:33:06</nova:creationTime>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:user uuid="e6de0587a3794e30acefc687f435d388">tempest-AttachVolumeShelveTestJSON-1751768432-project-member</nova:user>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:project uuid="37972b49ddde4c519c6523d2ea1569b5">tempest-AttachVolumeShelveTestJSON-1751768432</nova:project>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <nova:port uuid="3da501a9-b467-445e-8d0c-b03956d7a1b2">
Nov 29 08:33:07 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <system>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="serial">f7256761-4dda-41d4-bd20-f34c7a8478ef</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="uuid">f7256761-4dda-41d4-bd20-f34c7a8478ef</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </system>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <os>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </os>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <features>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </features>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk">
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </source>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config">
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </source>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:33:07 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:61:8b:ed"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <target dev="tap3da501a9-b4"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/console.log" append="off"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <video>
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </video>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:33:07 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:33:07 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:33:07 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:33:07 compute-2 nova_compute[232428]: </domain>
Nov 29 08:33:07 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.431 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Preparing to wait for external event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.432 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.432 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.433 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.434 232432 DEBUG nova.virt.libvirt.vif [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLhBaWnE1HT/mfoEXjDGlphQpqM+jzqDgGTCm5uAntITZ58l1wGQewG1RN4NYQpvce0WyCRcwFUsBq8uNucz7UAquvABOF3BuO6/PuBr//qzuFWP1XXEpnTWT0qE2Cj+A==',key_name='tempest-keypair-1730006032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.435 232432 DEBUG nova.network.os_vif_util [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.436 232432 DEBUG nova.network.os_vif_util [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.437 232432 DEBUG os_vif [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.439 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.439 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.440 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.446 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3da501a9-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.447 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3da501a9-b4, col_values=(('external_ids', {'iface-id': '3da501a9-b467-445e-8d0c-b03956d7a1b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:8b:ed', 'vm-uuid': 'f7256761-4dda-41d4-bd20-f34c7a8478ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:07 compute-2 NetworkManager[48993]: <info>  [1764405187.4508] manager: (tap3da501a9-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.462 232432 INFO os_vif [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4')
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.476 232432 DEBUG nova.network.neutron [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updated VIF entry in instance network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.476 232432 DEBUG nova.network.neutron [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.490 232432 DEBUG oslo_concurrency.lockutils [req-b9b17d23-e16a-46df-b10f-504ee3641249 req-2f404436-d2ef-4305-8caf-a8ffa01862be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.516 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.517 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.517 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No VIF found with MAC fa:16:3e:61:8b:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.518 232432 INFO nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Using config drive
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.556 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.934 232432 INFO nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating config drive at /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config
Nov 29 08:33:07 compute-2 nova_compute[232428]: 2025-11-29 08:33:07.948 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygfeivc2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.119 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygfeivc2" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.171 232432 DEBUG nova.storage.rbd_utils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.177 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:08 compute-2 ceph-mon[77138]: pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 29 08:33:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3549475830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.470 232432 DEBUG oslo_concurrency.processutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.473 232432 INFO nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deleting local config drive /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config because it was imported into RBD.
Nov 29 08:33:08 compute-2 kernel: tap3da501a9-b4: entered promiscuous mode
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.5494] manager: (tap3da501a9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 08:33:08 compute-2 ovn_controller[134375]: 2025-11-29T08:33:08Z|00813|binding|INFO|Claiming lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 for this chassis.
Nov 29 08:33:08 compute-2 ovn_controller[134375]: 2025-11-29T08:33:08Z|00814|binding|INFO|3da501a9-b467-445e-8d0c-b03956d7a1b2: Claiming fa:16:3e:61:8b:ed 10.100.0.12
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.550 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.572 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:ed 10.100.0.12'], port_security=['fa:16:3e:61:8b:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f7256761-4dda-41d4-bd20-f34c7a8478ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26306486-d603-420f-a001-1b03f9962e31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3da501a9-b467-445e-8d0c-b03956d7a1b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.574 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3da501a9-b467-445e-8d0c-b03956d7a1b2 in datapath 4c541784-a3aa-4c55-a753-a31504941937 bound to our chassis
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.576 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:33:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:08.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.599 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[65583eea-2af4-4b1d-8c21-beec9951506c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.601 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c541784-a1 in ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.604 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c541784-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.604 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[48ca3e7a-f077-43ee-a721-57f62375492f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.606 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7532bfcd-cbc3-47ab-93cc-578efcb38eb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 systemd-machined[194747]: New machine qemu-85-instance-000000ae.
Nov 29 08:33:08 compute-2 systemd[1]: Started Virtual Machine qemu-85-instance-000000ae.
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.630 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[266acaca-8b4b-4df3-a25d-e3da5f287565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 systemd-udevd[308935]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.653 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.660 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.662 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd40f87-7b39-4a44-8a8f-350db8b2abe6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_controller[134375]: 2025-11-29T08:33:08Z|00815|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 ovn-installed in OVS
Nov 29 08:33:08 compute-2 ovn_controller[134375]: 2025-11-29T08:33:08Z|00816|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 up in Southbound
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.6648] device (tap3da501a9-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.664 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.6664] device (tap3da501a9-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.705 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6d7db5-50ee-4d44-9778-68743d086c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 systemd-udevd[308938]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.7140] manager: (tap4c541784-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.712 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8a32bb-9f5d-4cf9-abd7-e1cf6e922e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.751 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[edc7355b-4a70-4c13-b2eb-6668a0e8812f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.755 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[464a8ba2-56ab-4fda-b6d1-f155c1f9c90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.7820] device (tap4c541784-a0): carrier: link connected
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.788 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b67f01-2784-46c1-a049-5c75de6e6587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.807 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3fd469-3586-46dd-87e8-4db67d31e312]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815505, 'reachable_time': 34923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308966, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.828 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[12998ab7-2a75-4e84-b61c-795000caffba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:9545'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815505, 'tstamp': 815505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308974, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.848 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0cf134-1f5e-4742-933b-161f80ff63e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815505, 'reachable_time': 34923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308984, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.899 232432 DEBUG nova.compute.manager [req-e2801817-ec66-404e-b6a8-9d3cb443993b req-6a7d6a39-85e6-4986-83d8-7090b2ed7ac8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.900 232432 DEBUG oslo_concurrency.lockutils [req-e2801817-ec66-404e-b6a8-9d3cb443993b req-6a7d6a39-85e6-4986-83d8-7090b2ed7ac8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.900 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a1958a12-f660-43f5-8744-b0f195c91e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.900 232432 DEBUG oslo_concurrency.lockutils [req-e2801817-ec66-404e-b6a8-9d3cb443993b req-6a7d6a39-85e6-4986-83d8-7090b2ed7ac8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.901 232432 DEBUG oslo_concurrency.lockutils [req-e2801817-ec66-404e-b6a8-9d3cb443993b req-6a7d6a39-85e6-4986-83d8-7090b2ed7ac8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.902 232432 DEBUG nova.compute.manager [req-e2801817-ec66-404e-b6a8-9d3cb443993b req-6a7d6a39-85e6-4986-83d8-7090b2ed7ac8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Processing event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.987 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a36b35dc-d39c-41dd-8fd2-b0405a1f910b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.988 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.989 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.989 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c541784-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.991 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 kernel: tap4c541784-a0: entered promiscuous mode
Nov 29 08:33:08 compute-2 NetworkManager[48993]: <info>  [1764405188.9926] manager: (tap4c541784-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.994 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:08.996 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c541784-a0, col_values=(('external_ids', {'iface-id': '7f1f6d69-4406-4e27-a503-d839c5cccd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:08 compute-2 ovn_controller[134375]: 2025-11-29T08:33:08Z|00817|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:33:08 compute-2 nova_compute[232428]: 2025-11-29 08:33:08.998 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.031 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:09.033 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:09.034 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[191d417b-82ed-49f5-a516-969c516a2ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:09.035 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:33:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:09.036 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'env', 'PROCESS_TAG=haproxy-4c541784-a3aa-4c55-a753-a31504941937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c541784-a3aa-4c55-a753-a31504941937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.197 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405189.1971202, f7256761-4dda-41d4-bd20-f34c7a8478ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.198 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Started (Lifecycle Event)
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.202 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.210 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.215 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance spawned successfully.
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.215 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.227 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.232 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.276 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.277 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405189.1972752, f7256761-4dda-41d4-bd20-f34c7a8478ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.277 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Paused (Lifecycle Event)
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.284 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.285 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.286 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.286 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.287 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.287 232432 DEBUG nova.virt.libvirt.driver [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.335 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.340 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405189.209719, f7256761-4dda-41d4-bd20-f34c7a8478ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.340 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Resumed (Lifecycle Event)
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.372 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.384 232432 INFO nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Took 7.73 seconds to spawn the instance on the hypervisor.
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.384 232432 DEBUG nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.386 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.416 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.454 232432 INFO nova.compute.manager [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Took 9.43 seconds to build instance.
Nov 29 08:33:09 compute-2 nova_compute[232428]: 2025-11-29 08:33:09.471 232432 DEBUG oslo_concurrency.lockutils [None req-45b05f4d-ac4d-43c8-a0fe-1b6fa3ad095a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:09 compute-2 podman[309042]: 2025-11-29 08:33:09.497177279 +0000 UTC m=+0.078546345 container create 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:33:09 compute-2 podman[309042]: 2025-11-29 08:33:09.454821481 +0000 UTC m=+0.036190547 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:33:09 compute-2 systemd[1]: Started libpod-conmon-5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0.scope.
Nov 29 08:33:09 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:33:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:09.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8348006e5730fe91934168b5af929f832c39acec92557447deff7125971cd9f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:33:09 compute-2 podman[309042]: 2025-11-29 08:33:09.621876881 +0000 UTC m=+0.203245997 container init 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:33:09 compute-2 podman[309042]: 2025-11-29 08:33:09.634156742 +0000 UTC m=+0.215525798 container start 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:33:09 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [NOTICE]   (309061) : New worker (309063) forked
Nov 29 08:33:09 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [NOTICE]   (309061) : Loading success.
Nov 29 08:33:10 compute-2 nova_compute[232428]: 2025-11-29 08:33:10.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:10 compute-2 ceph-mon[77138]: pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 29 08:33:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:10.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.010 232432 DEBUG nova.compute.manager [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.011 232432 DEBUG oslo_concurrency.lockutils [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.012 232432 DEBUG oslo_concurrency.lockutils [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.012 232432 DEBUG oslo_concurrency.lockutils [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.013 232432 DEBUG nova.compute.manager [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.013 232432 WARNING nova.compute.manager [req-09b59c74-c58b-4cdf-bb5e-fc258aa6f596 req-8aebdb07-6177-41d2-991c-d9434e344c7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received unexpected event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with vm_state active and task_state None.
Nov 29 08:33:11 compute-2 NetworkManager[48993]: <info>  [1764405191.5793] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 08:33:11 compute-2 NetworkManager[48993]: <info>  [1764405191.5802] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:11.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.739 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:11 compute-2 ovn_controller[134375]: 2025-11-29T08:33:11Z|00818|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:11 compute-2 nova_compute[232428]: 2025-11-29 08:33:11.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:12 compute-2 ceph-mon[77138]: pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 29 08:33:12 compute-2 nova_compute[232428]: 2025-11-29 08:33:12.450 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:12.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:13 compute-2 nova_compute[232428]: 2025-11-29 08:33:13.094 232432 DEBUG nova.compute.manager [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:13 compute-2 nova_compute[232428]: 2025-11-29 08:33:13.094 232432 DEBUG nova.compute.manager [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing instance network info cache due to event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:33:13 compute-2 nova_compute[232428]: 2025-11-29 08:33:13.095 232432 DEBUG oslo_concurrency.lockutils [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:33:13 compute-2 nova_compute[232428]: 2025-11-29 08:33:13.095 232432 DEBUG oslo_concurrency.lockutils [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:33:13 compute-2 nova_compute[232428]: 2025-11-29 08:33:13.095 232432 DEBUG nova.network.neutron [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:33:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3868432709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/872567019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:14 compute-2 ceph-mon[77138]: pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 764 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 29 08:33:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3026191862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:14.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:15 compute-2 nova_compute[232428]: 2025-11-29 08:33:15.192 232432 DEBUG nova.network.neutron [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updated VIF entry in instance network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:33:15 compute-2 nova_compute[232428]: 2025-11-29 08:33:15.193 232432 DEBUG nova.network.neutron [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:33:15 compute-2 nova_compute[232428]: 2025-11-29 08:33:15.209 232432 DEBUG oslo_concurrency.lockutils [req-55e2203a-26e4-42f9-8768-04fb91511975 req-ea5a79d4-17cc-4749-a9fb-ccdf96b1d7fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:33:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:16 compute-2 nova_compute[232428]: 2025-11-29 08:33:16.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:16 compute-2 nova_compute[232428]: 2025-11-29 08:33:16.201 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:16 compute-2 ceph-mon[77138]: pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Nov 29 08:33:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:16.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:16 compute-2 podman[309077]: 2025-11-29 08:33:16.701435898 +0000 UTC m=+0.097930858 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:33:16 compute-2 nova_compute[232428]: 2025-11-29 08:33:16.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:17 compute-2 nova_compute[232428]: 2025-11-29 08:33:17.204 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:17 compute-2 nova_compute[232428]: 2025-11-29 08:33:17.452 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:18 compute-2 nova_compute[232428]: 2025-11-29 08:33:18.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:18 compute-2 nova_compute[232428]: 2025-11-29 08:33:18.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:33:18 compute-2 ceph-mon[77138]: pgmap v2900: 305 pgs: 305 active+clean; 185 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 08:33:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3608195406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:18.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.241 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.242 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3236600486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:19.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:33:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2751484386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.730 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.828 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:33:19 compute-2 nova_compute[232428]: 2025-11-29 08:33:19.829 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.022 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.023 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3975MB free_disk=20.946605682373047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.023 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.024 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.160 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance f7256761-4dda-41d4-bd20-f34c7a8478ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.160 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.160 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.287 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:33:20 compute-2 ceph-mon[77138]: pgmap v2901: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 29 08:33:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2751484386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2092077475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:33:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2157841827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.784 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.796 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.816 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.854 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:33:20 compute-2 nova_compute[232428]: 2025-11-29 08:33:20.855 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:21 compute-2 ceph-mon[77138]: pgmap v2902: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 08:33:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2157841827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4064002057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:21.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:21 compute-2 nova_compute[232428]: 2025-11-29 08:33:21.930 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:22 compute-2 nova_compute[232428]: 2025-11-29 08:33:22.454 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:22 compute-2 nova_compute[232428]: 2025-11-29 08:33:22.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:23 compute-2 sudo[309143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:23 compute-2 sudo[309143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:23 compute-2 sudo[309143]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:23 compute-2 sudo[309168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:23 compute-2 sudo[309168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:23 compute-2 sudo[309168]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:23.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:24 compute-2 ovn_controller[134375]: 2025-11-29T08:33:24Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:8b:ed 10.100.0.12
Nov 29 08:33:24 compute-2 ovn_controller[134375]: 2025-11-29T08:33:24Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:8b:ed 10.100.0.12
Nov 29 08:33:24 compute-2 ceph-mon[77138]: pgmap v2903: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 29 08:33:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:24 compute-2 podman[309194]: 2025-11-29 08:33:24.689254673 +0000 UTC m=+0.088615659 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:33:25 compute-2 sudo[309214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:25 compute-2 sudo[309214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:25 compute-2 sudo[309214]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:25 compute-2 sudo[309239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:33:25 compute-2 sudo[309239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:25 compute-2 sudo[309239]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:25 compute-2 sudo[309264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:25 compute-2 sudo[309264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:25 compute-2 sudo[309264]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:25.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:25 compute-2 sudo[309289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:33:25 compute-2 sudo[309289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:26 compute-2 sudo[309289]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:26 compute-2 nova_compute[232428]: 2025-11-29 08:33:26.932 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:27 compute-2 ceph-mon[77138]: pgmap v2904: 305 pgs: 305 active+clean; 233 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Nov 29 08:33:27 compute-2 nova_compute[232428]: 2025-11-29 08:33:27.457 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: pgmap v2905: 305 pgs: 305 active+clean; 244 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:33:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2826897057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:33:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:33:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2826897057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:33:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:28.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2826897057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:33:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2826897057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:33:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:29.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:30 compute-2 ceph-mon[77138]: pgmap v2906: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Nov 29 08:33:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:31 compute-2 nova_compute[232428]: 2025-11-29 08:33:31.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:32 compute-2 ceph-mon[77138]: pgmap v2907: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Nov 29 08:33:32 compute-2 nova_compute[232428]: 2025-11-29 08:33:32.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:32.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:34 compute-2 ceph-mon[77138]: pgmap v2908: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Nov 29 08:33:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:34 compute-2 sudo[309350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:34 compute-2 sudo[309350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:34 compute-2 sudo[309350]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:35 compute-2 sudo[309375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:33:35 compute-2 sudo[309375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:35 compute-2 sudo[309375]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:35.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:33:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:33:36 compute-2 ceph-mon[77138]: pgmap v2909: 305 pgs: 305 active+clean; 249 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Nov 29 08:33:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:36.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:36 compute-2 podman[309401]: 2025-11-29 08:33:36.723280562 +0000 UTC m=+0.126849348 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 08:33:36 compute-2 nova_compute[232428]: 2025-11-29 08:33:36.938 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:37 compute-2 nova_compute[232428]: 2025-11-29 08:33:37.462 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:38 compute-2 ceph-mon[77138]: pgmap v2910: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 103 op/s
Nov 29 08:33:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:33:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/485713177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/485713177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 08:33:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:39.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:40.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 08:33:40 compute-2 ceph-mon[77138]: pgmap v2911: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 08:33:40 compute-2 ceph-mon[77138]: osdmap e387: 3 total, 3 up, 3 in
Nov 29 08:33:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:41 compute-2 nova_compute[232428]: 2025-11-29 08:33:41.940 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:42 compute-2 ceph-mon[77138]: pgmap v2913: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 119 op/s
Nov 29 08:33:42 compute-2 ceph-mon[77138]: osdmap e388: 3 total, 3 up, 3 in
Nov 29 08:33:42 compute-2 nova_compute[232428]: 2025-11-29 08:33:42.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:42.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 08:33:43 compute-2 sudo[309430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:43 compute-2 sudo[309430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:43 compute-2 sudo[309430]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:43 compute-2 sudo[309455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:33:43 compute-2 sudo[309455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:33:43 compute-2 sudo[309455]: pam_unix(sudo:session): session closed for user root
Nov 29 08:33:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 08:33:44 compute-2 ceph-mon[77138]: osdmap e389: 3 total, 3 up, 3 in
Nov 29 08:33:44 compute-2 ceph-mon[77138]: pgmap v2916: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 5.0 MiB/s wr, 126 op/s
Nov 29 08:33:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:45 compute-2 ceph-mon[77138]: osdmap e390: 3 total, 3 up, 3 in
Nov 29 08:33:45 compute-2 nova_compute[232428]: 2025-11-29 08:33:45.973 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:45 compute-2 nova_compute[232428]: 2025-11-29 08:33:45.974 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:45 compute-2 nova_compute[232428]: 2025-11-29 08:33:45.974 232432 INFO nova.compute.manager [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Shelving
Nov 29 08:33:45 compute-2 nova_compute[232428]: 2025-11-29 08:33:45.995 232432 DEBUG nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:33:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:46.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:46 compute-2 nova_compute[232428]: 2025-11-29 08:33:46.943 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:47 compute-2 ceph-mon[77138]: pgmap v2918: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 08:33:47 compute-2 nova_compute[232428]: 2025-11-29 08:33:47.467 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:47.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:47 compute-2 podman[309482]: 2025-11-29 08:33:47.662590754 +0000 UTC m=+0.065624234 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 08:33:48 compute-2 ceph-mon[77138]: pgmap v2919: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Nov 29 08:33:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3825895591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:33:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:48.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:33:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868452491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 08:33:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/868452491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:33:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:49.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:50 compute-2 nova_compute[232428]: 2025-11-29 08:33:50.020 232432 INFO nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance shutdown successfully after 4 seconds.
Nov 29 08:33:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:50 compute-2 ceph-mon[77138]: pgmap v2920: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Nov 29 08:33:50 compute-2 ceph-mon[77138]: osdmap e391: 3 total, 3 up, 3 in
Nov 29 08:33:50 compute-2 kernel: tap3da501a9-b4 (unregistering): left promiscuous mode
Nov 29 08:33:50 compute-2 NetworkManager[48993]: <info>  [1764405230.7568] device (tap3da501a9-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:33:50 compute-2 ovn_controller[134375]: 2025-11-29T08:33:50Z|00819|binding|INFO|Releasing lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 from this chassis (sb_readonly=0)
Nov 29 08:33:50 compute-2 ovn_controller[134375]: 2025-11-29T08:33:50Z|00820|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 down in Southbound
Nov 29 08:33:50 compute-2 ovn_controller[134375]: 2025-11-29T08:33:50Z|00821|binding|INFO|Removing iface tap3da501a9-b4 ovn-installed in OVS
Nov 29 08:33:50 compute-2 nova_compute[232428]: 2025-11-29 08:33:50.773 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:50 compute-2 nova_compute[232428]: 2025-11-29 08:33:50.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:50.787 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:ed 10.100.0.12'], port_security=['fa:16:3e:61:8b:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f7256761-4dda-41d4-bd20-f34c7a8478ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26306486-d603-420f-a001-1b03f9962e31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3da501a9-b467-445e-8d0c-b03956d7a1b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:33:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:50.793 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3da501a9-b467-445e-8d0c-b03956d7a1b2 in datapath 4c541784-a3aa-4c55-a753-a31504941937 unbound from our chassis
Nov 29 08:33:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:50.797 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c541784-a3aa-4c55-a753-a31504941937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:33:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:50.799 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb688a9-63d6-4d19-ac79-fc2a5b4ef5c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:50.801 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace which is not needed anymore
Nov 29 08:33:50 compute-2 nova_compute[232428]: 2025-11-29 08:33:50.823 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:50 compute-2 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Nov 29 08:33:50 compute-2 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ae.scope: Consumed 15.888s CPU time.
Nov 29 08:33:50 compute-2 systemd-machined[194747]: Machine qemu-85-instance-000000ae terminated.
Nov 29 08:33:51 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [NOTICE]   (309061) : haproxy version is 2.8.14-c23fe91
Nov 29 08:33:51 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [NOTICE]   (309061) : path to executable is /usr/sbin/haproxy
Nov 29 08:33:51 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [WARNING]  (309061) : Exiting Master process...
Nov 29 08:33:51 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [ALERT]    (309061) : Current worker (309063) exited with code 143 (Terminated)
Nov 29 08:33:51 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[309057]: [WARNING]  (309061) : All workers exited. Exiting... (0)
Nov 29 08:33:51 compute-2 systemd[1]: libpod-5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0.scope: Deactivated successfully.
Nov 29 08:33:51 compute-2 podman[309530]: 2025-11-29 08:33:51.011180878 +0000 UTC m=+0.058515992 container died 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:33:51 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0-userdata-shm.mount: Deactivated successfully.
Nov 29 08:33:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-8348006e5730fe91934168b5af929f832c39acec92557447deff7125971cd9f6-merged.mount: Deactivated successfully.
Nov 29 08:33:51 compute-2 podman[309530]: 2025-11-29 08:33:51.048724786 +0000 UTC m=+0.096059900 container cleanup 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.066 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance destroyed successfully.
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.068 232432 DEBUG nova.objects.instance [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:33:51 compute-2 systemd[1]: libpod-conmon-5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0.scope: Deactivated successfully.
Nov 29 08:33:51 compute-2 podman[309565]: 2025-11-29 08:33:51.134286589 +0000 UTC m=+0.054393304 container remove 5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.142 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c538108e-4ad9-4ea2-9c84-4a5133203f4f]: (4, ('Sat Nov 29 08:33:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0)\n5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0\nSat Nov 29 08:33:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0)\n5dfbf09f26148f986bb1c2657e0cb68c9247315e05a2232ad1e6b9127b5974c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.144 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb0b438-db7a-4bf4-8d44-e485f32a5175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.144 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.146 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:51 compute-2 kernel: tap4c541784-a0: left promiscuous mode
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.165 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.168 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e3418a59-74c2-43d3-8890-97c3d0b87c33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.192 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e1545ae1-4d63-4291-bc6f-544733e45924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.194 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ca0433-646b-4147-a4a3-baf08727b9f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a00b1ba-7d2e-4407-9f01-06254ce42611]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815496, 'reachable_time': 17268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309589, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 systemd[1]: run-netns-ovnmeta\x2d4c541784\x2da3aa\x2d4c55\x2da753\x2da31504941937.mount: Deactivated successfully.
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.218 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:33:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:51.218 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8edd77-9226-4237-ac97-69b3a8e3ffa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.325 232432 INFO nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Beginning cold snapshot process
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.509 232432 DEBUG nova.virt.libvirt.imagebackend [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 08:33:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.729 232432 DEBUG nova.storage.rbd_utils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] creating snapshot(766c0424250646f6ac0edb7f27ab827a) on rbd image(f7256761-4dda-41d4-bd20-f34c7a8478ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:33:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 08:33:51 compute-2 ceph-mon[77138]: pgmap v2922: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 63 op/s
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.900 232432 DEBUG nova.compute.manager [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.901 232432 DEBUG oslo_concurrency.lockutils [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.901 232432 DEBUG oslo_concurrency.lockutils [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.901 232432 DEBUG oslo_concurrency.lockutils [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.902 232432 DEBUG nova.compute.manager [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.902 232432 WARNING nova.compute.manager [req-01a3c9f4-0c39-4312-83fb-a8b75048b176 req-a3b70f45-1258-45f6-a6cb-6e76dd4073e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received unexpected event network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with vm_state active and task_state shelving_image_uploading.
Nov 29 08:33:51 compute-2 nova_compute[232428]: 2025-11-29 08:33:51.945 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:52 compute-2 nova_compute[232428]: 2025-11-29 08:33:52.470 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:53 compute-2 ceph-mon[77138]: osdmap e392: 3 total, 3 up, 3 in
Nov 29 08:33:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 08:33:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:53 compute-2 nova_compute[232428]: 2025-11-29 08:33:53.684 232432 DEBUG nova.storage.rbd_utils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] cloning vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk@766c0424250646f6ac0edb7f27ab827a to images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:33:53 compute-2 nova_compute[232428]: 2025-11-29 08:33:53.824 232432 DEBUG nova.storage.rbd_utils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] flattening images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.084 232432 DEBUG nova.compute.manager [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.085 232432 DEBUG oslo_concurrency.lockutils [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.085 232432 DEBUG oslo_concurrency.lockutils [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.086 232432 DEBUG oslo_concurrency.lockutils [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.087 232432 DEBUG nova.compute.manager [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.088 232432 WARNING nova.compute.manager [req-06b0f52c-af5c-43f8-869a-d164217acea1 req-585fcee2-b44b-4a6c-917b-018c03ebc92a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received unexpected event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with vm_state active and task_state shelving_image_uploading.
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:54.104 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:33:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:33:54.106 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.309 232432 DEBUG nova.storage.rbd_utils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] removing snapshot(766c0424250646f6ac0edb7f27ab827a) on rbd image(f7256761-4dda-41d4-bd20-f34c7a8478ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:33:54 compute-2 ceph-mon[77138]: pgmap v2924: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 29 08:33:54 compute-2 ceph-mon[77138]: osdmap e393: 3 total, 3 up, 3 in
Nov 29 08:33:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 08:33:54 compute-2 nova_compute[232428]: 2025-11-29 08:33:54.679 232432 DEBUG nova.storage.rbd_utils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] creating snapshot(snap) on rbd image(5e39b3d9-9c5d-4d2b-801f-dfff461af72e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:33:55 compute-2 ceph-mon[77138]: osdmap e394: 3 total, 3 up, 3 in
Nov 29 08:33:55 compute-2 ceph-mon[77138]: pgmap v2927: 305 pgs: 305 active+clean; 410 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 77 op/s
Nov 29 08:33:55 compute-2 podman[309734]: 2025-11-29 08:33:55.654141325 +0000 UTC m=+0.055972383 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 08:33:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 08:33:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:33:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:33:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:56 compute-2 ceph-mon[77138]: osdmap e395: 3 total, 3 up, 3 in
Nov 29 08:33:56 compute-2 nova_compute[232428]: 2025-11-29 08:33:56.948 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.270 232432 INFO nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Snapshot image upload complete
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.271 232432 DEBUG nova.compute.manager [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.320 232432 INFO nova.compute.manager [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Shelve offloading
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.326 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance destroyed successfully.
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.327 232432 DEBUG nova.compute.manager [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.331 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.331 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.331 232432 DEBUG nova.network.neutron [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:33:57 compute-2 nova_compute[232428]: 2025-11-29 08:33:57.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:33:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:33:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:33:57 compute-2 ceph-mon[77138]: pgmap v2929: 305 pgs: 305 active+clean; 450 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.8 MiB/s wr, 145 op/s
Nov 29 08:33:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:33:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:33:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 08:33:59 compute-2 nova_compute[232428]: 2025-11-29 08:33:59.169 232432 DEBUG nova.network.neutron [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:33:59 compute-2 nova_compute[232428]: 2025-11-29 08:33:59.460 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:33:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:33:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:33:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.005000157s ======
Nov 29 08:34:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:00.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000157s
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.733 232432 DEBUG nova.compute.manager [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.734 232432 DEBUG nova.compute.manager [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing instance network info cache due to event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.734 232432 DEBUG oslo_concurrency.lockutils [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.734 232432 DEBUG oslo_concurrency.lockutils [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.734 232432 DEBUG nova.network.neutron [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:34:00 compute-2 nova_compute[232428]: 2025-11-29 08:34:00.854 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.051 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance destroyed successfully.
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.052 232432 DEBUG nova.objects.instance [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'resources' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:01 compute-2 ceph-mon[77138]: osdmap e396: 3 total, 3 up, 3 in
Nov 29 08:34:01 compute-2 ceph-mon[77138]: pgmap v2931: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.7 MiB/s wr, 212 op/s
Nov 29 08:34:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3813860604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.078 232432 DEBUG nova.virt.libvirt.vif [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLhBaWnE1HT/mfoEXjDGlphQpqM+jzqDgGTCm5uAntITZ58l1wGQewG1RN4NYQpvce0WyCRcwFUsBq8uNucz7UAquvABOF3BuO6/PuBr//qzuFWP1XXEpnTWT0qE2Cj+A==',key_name='tempest-keypair-1730006032',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:33:57.271359',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5e39b3d9-9c5d-4d2b-801f-dfff461af72e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.079 232432 DEBUG nova.network.os_vif_util [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.079 232432 DEBUG nova.network.os_vif_util [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.080 232432 DEBUG os_vif [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.082 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.082 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3da501a9-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.090 232432 INFO os_vif [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4')
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.553 232432 INFO nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deleting instance files /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef_del
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.555 232432 INFO nova.virt.libvirt.driver [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deletion of /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef_del complete
Nov 29 08:34:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.683 232432 INFO nova.scheduler.client.report [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Deleted allocations for instance f7256761-4dda-41d4-bd20-f34c7a8478ef
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.778 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.779 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.812 232432 DEBUG oslo_concurrency.processutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:01 compute-2 nova_compute[232428]: 2025-11-29 08:34:01.950 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:02.107 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1085468899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:02 compute-2 ceph-mon[77138]: pgmap v2932: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 8.1 MiB/s wr, 213 op/s
Nov 29 08:34:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2185830496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:34:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2266877805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.257 232432 DEBUG oslo_concurrency.processutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.264 232432 DEBUG nova.compute.provider_tree [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.278 232432 DEBUG nova.network.neutron [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updated VIF entry in instance network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.278 232432 DEBUG nova.network.neutron [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3da501a9-b4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.287 232432 DEBUG nova.scheduler.client.report [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.313 232432 DEBUG oslo_concurrency.lockutils [req-b5641075-0143-4d74-83d5-0d2f5e00d694 req-88d87990-071f-4a31-bffd-bb42df8efe91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.331 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:02 compute-2 nova_compute[232428]: 2025-11-29 08:34:02.429 232432 DEBUG oslo_concurrency.lockutils [None req-3459bd40-b94b-4d5e-9c37-c6241ab4fb08 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:02.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:34:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1639367061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2266877805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1639367061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:03.337 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:03.338 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:03.338 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:03 compute-2 sudo[309801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:03 compute-2 sudo[309801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:03 compute-2 sudo[309801]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:03 compute-2 sudo[309826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:03 compute-2 sudo[309826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:03 compute-2 sudo[309826]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:04 compute-2 ceph-mon[77138]: pgmap v2933: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.3 MiB/s wr, 164 op/s
Nov 29 08:34:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 08:34:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:04.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:05 compute-2 nova_compute[232428]: 2025-11-29 08:34:05.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:05 compute-2 ceph-mon[77138]: osdmap e397: 3 total, 3 up, 3 in
Nov 29 08:34:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3284355851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2893780491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.064 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405231.0629008, f7256761-4dda-41d4-bd20-f34c7a8478ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.065 232432 INFO nova.compute.manager [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Stopped (Lifecycle Event)
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.093 232432 DEBUG nova.compute.manager [None req-e4e91426-9799-4619-87f0-e20c353c2089 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.223 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.251 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.251 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.251 232432 INFO nova.compute.manager [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Unshelving
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.348 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.348 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.358 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_requests' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.377 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.409 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.409 232432 INFO nova.compute.claims [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:34:06 compute-2 ceph-mon[77138]: pgmap v2935: 305 pgs: 305 active+clean; 459 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.502 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:06 compute-2 nova_compute[232428]: 2025-11-29 08:34:06.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:34:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/851649997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:07 compute-2 nova_compute[232428]: 2025-11-29 08:34:07.025 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:07 compute-2 nova_compute[232428]: 2025-11-29 08:34:07.031 232432 DEBUG nova.compute.provider_tree [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:34:07 compute-2 nova_compute[232428]: 2025-11-29 08:34:07.046 232432 DEBUG nova.scheduler.client.report [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:34:07 compute-2 nova_compute[232428]: 2025-11-29 08:34:07.067 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/851649997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:07 compute-2 nova_compute[232428]: 2025-11-29 08:34:07.449 232432 INFO nova.network.neutron [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating port 3da501a9-b467-445e-8d0c-b03956d7a1b2 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 08:34:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:07 compute-2 podman[309875]: 2025-11-29 08:34:07.694296274 +0000 UTC m=+0.094228183 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.049 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.049 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.050 232432 DEBUG nova.network.neutron [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.162 232432 DEBUG nova.compute.manager [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.163 232432 DEBUG nova.compute.manager [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing instance network info cache due to event network-changed-3da501a9-b467-445e-8d0c-b03956d7a1b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:34:08 compute-2 nova_compute[232428]: 2025-11-29 08:34:08.163 232432 DEBUG oslo_concurrency.lockutils [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:34:08 compute-2 ceph-mon[77138]: pgmap v2936: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 KiB/s wr, 179 op/s
Nov 29 08:34:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:09 compute-2 nova_compute[232428]: 2025-11-29 08:34:09.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:10 compute-2 nova_compute[232428]: 2025-11-29 08:34:10.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:10 compute-2 ceph-mon[77138]: pgmap v2937: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 08:34:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:10.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.091 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.657 232432 DEBUG nova.network.neutron [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.681 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.684 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.685 232432 INFO nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating image(s)
Nov 29 08:34:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.713 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.717 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.719 232432 DEBUG oslo_concurrency.lockutils [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.720 232432 DEBUG nova.network.neutron [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Refreshing network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.769 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.802 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.808 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "cd3bbb89b30a7e9b23f16790d258a62626402410" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.809 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "cd3bbb89b30a7e9b23f16790d258a62626402410" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:11 compute-2 nova_compute[232428]: 2025-11-29 08:34:11.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.291 232432 DEBUG nova.virt.libvirt.imagebackend [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.373 232432 DEBUG nova.virt.libvirt.imagebackend [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.375 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] cloning images/5e39b3d9-9c5d-4d2b-801f-dfff461af72e@snap to None/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:34:12 compute-2 ceph-mon[77138]: pgmap v2938: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.508 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "cd3bbb89b30a7e9b23f16790d258a62626402410" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:12.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.671 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'migration_context' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:12 compute-2 nova_compute[232428]: 2025-11-29 08:34:12.750 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] flattening vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.135 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Image rbd:vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.136 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.136 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Ensure instance console log exists: /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.137 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.137 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.138 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.140 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start _get_guest_xml network_info=[{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:33:45Z,direct_url=<?>,disk_format='raw',id=5e39b3d9-9c5d-4d2b-801f-dfff461af72e,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-206995635-shelved',owner='37972b49ddde4c519c6523d2ea1569b5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:33:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.145 232432 WARNING nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.150 232432 DEBUG nova.virt.libvirt.host [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.151 232432 DEBUG nova.virt.libvirt.host [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.154 232432 DEBUG nova.virt.libvirt.host [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.154 232432 DEBUG nova.virt.libvirt.host [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.156 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.156 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:33:45Z,direct_url=<?>,disk_format='raw',id=5e39b3d9-9c5d-4d2b-801f-dfff461af72e,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-206995635-shelved',owner='37972b49ddde4c519c6523d2ea1569b5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:33:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.156 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.157 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.157 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.157 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.158 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.158 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.158 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.158 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.159 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.159 232432 DEBUG nova.virt.hardware [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.159 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.193 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:13 compute-2 ceph-mon[77138]: pgmap v2939: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 08:34:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:34:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1100919983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:13.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.706 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.732 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:13 compute-2 nova_compute[232428]: 2025-11-29 08:34:13.736 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:34:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2690506026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.181 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.184 232432 DEBUG nova.virt.libvirt.vif [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='5e39b3d9-9c5d-4d2b-801f-dfff461af72e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1730006032',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:33:57.271359',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5e39b3d9-9c5d-4d2b-801f-dfff461af72e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.185 232432 DEBUG nova.network.os_vif_util [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.186 232432 DEBUG nova.network.os_vif_util [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.189 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.204 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <uuid>f7256761-4dda-41d4-bd20-f34c7a8478ef</uuid>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <name>instance-000000ae</name>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-206995635</nova:name>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:34:13</nova:creationTime>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:user uuid="e6de0587a3794e30acefc687f435d388">tempest-AttachVolumeShelveTestJSON-1751768432-project-member</nova:user>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:project uuid="37972b49ddde4c519c6523d2ea1569b5">tempest-AttachVolumeShelveTestJSON-1751768432</nova:project>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="5e39b3d9-9c5d-4d2b-801f-dfff461af72e"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <nova:port uuid="3da501a9-b467-445e-8d0c-b03956d7a1b2">
Nov 29 08:34:14 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <system>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="serial">f7256761-4dda-41d4-bd20-f34c7a8478ef</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="uuid">f7256761-4dda-41d4-bd20-f34c7a8478ef</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </system>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <os>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </os>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <features>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </features>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk">
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config">
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:34:14 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:61:8b:ed"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <target dev="tap3da501a9-b4"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/console.log" append="off"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <video>
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </video>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <input type="keyboard" bus="usb"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:34:14 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:34:14 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:34:14 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:34:14 compute-2 nova_compute[232428]: </domain>
Nov 29 08:34:14 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.207 232432 DEBUG nova.compute.manager [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Preparing to wait for external event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.207 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.207 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.208 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.208 232432 DEBUG nova.virt.libvirt.vif [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='5e39b3d9-9c5d-4d2b-801f-dfff461af72e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1730006032',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:33:57.271359',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5e39b3d9-9c5d-4d2b-801f-dfff461af72e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.209 232432 DEBUG nova.network.os_vif_util [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.209 232432 DEBUG nova.network.os_vif_util [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.209 232432 DEBUG os_vif [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.211 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.211 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.214 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.215 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3da501a9-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.215 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3da501a9-b4, col_values=(('external_ids', {'iface-id': '3da501a9-b467-445e-8d0c-b03956d7a1b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:8b:ed', 'vm-uuid': 'f7256761-4dda-41d4-bd20-f34c7a8478ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.217 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:14 compute-2 NetworkManager[48993]: <info>  [1764405254.2179] manager: (tap3da501a9-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.223 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.223 232432 INFO os_vif [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4')
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.289 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.289 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.290 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No VIF found with MAC fa:16:3e:61:8b:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.290 232432 INFO nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Using config drive
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.316 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.383 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 08:34:14 compute-2 nova_compute[232428]: 2025-11-29 08:34:14.450 232432 DEBUG nova.objects.instance [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'keypairs' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1100919983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2690506026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:34:14 compute-2 ceph-mon[77138]: osdmap e398: 3 total, 3 up, 3 in
Nov 29 08:34:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:14.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.112 232432 DEBUG nova.network.neutron [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updated VIF entry in instance network info cache for port 3da501a9-b467-445e-8d0c-b03956d7a1b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.112 232432 DEBUG nova.network.neutron [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [{"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.116 232432 INFO nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Creating config drive at /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.123 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_eue8eo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.182 232432 DEBUG oslo_concurrency.lockutils [req-e23a9297-74f2-4f93-a2c4-e124853fb175 req-3b1d0deb-9b32-4df0-a481-0f906c039791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f7256761-4dda-41d4-bd20-f34c7a8478ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.299 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_eue8eo" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.342 232432 DEBUG nova.storage.rbd_utils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.345 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.528 232432 DEBUG oslo_concurrency.processutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config f7256761-4dda-41d4-bd20-f34c7a8478ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.529 232432 INFO nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deleting local config drive /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef/disk.config because it was imported into RBD.
Nov 29 08:34:15 compute-2 ceph-mon[77138]: pgmap v2941: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.0 MiB/s wr, 211 op/s
Nov 29 08:34:15 compute-2 kernel: tap3da501a9-b4: entered promiscuous mode
Nov 29 08:34:15 compute-2 NetworkManager[48993]: <info>  [1764405255.6099] manager: (tap3da501a9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:15 compute-2 ovn_controller[134375]: 2025-11-29T08:34:15Z|00822|binding|INFO|Claiming lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 for this chassis.
Nov 29 08:34:15 compute-2 ovn_controller[134375]: 2025-11-29T08:34:15Z|00823|binding|INFO|3da501a9-b467-445e-8d0c-b03956d7a1b2: Claiming fa:16:3e:61:8b:ed 10.100.0.12
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.620 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:ed 10.100.0.12'], port_security=['fa:16:3e:61:8b:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f7256761-4dda-41d4-bd20-f34c7a8478ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '26306486-d603-420f-a001-1b03f9962e31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3da501a9-b467-445e-8d0c-b03956d7a1b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.621 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3da501a9-b467-445e-8d0c-b03956d7a1b2 in datapath 4c541784-a3aa-4c55-a753-a31504941937 bound to our chassis
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.622 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:34:15 compute-2 ovn_controller[134375]: 2025-11-29T08:34:15Z|00824|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 ovn-installed in OVS
Nov 29 08:34:15 compute-2 ovn_controller[134375]: 2025-11-29T08:34:15Z|00825|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 up in Southbound
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:15 compute-2 nova_compute[232428]: 2025-11-29 08:34:15.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.637 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a2a06e-3456-4801-b0a8-98696468eef6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.638 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c541784-a1 in ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.640 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c541784-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.641 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e02653-87fd-420a-a6ce-9455f15b47d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.641 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfae8fb-003e-47c6-9be8-43a5c6db1533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 systemd-udevd[310254]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.654 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[25112a24-f6c6-4e0d-9d46-b05ef747b0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 systemd-machined[194747]: New machine qemu-86-instance-000000ae.
Nov 29 08:34:15 compute-2 NetworkManager[48993]: <info>  [1764405255.6640] device (tap3da501a9-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:34:15 compute-2 NetworkManager[48993]: <info>  [1764405255.6656] device (tap3da501a9-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.678 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d6164354-9df2-4b4f-a479-dd33467c94d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 systemd[1]: Started Virtual Machine qemu-86-instance-000000ae.
Nov 29 08:34:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:15.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.710 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a066b8c9-0c5e-4ea0-a5f4-e1bc73c4162a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.717 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e70f45-c22d-484d-8fc6-0bc66c3f56aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 systemd-udevd[310259]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:34:15 compute-2 NetworkManager[48993]: <info>  [1764405255.7195] manager: (tap4c541784-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.759 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[454c8da6-dd80-4dcd-8536-cebc318f1066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.762 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1a88a7fb-6bb4-4050-a0ff-773992a90be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 NetworkManager[48993]: <info>  [1764405255.7955] device (tap4c541784-a0): carrier: link connected
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.804 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b544e7d3-2917-46b1-976a-e9be2c16688e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.835 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b8d6dd-f7d3-4727-bafa-2a335ffaebf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822206, 'reachable_time': 42979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310288, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.862 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8aeb47-1fd6-43a9-b960-dac4baa5f701]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:9545'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822206, 'tstamp': 822206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310289, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.894 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[58f4dcfa-63f6-45c9-a32a-d9ba2bbc2e65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822206, 'reachable_time': 42979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310290, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:15.938 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b83c726f-4019-49e3-9881-b737e2e6c343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.044 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[29b35b0f-6473-4f9a-aca4-6913f10111b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.046 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.047 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.047 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c541784-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:16 compute-2 NetworkManager[48993]: <info>  [1764405256.0512] manager: (tap4c541784-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 08:34:16 compute-2 kernel: tap4c541784-a0: entered promiscuous mode
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.058 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c541784-a0, col_values=(('external_ids', {'iface-id': '7f1f6d69-4406-4e27-a503-d839c5cccd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.060 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:16 compute-2 ovn_controller[134375]: 2025-11-29T08:34:16Z|00826|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.095 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.095 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.096 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[982d873a-fe3f-4d6f-935f-f205e344a71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.097 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:34:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:16.098 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'env', 'PROCESS_TAG=haproxy-4c541784-a3aa-4c55-a753-a31504941937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c541784-a3aa-4c55-a753-a31504941937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.158 232432 DEBUG nova.compute.manager [req-af6d1958-0682-4d7f-b5a4-13476b4c351e req-36dda08f-119f-477e-8bf7-fa15e89855dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.159 232432 DEBUG oslo_concurrency.lockutils [req-af6d1958-0682-4d7f-b5a4-13476b4c351e req-36dda08f-119f-477e-8bf7-fa15e89855dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.159 232432 DEBUG oslo_concurrency.lockutils [req-af6d1958-0682-4d7f-b5a4-13476b4c351e req-36dda08f-119f-477e-8bf7-fa15e89855dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.160 232432 DEBUG oslo_concurrency.lockutils [req-af6d1958-0682-4d7f-b5a4-13476b4c351e req-36dda08f-119f-477e-8bf7-fa15e89855dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.160 232432 DEBUG nova.compute.manager [req-af6d1958-0682-4d7f-b5a4-13476b4c351e req-36dda08f-119f-477e-8bf7-fa15e89855dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Processing event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.394 232432 DEBUG nova.compute.manager [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.396 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405256.3947651, f7256761-4dda-41d4-bd20-f34c7a8478ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.396 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Started (Lifecycle Event)
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.401 232432 DEBUG nova.virt.libvirt.driver [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.405 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance spawned successfully.
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.443 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.447 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.486 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.487 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405256.394999, f7256761-4dda-41d4-bd20-f34c7a8478ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.487 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Paused (Lifecycle Event)
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.504 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.508 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405256.4005592, f7256761-4dda-41d4-bd20-f34c7a8478ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.508 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Resumed (Lifecycle Event)
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.528 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.533 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:34:16 compute-2 podman[310364]: 2025-11-29 08:34:16.53427388 +0000 UTC m=+0.053269559 container create 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:34:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3142441442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.558 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:34:16 compute-2 systemd[1]: Started libpod-conmon-2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3.scope.
Nov 29 08:34:16 compute-2 podman[310364]: 2025-11-29 08:34:16.507182296 +0000 UTC m=+0.026178005 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:34:16 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:34:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29836dce3fc1d84823f051b07040b358623d47b22861ce76c02cf798492fa1cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:34:16 compute-2 podman[310364]: 2025-11-29 08:34:16.637063118 +0000 UTC m=+0.156058817 container init 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:34:16 compute-2 podman[310364]: 2025-11-29 08:34:16.643431337 +0000 UTC m=+0.162427016 container start 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:34:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:16 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [NOTICE]   (310383) : New worker (310385) forked
Nov 29 08:34:16 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [NOTICE]   (310383) : Loading success.
Nov 29 08:34:16 compute-2 nova_compute[232428]: 2025-11-29 08:34:16.955 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:17 compute-2 nova_compute[232428]: 2025-11-29 08:34:17.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:17 compute-2 ceph-mon[77138]: pgmap v2942: 305 pgs: 305 active+clean; 479 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.7 MiB/s wr, 208 op/s
Nov 29 08:34:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 08:34:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.001 232432 DEBUG nova.compute.manager [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.061 232432 DEBUG oslo_concurrency.lockutils [None req-da2d8495-6c6e-45a9-a956-98e2d284d8fa e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.259 232432 DEBUG nova.compute.manager [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.262 232432 DEBUG oslo_concurrency.lockutils [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.263 232432 DEBUG oslo_concurrency.lockutils [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.263 232432 DEBUG oslo_concurrency.lockutils [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.264 232432 DEBUG nova.compute.manager [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:34:18 compute-2 nova_compute[232428]: 2025-11-29 08:34:18.265 232432 WARNING nova.compute.manager [req-4eaf579a-5f36-44ee-b571-cadf261e48ba req-469de357-73b3-4284-8841-cf8d2ccf4d0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received unexpected event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with vm_state active and task_state None.
Nov 29 08:34:18 compute-2 ceph-mon[77138]: osdmap e399: 3 total, 3 up, 3 in
Nov 29 08:34:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1374812936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:18.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:18 compute-2 podman[310395]: 2025-11-29 08:34:18.66078184 +0000 UTC m=+0.061936799 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.218 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.233 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.235 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.235 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:19 compute-2 ceph-mon[77138]: pgmap v2944: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 451 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 5.9 MiB/s wr, 318 op/s
Nov 29 08:34:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:19.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:34:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/221701087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.724 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.851 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:34:19 compute-2 nova_compute[232428]: 2025-11-29 08:34:19.853 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.041 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.043 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3961MB free_disk=20.896869659423828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.043 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.043 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.519 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance f7256761-4dda-41d4-bd20-f34c7a8478ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.519 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.520 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.573 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:34:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:20.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/221701087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:20 compute-2 nova_compute[232428]: 2025-11-29 08:34:20.999 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.000 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.030 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.092 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.147 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:34:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/928632538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.606 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.614 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:34:21 compute-2 ceph-mon[77138]: pgmap v2945: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 433 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.3 MiB/s wr, 357 op/s
Nov 29 08:34:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/293133485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/928632538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.666 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.695 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.696 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:21.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:21 compute-2 nova_compute[232428]: 2025-11-29 08:34:21.958 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/518622181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:22.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:23 compute-2 ceph-mon[77138]: pgmap v2946: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 433 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Nov 29 08:34:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:23 compute-2 sudo[310460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:23 compute-2 sudo[310460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:23 compute-2 sudo[310460]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:23 compute-2 sudo[310485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:24 compute-2 sudo[310485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:24 compute-2 sudo[310485]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:24 compute-2 nova_compute[232428]: 2025-11-29 08:34:24.221 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:24.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:25.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:26 compute-2 ceph-mon[77138]: pgmap v2947: 305 pgs: 305 active+clean; 431 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 294 op/s
Nov 29 08:34:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:26.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:26 compute-2 podman[310511]: 2025-11-29 08:34:26.6757861 +0000 UTC m=+0.066537511 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:34:26 compute-2 nova_compute[232428]: 2025-11-29 08:34:26.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1350265111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:28 compute-2 ceph-mon[77138]: pgmap v2948: 305 pgs: 305 active+clean; 406 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 262 op/s
Nov 29 08:34:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:34:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/543414883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:34:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:34:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/543414883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:34:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:28.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:29 compute-2 nova_compute[232428]: 2025-11-29 08:34:29.225 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/543414883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:34:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/543414883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:34:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 08:34:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:29.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:30.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:30 compute-2 ceph-mon[77138]: pgmap v2949: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Nov 29 08:34:30 compute-2 ceph-mon[77138]: osdmap e400: 3 total, 3 up, 3 in
Nov 29 08:34:31 compute-2 ovn_controller[134375]: 2025-11-29T08:34:31Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:8b:ed 10.100.0.12
Nov 29 08:34:31 compute-2 ceph-mon[77138]: pgmap v2951: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 1.5 MiB/s wr, 128 op/s
Nov 29 08:34:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:31 compute-2 nova_compute[232428]: 2025-11-29 08:34:31.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:32.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:34 compute-2 nova_compute[232428]: 2025-11-29 08:34:34.227 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:34 compute-2 ceph-mon[77138]: pgmap v2952: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 1.5 MiB/s wr, 128 op/s
Nov 29 08:34:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:34.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:35 compute-2 sudo[310535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:35 compute-2 sudo[310535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:35 compute-2 sudo[310535]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:35 compute-2 sudo[310560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:34:35 compute-2 sudo[310560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:35 compute-2 sudo[310560]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:35 compute-2 sudo[310585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:35 compute-2 sudo[310585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:35 compute-2 sudo[310585]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:35 compute-2 sudo[310610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:34:35 compute-2 sudo[310610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:35 compute-2 podman[310705]: 2025-11-29 08:34:35.94489287 +0000 UTC m=+0.070039101 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:34:36 compute-2 podman[310705]: 2025-11-29 08:34:36.083895117 +0000 UTC m=+0.209041338 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:34:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:36.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:36 compute-2 podman[310858]: 2025-11-29 08:34:36.841017399 +0000 UTC m=+0.068309366 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:34:36 compute-2 podman[310858]: 2025-11-29 08:34:36.857165182 +0000 UTC m=+0.084457109 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:34:36 compute-2 nova_compute[232428]: 2025-11-29 08:34:36.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:37 compute-2 podman[310923]: 2025-11-29 08:34:37.512837757 +0000 UTC m=+0.481722333 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 08:34:37 compute-2 ceph-mon[77138]: pgmap v2953: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 29 KiB/s wr, 80 op/s
Nov 29 08:34:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:37 compute-2 podman[310944]: 2025-11-29 08:34:37.714637648 +0000 UTC m=+0.146064397 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Nov 29 08:34:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:34:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:34:37 compute-2 podman[310923]: 2025-11-29 08:34:37.948109924 +0000 UTC m=+0.916994490 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, distribution-scope=public, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 08:34:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:34:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:38.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:34:38 compute-2 sudo[310610]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:38 compute-2 podman[310958]: 2025-11-29 08:34:38.823107825 +0000 UTC m=+0.827688099 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:34:38 compute-2 ceph-mon[77138]: pgmap v2954: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 29 KiB/s wr, 65 op/s
Nov 29 08:34:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:39 compute-2 nova_compute[232428]: 2025-11-29 08:34:39.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:39 compute-2 sudo[310984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:39 compute-2 sudo[310984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:39 compute-2 sudo[310984]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:39 compute-2 sudo[311009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:34:39 compute-2 sudo[311009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:39 compute-2 sudo[311009]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.425861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279426021, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 262, "total_data_size": 4524385, "memory_usage": 4583888, "flush_reason": "Manual Compaction"}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279486150, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 2950223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61946, "largest_seqno": 63999, "table_properties": {"data_size": 2941679, "index_size": 5230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17721, "raw_average_key_size": 19, "raw_value_size": 2924274, "raw_average_value_size": 3274, "num_data_blocks": 226, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405134, "oldest_key_time": 1764405134, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 60364 microseconds, and 15699 cpu microseconds.
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.486212) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 2950223 bytes OK
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.486250) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.506876) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.506932) EVENT_LOG_v1 {"time_micros": 1764405279506921, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.506960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 4515057, prev total WAL file size 4515057, number of live WAL files 2.
Nov 29 08:34:39 compute-2 sudo[311034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.509429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(2881KB)], [120(12MB)]
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279509548, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15931158, "oldest_snapshot_seqno": -1}
Nov 29 08:34:39 compute-2 sudo[311034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:39 compute-2 sudo[311034]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:39 compute-2 sudo[311059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:34:39 compute-2 sudo[311059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9493 keys, 14787664 bytes, temperature: kUnknown
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279741715, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 14787664, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14722395, "index_size": 40453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 248163, "raw_average_key_size": 26, "raw_value_size": 14551735, "raw_average_value_size": 1532, "num_data_blocks": 1565, "num_entries": 9493, "num_filter_entries": 9493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.742508) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14787664 bytes
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.750601) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.5 rd, 63.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.4) write-amplify(5.0) OK, records in: 10035, records dropped: 542 output_compression: NoCompression
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.750675) EVENT_LOG_v1 {"time_micros": 1764405279750631, "job": 76, "event": "compaction_finished", "compaction_time_micros": 232552, "compaction_time_cpu_micros": 37908, "output_level": 6, "num_output_files": 1, "total_output_size": 14787664, "num_input_records": 10035, "num_output_records": 9493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279753244, "job": 76, "event": "table_file_deletion", "file_number": 122}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279758481, "job": 76, "event": "table_file_deletion", "file_number": 120}
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.508964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.758685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.758694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.758697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.758699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:34:39.758703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:34:40 compute-2 ovn_controller[134375]: 2025-11-29T08:34:40Z|00827|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:34:40 compute-2 nova_compute[232428]: 2025-11-29 08:34:40.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:40 compute-2 sudo[311059]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:40 compute-2 ceph-mon[77138]: pgmap v2955: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 644 KiB/s rd, 40 KiB/s wr, 63 op/s
Nov 29 08:34:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:34:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:34:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:40.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:34:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:34:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:34:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:41 compute-2 nova_compute[232428]: 2025-11-29 08:34:41.969 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:42 compute-2 ceph-mon[77138]: pgmap v2956: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 36 KiB/s wr, 61 op/s
Nov 29 08:34:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3477834669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:34:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:42.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:34:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:43 compute-2 ceph-mon[77138]: pgmap v2957: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 22 KiB/s wr, 46 op/s
Nov 29 08:34:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:44 compute-2 sudo[311118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:44 compute-2 sudo[311118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:44 compute-2 sudo[311118]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:44 compute-2 nova_compute[232428]: 2025-11-29 08:34:44.234 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:44 compute-2 sudo[311143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:44 compute-2 sudo[311143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:44 compute-2 sudo[311143]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 08:34:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:34:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641273606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:34:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641273606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 ceph-mon[77138]: osdmap e401: 3 total, 3 up, 3 in
Nov 29 08:34:45 compute-2 ceph-mon[77138]: pgmap v2959: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 16 op/s
Nov 29 08:34:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2641273606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2641273606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/292646434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/292646434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:34:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 08:34:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:46.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:46 compute-2 ceph-mon[77138]: osdmap e402: 3 total, 3 up, 3 in
Nov 29 08:34:46 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:46 compute-2 nova_compute[232428]: 2025-11-29 08:34:46.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:47 compute-2 sudo[311169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:34:47 compute-2 sudo[311169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:47 compute-2 sudo[311169]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:47 compute-2 sudo[311194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:34:47 compute-2 sudo[311194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:34:47 compute-2 sudo[311194]: pam_unix(sudo:session): session closed for user root
Nov 29 08:34:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:34:48 compute-2 ceph-mon[77138]: pgmap v2961: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 4.9 KiB/s wr, 18 op/s
Nov 29 08:34:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:48.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:49 compute-2 nova_compute[232428]: 2025-11-29 08:34:49.237 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 08:34:49 compute-2 podman[311220]: 2025-11-29 08:34:49.680422234 +0000 UTC m=+0.082397425 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 29 08:34:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:49.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:50 compute-2 ceph-mon[77138]: pgmap v2962: 305 pgs: 305 active+clean; 232 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 8.0 KiB/s wr, 77 op/s
Nov 29 08:34:50 compute-2 ceph-mon[77138]: osdmap e403: 3 total, 3 up, 3 in
Nov 29 08:34:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:50.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:51 compute-2 ceph-mon[77138]: pgmap v2964: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 14 KiB/s wr, 124 op/s
Nov 29 08:34:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:51.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:51 compute-2 nova_compute[232428]: 2025-11-29 08:34:51.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 ovn_controller[134375]: 2025-11-29T08:34:52Z|00828|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.172 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.489 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.490 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.490 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.491 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.491 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.493 232432 INFO nova.compute.manager [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Terminating instance
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.494 232432 DEBUG nova.compute.manager [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:34:52 compute-2 kernel: tap3da501a9-b4 (unregistering): left promiscuous mode
Nov 29 08:34:52 compute-2 NetworkManager[48993]: <info>  [1764405292.5560] device (tap3da501a9-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:34:52 compute-2 ovn_controller[134375]: 2025-11-29T08:34:52Z|00829|binding|INFO|Releasing lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 from this chassis (sb_readonly=0)
Nov 29 08:34:52 compute-2 ovn_controller[134375]: 2025-11-29T08:34:52Z|00830|binding|INFO|Setting lport 3da501a9-b467-445e-8d0c-b03956d7a1b2 down in Southbound
Nov 29 08:34:52 compute-2 ovn_controller[134375]: 2025-11-29T08:34:52Z|00831|binding|INFO|Removing iface tap3da501a9-b4 ovn-installed in OVS
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.571 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:ed 10.100.0.12'], port_security=['fa:16:3e:61:8b:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f7256761-4dda-41d4-bd20-f34c7a8478ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '26306486-d603-420f-a001-1b03f9962e31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=3da501a9-b467-445e-8d0c-b03956d7a1b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.573 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 3da501a9-b467-445e-8d0c-b03956d7a1b2 in datapath 4c541784-a3aa-4c55-a753-a31504941937 unbound from our chassis
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.574 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c541784-a3aa-4c55-a753-a31504941937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.575 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3859d22d-d9fc-4ebb-8aa7-f0fa855b67f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.576 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace which is not needed anymore
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Nov 29 08:34:52 compute-2 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ae.scope: Consumed 15.935s CPU time.
Nov 29 08:34:52 compute-2 systemd-machined[194747]: Machine qemu-86-instance-000000ae terminated.
Nov 29 08:34:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:52.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:52 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [NOTICE]   (310383) : haproxy version is 2.8.14-c23fe91
Nov 29 08:34:52 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [NOTICE]   (310383) : path to executable is /usr/sbin/haproxy
Nov 29 08:34:52 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [ALERT]    (310383) : Current worker (310385) exited with code 143 (Terminated)
Nov 29 08:34:52 compute-2 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[310379]: [WARNING]  (310383) : All workers exited. Exiting... (0)
Nov 29 08:34:52 compute-2 systemd[1]: libpod-2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3.scope: Deactivated successfully.
Nov 29 08:34:52 compute-2 podman[311265]: 2025-11-29 08:34:52.720524187 +0000 UTC m=+0.044824056 container died 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.737 232432 INFO nova.virt.libvirt.driver [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Instance destroyed successfully.
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.738 232432 DEBUG nova.objects.instance [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'resources' on Instance uuid f7256761-4dda-41d4-bd20-f34c7a8478ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:34:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:34:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-29836dce3fc1d84823f051b07040b358623d47b22861ce76c02cf798492fa1cf-merged.mount: Deactivated successfully.
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.757 232432 DEBUG nova.virt.libvirt.vif [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-206995635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-206995635',id=174,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLhBaWnE1HT/mfoEXjDGlphQpqM+jzqDgGTCm5uAntITZ58l1wGQewG1RN4NYQpvce0WyCRcwFUsBq8uNucz7UAquvABOF3BuO6/PuBr//qzuFWP1XXEpnTWT0qE2Cj+A==',key_name='tempest-keypair-1730006032',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-i6bzklll',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=f7256761-4dda-41d4-bd20-f34c7a8478ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.758 232432 DEBUG nova.network.os_vif_util [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "address": "fa:16:3e:61:8b:ed", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3da501a9-b4", "ovs_interfaceid": "3da501a9-b467-445e-8d0c-b03956d7a1b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.759 232432 DEBUG nova.network.os_vif_util [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.760 232432 DEBUG os_vif [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.763 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.764 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3da501a9-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.770 232432 INFO os_vif [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:ed,bridge_name='br-int',has_traffic_filtering=True,id=3da501a9-b467-445e-8d0c-b03956d7a1b2,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3da501a9-b4')
Nov 29 08:34:52 compute-2 podman[311265]: 2025-11-29 08:34:52.771151063 +0000 UTC m=+0.095450922 container cleanup 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:34:52 compute-2 systemd[1]: libpod-conmon-2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3.scope: Deactivated successfully.
Nov 29 08:34:52 compute-2 podman[311314]: 2025-11-29 08:34:52.843519435 +0000 UTC m=+0.048449638 container remove 2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.849 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1ea33b-e4ef-4b81-8871-9b0406f740db]: (4, ('Sat Nov 29 08:34:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3)\n2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3\nSat Nov 29 08:34:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3)\n2e892300ef47f9b8e431e0550ba5b4f2350ec64ebf051f58df08dec2c03dcaf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.851 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ceb63e-b869-48a7-b4a3-241177785b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.852 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 kernel: tap4c541784-a0: left promiscuous mode
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.867 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.872 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[127e9360-3ca8-47f0-8f8f-530b27d3a49e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[acfe0406-4636-4a73-96e6-64f1ae1fb4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.892 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a30118b9-9a9b-4ddc-bf31-b955f8d05138]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.913 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8592d775-11e3-4b63-a32b-66c9efde1c8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822197, 'reachable_time': 40511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311340, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 systemd[1]: run-netns-ovnmeta\x2d4c541784\x2da3aa\x2d4c55\x2da753\x2da31504941937.mount: Deactivated successfully.
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.917 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:34:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:52.918 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[540dba36-52f9-49b3-a1a9-13f040efaf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.936 232432 DEBUG nova.compute.manager [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.937 232432 DEBUG oslo_concurrency.lockutils [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.937 232432 DEBUG oslo_concurrency.lockutils [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.937 232432 DEBUG oslo_concurrency.lockutils [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.937 232432 DEBUG nova.compute.manager [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:34:52 compute-2 nova_compute[232428]: 2025-11-29 08:34:52.938 232432 DEBUG nova.compute.manager [req-50603cc2-efe8-4fa7-9d97-b2cca7447c92 req-7168cc5f-1b04-45bd-92ec-c173ca2474ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-unplugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.170 232432 INFO nova.virt.libvirt.driver [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deleting instance files /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef_del
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.172 232432 INFO nova.virt.libvirt.driver [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deletion of /var/lib/nova/instances/f7256761-4dda-41d4-bd20-f34c7a8478ef_del complete
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.277 232432 INFO nova.compute.manager [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.277 232432 DEBUG oslo.service.loopingcall [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.278 232432 DEBUG nova.compute.manager [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:34:53 compute-2 nova_compute[232428]: 2025-11-29 08:34:53.278 232432 DEBUG nova.network.neutron [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:34:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:53.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:54 compute-2 ceph-mon[77138]: pgmap v2965: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 10 KiB/s wr, 95 op/s
Nov 29 08:34:54 compute-2 sshd-session[311343]: Invalid user solv from 45.148.10.240 port 57806
Nov 29 08:34:54 compute-2 sshd-session[311343]: Connection closed by invalid user solv 45.148.10.240 port 57806 [preauth]
Nov 29 08:34:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 08:34:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:54.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.134 232432 DEBUG nova.compute.manager [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.135 232432 DEBUG oslo_concurrency.lockutils [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.135 232432 DEBUG oslo_concurrency.lockutils [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.135 232432 DEBUG oslo_concurrency.lockutils [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.136 232432 DEBUG nova.compute.manager [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] No waiting events found dispatching network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.136 232432 WARNING nova.compute.manager [req-5e88c3d4-b654-4959-b3b7-f3c832eed525 req-f02fb9f0-ab8a-4f94-8408-2840d47c01be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received unexpected event network-vif-plugged-3da501a9-b467-445e-8d0c-b03956d7a1b2 for instance with vm_state active and task_state deleting.
Nov 29 08:34:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:55.143 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.144 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:55.145 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:34:55 compute-2 ceph-mon[77138]: osdmap e404: 3 total, 3 up, 3 in
Nov 29 08:34:55 compute-2 ceph-mon[77138]: pgmap v2967: 305 pgs: 305 active+clean; 157 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 18 KiB/s wr, 105 op/s
Nov 29 08:34:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.884 232432 DEBUG nova.network.neutron [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.941 232432 DEBUG nova.compute.manager [req-d0751dd5-cc02-47c1-83df-f2469324c93c req-804310f4-44af-4ac9-a986-7effc2d7c0c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Received event network-vif-deleted-3da501a9-b467-445e-8d0c-b03956d7a1b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.942 232432 INFO nova.compute.manager [req-d0751dd5-cc02-47c1-83df-f2469324c93c req-804310f4-44af-4ac9-a986-7effc2d7c0c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Neutron deleted interface 3da501a9-b467-445e-8d0c-b03956d7a1b2; detaching it from the instance and deleting it from the info cache
Nov 29 08:34:55 compute-2 nova_compute[232428]: 2025-11-29 08:34:55.942 232432 DEBUG nova.network.neutron [req-d0751dd5-cc02-47c1-83df-f2469324c93c req-804310f4-44af-4ac9-a986-7effc2d7c0c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.080 232432 INFO nova.compute.manager [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Took 2.80 seconds to deallocate network for instance.
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.103 232432 DEBUG nova.compute.manager [req-d0751dd5-cc02-47c1-83df-f2469324c93c req-804310f4-44af-4ac9-a986-7effc2d7c0c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Detach interface failed, port_id=3da501a9-b467-445e-8d0c-b03956d7a1b2, reason: Instance f7256761-4dda-41d4-bd20-f34c7a8478ef could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.231 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.231 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.275 232432 DEBUG oslo_concurrency.processutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:34:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:34:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:56.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:34:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:34:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/945967373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.729 232432 DEBUG oslo_concurrency.processutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.737 232432 DEBUG nova.compute.provider_tree [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.813 232432 DEBUG nova.scheduler.client.report [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:34:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/945967373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:34:56 compute-2 nova_compute[232428]: 2025-11-29 08:34:56.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:57 compute-2 nova_compute[232428]: 2025-11-29 08:34:57.025 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:57 compute-2 nova_compute[232428]: 2025-11-29 08:34:57.057 232432 INFO nova.scheduler.client.report [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Deleted allocations for instance f7256761-4dda-41d4-bd20-f34c7a8478ef
Nov 29 08:34:57 compute-2 nova_compute[232428]: 2025-11-29 08:34:57.283 232432 DEBUG oslo_concurrency.lockutils [None req-e9fe6fa2-55bf-46d0-ae4b-5a7579d2f263 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "f7256761-4dda-41d4-bd20-f34c7a8478ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:34:57 compute-2 podman[311368]: 2025-11-29 08:34:57.660119266 +0000 UTC m=+0.059888025 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 29 08:34:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:57 compute-2 nova_compute[232428]: 2025-11-29 08:34:57.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:34:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:34:58 compute-2 ceph-mon[77138]: pgmap v2968: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 15 KiB/s wr, 45 op/s
Nov 29 08:34:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:34:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:34:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:34:59.146 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:34:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:34:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:34:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:00 compute-2 ceph-mon[77138]: pgmap v2969: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 53 op/s
Nov 29 08:35:00 compute-2 nova_compute[232428]: 2025-11-29 08:35:00.619 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:00 compute-2 nova_compute[232428]: 2025-11-29 08:35:00.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:01 compute-2 ceph-mon[77138]: pgmap v2970: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 7.4 KiB/s wr, 35 op/s
Nov 29 08:35:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3260366582' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:35:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3260366582' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:35:01 compute-2 nova_compute[232428]: 2025-11-29 08:35:01.696 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:01 compute-2 nova_compute[232428]: 2025-11-29 08:35:01.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:02.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:02 compute-2 nova_compute[232428]: 2025-11-29 08:35:02.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:35:03.339 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:35:03.340 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:35:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:35:03.340 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:35:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:03 compute-2 ceph-mon[77138]: pgmap v2971: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 7.4 KiB/s wr, 35 op/s
Nov 29 08:35:04 compute-2 sudo[311394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:04 compute-2 sudo[311394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:04 compute-2 sudo[311394]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:04 compute-2 sudo[311419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:04 compute-2 sudo[311419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:04 compute-2 sudo[311419]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3087804535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:35:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3087804535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:35:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:35:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:35:06 compute-2 ceph-mon[77138]: pgmap v2972: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 08:35:06 compute-2 nova_compute[232428]: 2025-11-29 08:35:06.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:06.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:06 compute-2 nova_compute[232428]: 2025-11-29 08:35:06.982 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2388779141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.733 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405292.7322114, f7256761-4dda-41d4-bd20-f34c7a8478ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.734 232432 INFO nova.compute.manager [-] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] VM Stopped (Lifecycle Event)
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.762 232432 DEBUG nova.compute.manager [None req-97303060-94fc-4b8c-b5b7-e9ac7ac08f5a - - - - - -] [instance: f7256761-4dda-41d4-bd20-f34c7a8478ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:35:07 compute-2 nova_compute[232428]: 2025-11-29 08:35:07.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:08 compute-2 ceph-mon[77138]: pgmap v2973: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1023 B/s wr, 39 op/s
Nov 29 08:35:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.621063) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309621162, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 253, "total_data_size": 996856, "memory_usage": 1009584, "flush_reason": "Manual Compaction"}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309626661, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 503251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64004, "largest_seqno": 64642, "table_properties": {"data_size": 500227, "index_size": 932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8401, "raw_average_key_size": 21, "raw_value_size": 493784, "raw_average_value_size": 1253, "num_data_blocks": 40, "num_entries": 394, "num_filter_entries": 394, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405279, "oldest_key_time": 1764405279, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 5625 microseconds, and 2195 cpu microseconds.
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.626692) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 503251 bytes OK
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.626712) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.628142) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.628164) EVENT_LOG_v1 {"time_micros": 1764405309628158, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.628184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 993254, prev total WAL file size 993254, number of live WAL files 2.
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.628839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303134' seq:72057594037927935, type:22 .. '6D6772737461740032323636' seq:0, type:0; will stop at (end)
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(491KB)], [123(14MB)]
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309628883, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 15290915, "oldest_snapshot_seqno": -1}
Nov 29 08:35:09 compute-2 podman[311446]: 2025-11-29 08:35:09.686636862 +0000 UTC m=+0.093153180 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9375 keys, 11574658 bytes, temperature: kUnknown
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309729661, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11574658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11514476, "index_size": 35622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 246024, "raw_average_key_size": 26, "raw_value_size": 11350045, "raw_average_value_size": 1210, "num_data_blocks": 1363, "num_entries": 9375, "num_filter_entries": 9375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.730007) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11574658 bytes
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.731955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.6 rd, 114.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(53.4) write-amplify(23.0) OK, records in: 9887, records dropped: 512 output_compression: NoCompression
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.731973) EVENT_LOG_v1 {"time_micros": 1764405309731964, "job": 78, "event": "compaction_finished", "compaction_time_micros": 100887, "compaction_time_cpu_micros": 30421, "output_level": 6, "num_output_files": 1, "total_output_size": 11574658, "num_input_records": 9887, "num_output_records": 9375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309732219, "job": 78, "event": "table_file_deletion", "file_number": 125}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309735545, "job": 78, "event": "table_file_deletion", "file_number": 123}
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.628759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.735650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.735657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.735661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.735664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:09.735668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:10 compute-2 nova_compute[232428]: 2025-11-29 08:35:10.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:10 compute-2 ceph-mon[77138]: pgmap v2974: 305 pgs: 305 active+clean; 142 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 924 KiB/s wr, 51 op/s
Nov 29 08:35:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:10.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:11 compute-2 nova_compute[232428]: 2025-11-29 08:35:11.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:11 compute-2 nova_compute[232428]: 2025-11-29 08:35:11.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:11 compute-2 nova_compute[232428]: 2025-11-29 08:35:11.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:12 compute-2 ceph-mon[77138]: pgmap v2975: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 08:35:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1400392762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:35:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:12 compute-2 nova_compute[232428]: 2025-11-29 08:35:12.772 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3431350111' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:35:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:14 compute-2 ceph-mon[77138]: pgmap v2976: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 08:35:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4247046449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:16 compute-2 ceph-mon[77138]: pgmap v2977: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 08:35:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/440048892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.332296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316332372, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 332, "num_deletes": 251, "total_data_size": 175854, "memory_usage": 182424, "flush_reason": "Manual Compaction"}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316335591, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 115345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64647, "largest_seqno": 64974, "table_properties": {"data_size": 113274, "index_size": 234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5302, "raw_average_key_size": 18, "raw_value_size": 109211, "raw_average_value_size": 381, "num_data_blocks": 10, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405310, "oldest_key_time": 1764405310, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 3288 microseconds, and 1103 cpu microseconds.
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.335620) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 115345 bytes OK
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.335636) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337615) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337631) EVENT_LOG_v1 {"time_micros": 1764405316337626, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 173531, prev total WAL file size 173531, number of live WAL files 2.
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.338078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(112KB)], [126(11MB)]
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316338143, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 11690003, "oldest_snapshot_seqno": -1}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9151 keys, 9787942 bytes, temperature: kUnknown
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316440545, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 9787942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9730927, "index_size": 33030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 242072, "raw_average_key_size": 26, "raw_value_size": 9572018, "raw_average_value_size": 1046, "num_data_blocks": 1248, "num_entries": 9151, "num_filter_entries": 9151, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.440876) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9787942 bytes
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.442613) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.1 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(186.2) write-amplify(84.9) OK, records in: 9661, records dropped: 510 output_compression: NoCompression
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.442632) EVENT_LOG_v1 {"time_micros": 1764405316442623, "job": 80, "event": "compaction_finished", "compaction_time_micros": 102496, "compaction_time_cpu_micros": 43576, "output_level": 6, "num_output_files": 1, "total_output_size": 9787942, "num_input_records": 9661, "num_output_records": 9151, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316442791, "job": 80, "event": "table_file_deletion", "file_number": 128}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316445215, "job": 80, "event": "table_file_deletion", "file_number": 126}
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.445270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.445278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.445280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.445281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:35:16.445283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:35:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:16 compute-2 nova_compute[232428]: 2025-11-29 08:35:16.984 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:17 compute-2 nova_compute[232428]: 2025-11-29 08:35:17.775 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:18 compute-2 nova_compute[232428]: 2025-11-29 08:35:18.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:18 compute-2 ceph-mon[77138]: pgmap v2978: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 08:35:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:18.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.210 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.241 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.242 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:35:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:35:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4151762485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.689 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:35:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:19 compute-2 podman[311500]: 2025-11-29 08:35:19.818815112 +0000 UTC m=+0.084108619 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.886 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.888 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4213MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.888 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.889 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.981 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:35:19 compute-2 nova_compute[232428]: 2025-11-29 08:35:19.982 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.000 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:35:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:35:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4138445284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:20 compute-2 ceph-mon[77138]: pgmap v2979: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 08:35:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4151762485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.511 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.518 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.545 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.578 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:35:20 compute-2 nova_compute[232428]: 2025-11-29 08:35:20.579 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:35:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:35:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:20.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:35:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4138445284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:35:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:35:21 compute-2 nova_compute[232428]: 2025-11-29 08:35:21.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:22 compute-2 nova_compute[232428]: 2025-11-29 08:35:22.570 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:35:22 compute-2 nova_compute[232428]: 2025-11-29 08:35:22.570 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:35:22 compute-2 ceph-mon[77138]: pgmap v2980: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 904 KiB/s wr, 87 op/s
Nov 29 08:35:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:22.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:22 compute-2 nova_compute[232428]: 2025-11-29 08:35:22.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:23 compute-2 ceph-mon[77138]: pgmap v2981: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:35:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/178896073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:24 compute-2 sudo[311545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:24 compute-2 sudo[311545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:24 compute-2 sudo[311545]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:24 compute-2 sudo[311570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:24 compute-2 sudo[311570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:24 compute-2 sudo[311570]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:35:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:24.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:35:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:26 compute-2 ceph-mon[77138]: pgmap v2982: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:35:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:26.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:26 compute-2 nova_compute[232428]: 2025-11-29 08:35:26.987 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1290390513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:27 compute-2 nova_compute[232428]: 2025-11-29 08:35:27.779 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:28 compute-2 ceph-mon[77138]: pgmap v2983: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:35:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/993834800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:35:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/993834800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:35:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:28 compute-2 podman[311597]: 2025-11-29 08:35:28.646205316 +0000 UTC m=+0.056836261 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:35:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:28.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:30 compute-2 ceph-mon[77138]: pgmap v2984: 305 pgs: 305 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 510 KiB/s wr, 90 op/s
Nov 29 08:35:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:30.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:31 compute-2 nova_compute[232428]: 2025-11-29 08:35:31.990 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:32 compute-2 ceph-mon[77138]: pgmap v2985: 305 pgs: 305 active+clean; 182 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 71 op/s
Nov 29 08:35:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:32.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:32 compute-2 nova_compute[232428]: 2025-11-29 08:35:32.782 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:34.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:34 compute-2 ceph-mon[77138]: pgmap v2986: 305 pgs: 305 active+clean; 182 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 208 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Nov 29 08:35:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:35.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:36 compute-2 ceph-mon[77138]: pgmap v2987: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:35:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:36 compute-2 nova_compute[232428]: 2025-11-29 08:35:36.992 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:37 compute-2 nova_compute[232428]: 2025-11-29 08:35:37.784 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:38 compute-2 ceph-mon[77138]: pgmap v2988: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:35:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:38.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:40 compute-2 ceph-mon[77138]: pgmap v2989: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:35:40 compute-2 podman[311623]: 2025-11-29 08:35:40.740881142 +0000 UTC m=+0.134664001 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:35:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:41 compute-2 ceph-mon[77138]: pgmap v2990: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 29 08:35:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:41.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:41 compute-2 nova_compute[232428]: 2025-11-29 08:35:41.994 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:42 compute-2 nova_compute[232428]: 2025-11-29 08:35:42.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:44 compute-2 ceph-mon[77138]: pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 665 KiB/s wr, 26 op/s
Nov 29 08:35:44 compute-2 sudo[311653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:44 compute-2 sudo[311653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:44 compute-2 sudo[311653]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:44.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:44 compute-2 sudo[311678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:44 compute-2 sudo[311678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:44 compute-2 sudo[311678]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:45.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:46 compute-2 ceph-mon[77138]: pgmap v2992: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 665 KiB/s wr, 26 op/s
Nov 29 08:35:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:46 compute-2 nova_compute[232428]: 2025-11-29 08:35:46.997 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:47 compute-2 sudo[311704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:47 compute-2 sudo[311704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:47 compute-2 sudo[311704]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:47 compute-2 sudo[311729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:35:47 compute-2 sudo[311729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:47 compute-2 sudo[311729]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:47 compute-2 sudo[311754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:47 compute-2 sudo[311754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:47 compute-2 sudo[311754]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:47 compute-2 sudo[311779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:35:47 compute-2 sudo[311779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:47 compute-2 nova_compute[232428]: 2025-11-29 08:35:47.789 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:48 compute-2 sudo[311779]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:48 compute-2 ceph-mon[77138]: pgmap v2993: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 08:35:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:35:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:48.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:35:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:35:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:50 compute-2 ceph-mon[77138]: pgmap v2994: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 08:35:50 compute-2 podman[311838]: 2025-11-29 08:35:50.662236082 +0000 UTC m=+0.062312310 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 08:35:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:52 compute-2 nova_compute[232428]: 2025-11-29 08:35:52.000 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:52 compute-2 ceph-mon[77138]: pgmap v2995: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 08:35:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2946036246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:35:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:52 compute-2 nova_compute[232428]: 2025-11-29 08:35:52.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:54 compute-2 ceph-mon[77138]: pgmap v2996: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 08:35:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:54.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:55 compute-2 sudo[311860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:35:55 compute-2 sudo[311860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:55 compute-2 sudo[311860]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:55 compute-2 sudo[311885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:35:55 compute-2 sudo[311885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:35:55 compute-2 sudo[311885]: pam_unix(sudo:session): session closed for user root
Nov 29 08:35:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:35:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:35:56 compute-2 ceph-mon[77138]: pgmap v2997: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 08:35:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:35:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:35:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:57 compute-2 nova_compute[232428]: 2025-11-29 08:35:57.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:57 compute-2 nova_compute[232428]: 2025-11-29 08:35:57.792 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:58 compute-2 ceph-mon[77138]: pgmap v2998: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 08:35:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:35:58.467 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:35:58 compute-2 nova_compute[232428]: 2025-11-29 08:35:58.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:35:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:35:58.468 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:35:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:35:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:35:59 compute-2 podman[311912]: 2025-11-29 08:35:59.655825177 +0000 UTC m=+0.063730393 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:35:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:35:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:35:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:00 compute-2 ceph-mon[77138]: pgmap v2999: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 12 KiB/s wr, 2 op/s
Nov 29 08:36:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:01 compute-2 nova_compute[232428]: 2025-11-29 08:36:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:02 compute-2 nova_compute[232428]: 2025-11-29 08:36:02.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:02 compute-2 ceph-mon[77138]: pgmap v3000: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 18 KiB/s wr, 3 op/s
Nov 29 08:36:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:02.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:36:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:02 compute-2 nova_compute[232428]: 2025-11-29 08:36:02.795 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:03.340 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:03.340 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:36:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:03.340 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:36:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:04 compute-2 ceph-mon[77138]: pgmap v3001: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 17 KiB/s wr, 3 op/s
Nov 29 08:36:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 08:36:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:04 compute-2 sudo[311936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:04 compute-2 sudo[311936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:04 compute-2 sudo[311936]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:04 compute-2 sudo[311961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:04 compute-2 sudo[311961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:04 compute-2 sudo[311961]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:05 compute-2 ceph-mon[77138]: osdmap e405: 3 total, 3 up, 3 in
Nov 29 08:36:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3269295202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:06 compute-2 ceph-mon[77138]: pgmap v3003: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 20 KiB/s wr, 4 op/s
Nov 29 08:36:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2560117873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:06.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:07 compute-2 nova_compute[232428]: 2025-11-29 08:36:07.007 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:07 compute-2 nova_compute[232428]: 2025-11-29 08:36:07.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:07 compute-2 nova_compute[232428]: 2025-11-29 08:36:07.797 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:08 compute-2 ceph-mon[77138]: pgmap v3004: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 20 KiB/s wr, 16 op/s
Nov 29 08:36:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:09 compute-2 nova_compute[232428]: 2025-11-29 08:36:09.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:09 compute-2 nova_compute[232428]: 2025-11-29 08:36:09.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:36:09 compute-2 nova_compute[232428]: 2025-11-29 08:36:09.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:36:09 compute-2 nova_compute[232428]: 2025-11-29 08:36:09.234 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:36:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:10 compute-2 ceph-mon[77138]: pgmap v3005: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 984 KiB/s rd, 7.2 KiB/s wr, 56 op/s
Nov 29 08:36:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:11 compute-2 podman[311989]: 2025-11-29 08:36:11.731240105 +0000 UTC m=+0.132774104 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:36:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:12 compute-2 nova_compute[232428]: 2025-11-29 08:36:12.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:12 compute-2 nova_compute[232428]: 2025-11-29 08:36:12.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:12 compute-2 nova_compute[232428]: 2025-11-29 08:36:12.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:12 compute-2 nova_compute[232428]: 2025-11-29 08:36:12.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:12 compute-2 ceph-mon[77138]: pgmap v3006: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 511 B/s wr, 97 op/s
Nov 29 08:36:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:12 compute-2 nova_compute[232428]: 2025-11-29 08:36:12.798 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2159527247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:13 compute-2 ovn_controller[134375]: 2025-11-29T08:36:13Z|00832|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 08:36:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:14 compute-2 nova_compute[232428]: 2025-11-29 08:36:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:14 compute-2 ceph-mon[77138]: pgmap v3007: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 511 B/s wr, 97 op/s
Nov 29 08:36:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1735074097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:16 compute-2 ceph-mon[77138]: pgmap v3008: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 490 B/s wr, 99 op/s
Nov 29 08:36:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:16.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:17 compute-2 nova_compute[232428]: 2025-11-29 08:36:17.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:17 compute-2 ceph-mon[77138]: pgmap v3009: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 89 op/s
Nov 29 08:36:17 compute-2 nova_compute[232428]: 2025-11-29 08:36:17.801 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3111071294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:19 compute-2 ceph-mon[77138]: pgmap v3010: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 84 op/s
Nov 29 08:36:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3692800489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.238 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.239 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:36:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 08:36:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:36:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/810202153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.696 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:36:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.854 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.855 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4229MB free_disk=20.930076599121094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.856 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.856 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.940 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.940 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:36:20 compute-2 nova_compute[232428]: 2025-11-29 08:36:20.972 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:36:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:36:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2167834307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:21 compute-2 nova_compute[232428]: 2025-11-29 08:36:21.405 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:36:21 compute-2 nova_compute[232428]: 2025-11-29 08:36:21.412 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:36:21 compute-2 nova_compute[232428]: 2025-11-29 08:36:21.438 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:36:21 compute-2 nova_compute[232428]: 2025-11-29 08:36:21.439 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:36:21 compute-2 nova_compute[232428]: 2025-11-29 08:36:21.440 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:36:21 compute-2 podman[312064]: 2025-11-29 08:36:21.679666866 +0000 UTC m=+0.081799697 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:36:21 compute-2 ceph-mon[77138]: osdmap e406: 3 total, 3 up, 3 in
Nov 29 08:36:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/810202153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:21 compute-2 ceph-mon[77138]: pgmap v3012: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Nov 29 08:36:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1326930679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2167834307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:22 compute-2 nova_compute[232428]: 2025-11-29 08:36:22.014 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2567423183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1283798638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1127947561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:36:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:22 compute-2 nova_compute[232428]: 2025-11-29 08:36:22.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:24 compute-2 nova_compute[232428]: 2025-11-29 08:36:24.439 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:24 compute-2 nova_compute[232428]: 2025-11-29 08:36:24.440 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:36:24 compute-2 ceph-mon[77138]: pgmap v3013: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Nov 29 08:36:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/55526148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:25 compute-2 sudo[312085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:25 compute-2 sudo[312085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:25 compute-2 sudo[312085]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:25 compute-2 sudo[312110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:25 compute-2 sudo[312110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:25 compute-2 sudo[312110]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/224494438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:26 compute-2 ceph-mon[77138]: pgmap v3014: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Nov 29 08:36:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:27 compute-2 nova_compute[232428]: 2025-11-29 08:36:27.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:27 compute-2 ceph-mon[77138]: pgmap v3015: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 29 08:36:27 compute-2 nova_compute[232428]: 2025-11-29 08:36:27.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1570808786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:36:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1570808786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:36:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 08:36:29 compute-2 ceph-mon[77138]: pgmap v3016: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 877 KiB/s wr, 182 op/s
Nov 29 08:36:29 compute-2 ceph-mon[77138]: osdmap e407: 3 total, 3 up, 3 in
Nov 29 08:36:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:30 compute-2 podman[312138]: 2025-11-29 08:36:30.677655239 +0000 UTC m=+0.075702877 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd)
Nov 29 08:36:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:30.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:31.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:31 compute-2 ceph-mon[77138]: pgmap v3018: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 182 op/s
Nov 29 08:36:32 compute-2 nova_compute[232428]: 2025-11-29 08:36:32.019 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:32.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:32 compute-2 nova_compute[232428]: 2025-11-29 08:36:32.809 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:33 compute-2 ceph-mon[77138]: pgmap v3019: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 182 op/s
Nov 29 08:36:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:34.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:35.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:35 compute-2 ceph-mon[77138]: pgmap v3020: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 18 KiB/s wr, 159 op/s
Nov 29 08:36:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:37 compute-2 nova_compute[232428]: 2025-11-29 08:36:37.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:37 compute-2 nova_compute[232428]: 2025-11-29 08:36:37.811 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:37 compute-2 ceph-mon[77138]: pgmap v3021: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 19 KiB/s wr, 122 op/s
Nov 29 08:36:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:38.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:39 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 08:36:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:40 compute-2 ceph-mon[77138]: pgmap v3022: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 309 KiB/s wr, 91 op/s
Nov 29 08:36:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:41.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:42 compute-2 nova_compute[232428]: 2025-11-29 08:36:42.024 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:42 compute-2 ceph-mon[77138]: pgmap v3023: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 128 op/s
Nov 29 08:36:42 compute-2 podman[312164]: 2025-11-29 08:36:42.733847047 +0000 UTC m=+0.133248868 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:36:42 compute-2 nova_compute[232428]: 2025-11-29 08:36:42.812 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:42.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:43.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:44 compute-2 ceph-mon[77138]: pgmap v3024: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 814 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Nov 29 08:36:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:45 compute-2 sudo[312192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:45 compute-2 sudo[312192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:45 compute-2 sudo[312192]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:45 compute-2 sudo[312217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:45 compute-2 sudo[312217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:45 compute-2 sudo[312217]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:36:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:45.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:36:46 compute-2 ceph-mon[77138]: pgmap v3025: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 902 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Nov 29 08:36:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:47 compute-2 nova_compute[232428]: 2025-11-29 08:36:47.026 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:47 compute-2 nova_compute[232428]: 2025-11-29 08:36:47.814 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:48 compute-2 ceph-mon[77138]: pgmap v3026: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Nov 29 08:36:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1048150539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:36:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:50 compute-2 ceph-mon[77138]: pgmap v3027: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 08:36:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:52 compute-2 nova_compute[232428]: 2025-11-29 08:36:52.027 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:52 compute-2 ceph-mon[77138]: pgmap v3028: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 1.9 MiB/s wr, 91 op/s
Nov 29 08:36:52 compute-2 podman[312246]: 2025-11-29 08:36:52.674820817 +0000 UTC m=+0.073097466 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:36:52 compute-2 nova_compute[232428]: 2025-11-29 08:36:52.817 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:52.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:53 compute-2 ceph-mon[77138]: pgmap v3029: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 646 KiB/s wr, 48 op/s
Nov 29 08:36:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:54.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:55 compute-2 sudo[312266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:55 compute-2 sudo[312266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:55 compute-2 sudo[312266]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:55 compute-2 sudo[312292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:36:55 compute-2 sudo[312292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:55 compute-2 sudo[312292]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:55 compute-2 ceph-mon[77138]: pgmap v3030: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 645 KiB/s wr, 48 op/s
Nov 29 08:36:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:55 compute-2 sudo[312317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:55 compute-2 sudo[312317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:55 compute-2 sudo[312317]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:56 compute-2 sudo[312342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:36:56 compute-2 sudo[312342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:56 compute-2 nova_compute[232428]: 2025-11-29 08:36:56.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:36:56 compute-2 sudo[312342]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:56 compute-2 sudo[312387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:56 compute-2 sudo[312387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:56 compute-2 sudo[312387]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:56 compute-2 sudo[312412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:36:56 compute-2 sudo[312412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:56 compute-2 sudo[312412]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:56 compute-2 sudo[312437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:36:56 compute-2 sudo[312437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:56 compute-2 sudo[312437]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:56 compute-2 sudo[312462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:36:56 compute-2 sudo[312462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:36:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:36:57 compute-2 nova_compute[232428]: 2025-11-29 08:36:57.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:57 compute-2 sudo[312462]: pam_unix(sudo:session): session closed for user root
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:36:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:36:57 compute-2 nova_compute[232428]: 2025-11-29 08:36:57.819 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:58 compute-2 ceph-mon[77138]: pgmap v3031: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 08:36:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:36:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:36:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:36:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:59.298 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:36:59 compute-2 nova_compute[232428]: 2025-11-29 08:36:59.299 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:36:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:36:59.300 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:36:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:36:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:36:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:00 compute-2 ceph-mon[77138]: pgmap v3032: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 27 op/s
Nov 29 08:37:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:00.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:01 compute-2 podman[312522]: 2025-11-29 08:37:01.683169073 +0000 UTC m=+0.077708740 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:37:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:02 compute-2 nova_compute[232428]: 2025-11-29 08:37:02.032 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:02 compute-2 nova_compute[232428]: 2025-11-29 08:37:02.229 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:02 compute-2 ceph-mon[77138]: pgmap v3033: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 10 op/s
Nov 29 08:37:02 compute-2 nova_compute[232428]: 2025-11-29 08:37:02.822 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:03.341 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:03.341 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:03 compute-2 sudo[312544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:03 compute-2 sudo[312544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:03 compute-2 sudo[312544]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:37:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250441141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:03 compute-2 sudo[312569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:37:03 compute-2 sudo[312569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:03 compute-2 sudo[312569]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:03.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:04 compute-2 ceph-mon[77138]: pgmap v3034: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 08:37:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:37:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:37:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2250441141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:04 compute-2 sshd-session[312595]: Invalid user solv from 45.148.10.240 port 56644
Nov 29 08:37:04 compute-2 sshd-session[312595]: Connection closed by invalid user solv 45.148.10.240 port 56644 [preauth]
Nov 29 08:37:05 compute-2 sudo[312597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:05 compute-2 sudo[312597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:05 compute-2 sudo[312597]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:05 compute-2 sudo[312622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:05 compute-2 sudo[312622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:05 compute-2 sudo[312622]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:05.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:06 compute-2 ceph-mon[77138]: pgmap v3035: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 08:37:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:07 compute-2 nova_compute[232428]: 2025-11-29 08:37:07.035 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:07 compute-2 nova_compute[232428]: 2025-11-29 08:37:07.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:07 compute-2 nova_compute[232428]: 2025-11-29 08:37:07.825 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:07.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:08 compute-2 ceph-mon[77138]: pgmap v3036: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 2 op/s
Nov 29 08:37:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:09.301 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:09 compute-2 ceph-mon[77138]: pgmap v3037: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 2 op/s
Nov 29 08:37:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:09.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:10 compute-2 nova_compute[232428]: 2025-11-29 08:37:10.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:10 compute-2 nova_compute[232428]: 2025-11-29 08:37:10.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:37:10 compute-2 nova_compute[232428]: 2025-11-29 08:37:10.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:37:10 compute-2 nova_compute[232428]: 2025-11-29 08:37:10.370 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:37:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:11.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:12 compute-2 nova_compute[232428]: 2025-11-29 08:37:12.038 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:12 compute-2 nova_compute[232428]: 2025-11-29 08:37:12.827 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:13 compute-2 nova_compute[232428]: 2025-11-29 08:37:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:13 compute-2 ceph-mon[77138]: pgmap v3038: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Nov 29 08:37:13 compute-2 podman[312651]: 2025-11-29 08:37:13.769890942 +0000 UTC m=+0.159591878 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 08:37:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:13.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:14 compute-2 nova_compute[232428]: 2025-11-29 08:37:14.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:14 compute-2 nova_compute[232428]: 2025-11-29 08:37:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:14 compute-2 ceph-mon[77138]: pgmap v3039: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Nov 29 08:37:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:15.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:16 compute-2 ceph-mon[77138]: pgmap v3040: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Nov 29 08:37:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:17 compute-2 nova_compute[232428]: 2025-11-29 08:37:17.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2854436147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:17 compute-2 nova_compute[232428]: 2025-11-29 08:37:17.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:17 compute-2 nova_compute[232428]: 2025-11-29 08:37:17.968 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:17 compute-2 nova_compute[232428]: 2025-11-29 08:37:17.968 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:17.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:18 compute-2 nova_compute[232428]: 2025-11-29 08:37:18.324 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:37:18 compute-2 nova_compute[232428]: 2025-11-29 08:37:18.811 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:18 compute-2 nova_compute[232428]: 2025-11-29 08:37:18.813 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:18 compute-2 nova_compute[232428]: 2025-11-29 08:37:18.824 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:37:18 compute-2 nova_compute[232428]: 2025-11-29 08:37:18.824 232432 INFO nova.compute.claims [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:37:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:18 compute-2 ceph-mon[77138]: pgmap v3041: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 4.0 KiB/s wr, 3 op/s
Nov 29 08:37:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1638903916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:19 compute-2 nova_compute[232428]: 2025-11-29 08:37:19.214 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:37:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3092068088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:19 compute-2 nova_compute[232428]: 2025-11-29 08:37:19.682 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:19 compute-2 nova_compute[232428]: 2025-11-29 08:37:19.691 232432 DEBUG nova.compute.provider_tree [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:37:19 compute-2 nova_compute[232428]: 2025-11-29 08:37:19.912 232432 DEBUG nova.scheduler.client.report [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:37:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:19.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:20 compute-2 ceph-mon[77138]: pgmap v3042: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Nov 29 08:37:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3092068088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.208 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.209 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.315 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.316 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.425 232432 INFO nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:37:20 compute-2 nova_compute[232428]: 2025-11-29 08:37:20.659 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:37:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:20.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.003 232432 DEBUG nova.policy [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.514 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.515 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.515 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.515 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.516 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.733 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.735 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.736 232432 INFO nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Creating image(s)
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.764 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.796 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.824 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.829 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.935 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.937 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.939 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.939 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:37:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/70457623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:21.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.980 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:21 compute-2 nova_compute[232428]: 2025-11-29 08:37:21.985 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.028 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.041 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.207 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.209 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4169MB free_disk=20.965614318847656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.210 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.210 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:22 compute-2 ceph-mon[77138]: pgmap v3043: 305 pgs: 305 active+clean; 156 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.6 KiB/s wr, 23 op/s
Nov 29 08:37:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/70457623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.406 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.504 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 0fa70e65-aaae-493a-9c8c-db89fe6658e6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.505 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.505 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.512 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.575 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.661 232432 DEBUG nova.objects.instance [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.684 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.685 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Ensure instance console log exists: /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.686 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.686 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.687 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:22 compute-2 nova_compute[232428]: 2025-11-29 08:37:22.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:37:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1139634117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.029 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.035 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.100 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.148 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.149 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/303534460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1139634117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:23 compute-2 nova_compute[232428]: 2025-11-29 08:37:23.596 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Successfully created port: ebf4feb2-0247-40b6-a431-2f55b2f4c237 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:37:23 compute-2 podman[312914]: 2025-11-29 08:37:23.645076592 +0000 UTC m=+0.045173646 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:37:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:24 compute-2 nova_compute[232428]: 2025-11-29 08:37:24.140 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:24 compute-2 nova_compute[232428]: 2025-11-29 08:37:24.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:24 compute-2 nova_compute[232428]: 2025-11-29 08:37:24.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:37:24 compute-2 ceph-mon[77138]: pgmap v3044: 305 pgs: 305 active+clean; 156 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.9 KiB/s wr, 23 op/s
Nov 29 08:37:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1220308300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:24.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:25 compute-2 ovn_controller[134375]: 2025-11-29T08:37:25Z|00833|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:37:25 compute-2 sudo[312934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:25 compute-2 sudo[312934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:25 compute-2 sudo[312934]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:25 compute-2 sudo[312959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:25 compute-2 sudo[312959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:25 compute-2 sudo[312959]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:25 compute-2 ceph-mon[77138]: pgmap v3045: 305 pgs: 305 active+clean; 153 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 29 08:37:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:26.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:27 compute-2 nova_compute[232428]: 2025-11-29 08:37:27.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:27 compute-2 nova_compute[232428]: 2025-11-29 08:37:27.658 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:27 compute-2 nova_compute[232428]: 2025-11-29 08:37:27.834 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.013 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Successfully updated port: ebf4feb2-0247-40b6-a431-2f55b2f4c237 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.419 232432 DEBUG nova.compute.manager [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.420 232432 DEBUG nova.compute.manager [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing instance network info cache due to event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.420 232432 DEBUG oslo_concurrency.lockutils [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.420 232432 DEBUG oslo_concurrency.lockutils [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.420 232432 DEBUG nova.network.neutron [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:37:28 compute-2 nova_compute[232428]: 2025-11-29 08:37:28.579 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:37:28 compute-2 ceph-mon[77138]: pgmap v3046: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 29 08:37:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/512438549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:37:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3778855845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:37:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3778855845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:37:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:29 compute-2 nova_compute[232428]: 2025-11-29 08:37:29.231 232432 DEBUG nova.network.neutron [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:37:29 compute-2 nova_compute[232428]: 2025-11-29 08:37:29.918 232432 DEBUG nova.network.neutron [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:37:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:30 compute-2 ceph-mon[77138]: pgmap v3047: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 29 08:37:30 compute-2 nova_compute[232428]: 2025-11-29 08:37:30.238 232432 DEBUG oslo_concurrency.lockutils [req-be40db8c-7470-4c3d-abfd-fec1f028943c req-580c1b1e-fc56-4562-a555-f1da0538135b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:37:30 compute-2 nova_compute[232428]: 2025-11-29 08:37:30.239 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:37:30 compute-2 nova_compute[232428]: 2025-11-29 08:37:30.239 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:37:30 compute-2 nova_compute[232428]: 2025-11-29 08:37:30.744 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:37:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:32 compute-2 ceph-mon[77138]: pgmap v3048: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.293 232432 DEBUG nova.network.neutron [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:37:32 compute-2 podman[312988]: 2025-11-29 08:37:32.665586736 +0000 UTC m=+0.067912985 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.836 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.883 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.884 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance network_info: |[{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.886 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Start _get_guest_xml network_info=[{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.891 232432 WARNING nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.897 232432 DEBUG nova.virt.libvirt.host [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.897 232432 DEBUG nova.virt.libvirt.host [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.900 232432 DEBUG nova.virt.libvirt.host [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.901 232432 DEBUG nova.virt.libvirt.host [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.902 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.902 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.903 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.903 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.903 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.903 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.903 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.904 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.904 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.904 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.904 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.905 232432 DEBUG nova.virt.hardware [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:37:32 compute-2 nova_compute[232428]: 2025-11-29 08:37:32.908 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:37:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3173595527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:34 compute-2 nova_compute[232428]: 2025-11-29 08:37:34.030 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:34 compute-2 ceph-mon[77138]: pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 08:37:34 compute-2 nova_compute[232428]: 2025-11-29 08:37:34.601 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:34 compute-2 nova_compute[232428]: 2025-11-29 08:37:34.607 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.106772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455106910, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1654, "num_deletes": 257, "total_data_size": 3737189, "memory_usage": 3797136, "flush_reason": "Manual Compaction"}
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455568791, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2453955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64979, "largest_seqno": 66628, "table_properties": {"data_size": 2447023, "index_size": 4002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14745, "raw_average_key_size": 20, "raw_value_size": 2432972, "raw_average_value_size": 3301, "num_data_blocks": 176, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405317, "oldest_key_time": 1764405317, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 462209 microseconds, and 10055 cpu microseconds.
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:37:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:37:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3412052419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.568971) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2453955 bytes OK
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.569015) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.908456) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.908484) EVENT_LOG_v1 {"time_micros": 1764405455908477, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.908506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3729650, prev total WAL file size 3730415, number of live WAL files 2.
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.909960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2396KB)], [129(9558KB)]
Nov 29 08:37:35 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455909999, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 12241897, "oldest_snapshot_seqno": -1}
Nov 29 08:37:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9355 keys, 12080871 bytes, temperature: kUnknown
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405456293934, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 12080871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12019977, "index_size": 36397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247328, "raw_average_key_size": 26, "raw_value_size": 11855179, "raw_average_value_size": 1267, "num_data_blocks": 1389, "num_entries": 9355, "num_filter_entries": 9355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:37:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3173595527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.294958) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 12080871 bytes
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.335681) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.9 rd, 31.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.3 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 9888, records dropped: 533 output_compression: NoCompression
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.335747) EVENT_LOG_v1 {"time_micros": 1764405456335721, "job": 82, "event": "compaction_finished", "compaction_time_micros": 384056, "compaction_time_cpu_micros": 27138, "output_level": 6, "num_output_files": 1, "total_output_size": 12080871, "num_input_records": 9888, "num_output_records": 9355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405456336789, "job": 82, "event": "table_file_deletion", "file_number": 131}
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405456340001, "job": 82, "event": "table_file_deletion", "file_number": 129}
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:35.909831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.340073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.340082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.340086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.340089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:37:36.340092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.385 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.389 232432 DEBUG nova.virt.libvirt.vif [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:21Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.390 232432 DEBUG nova.network.os_vif_util [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.392 232432 DEBUG nova.network.os_vif_util [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.395 232432 DEBUG nova.objects.instance [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.601 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <uuid>0fa70e65-aaae-493a-9c8c-db89fe6658e6</uuid>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <name>instance-000000b3</name>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-523561219</nova:name>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:37:32</nova:creationTime>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <nova:port uuid="ebf4feb2-0247-40b6-a431-2f55b2f4c237">
Nov 29 08:37:36 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <system>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="serial">0fa70e65-aaae-493a-9c8c-db89fe6658e6</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="uuid">0fa70e65-aaae-493a-9c8c-db89fe6658e6</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </system>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <os>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </os>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <features>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </features>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk">
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </source>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config">
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </source>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:37:36 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:7a:9d:09"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <target dev="tapebf4feb2-02"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/console.log" append="off"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <video>
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </video>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:37:36 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:37:36 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:37:36 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:37:36 compute-2 nova_compute[232428]: </domain>
Nov 29 08:37:36 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.602 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Preparing to wait for external event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.603 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.604 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.605 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.606 232432 DEBUG nova.virt.libvirt.vif [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:21Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.606 232432 DEBUG nova.network.os_vif_util [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.608 232432 DEBUG nova.network.os_vif_util [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.608 232432 DEBUG os_vif [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.609 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.610 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.611 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.617 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.618 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebf4feb2-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.618 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebf4feb2-02, col_values=(('external_ids', {'iface-id': 'ebf4feb2-0247-40b6-a431-2f55b2f4c237', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:9d:09', 'vm-uuid': '0fa70e65-aaae-493a-9c8c-db89fe6658e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.621 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:36 compute-2 NetworkManager[48993]: <info>  [1764405456.6227] manager: (tapebf4feb2-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.624 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.634 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:36 compute-2 nova_compute[232428]: 2025-11-29 08:37:36.635 232432 INFO os_vif [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02')
Nov 29 08:37:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:37 compute-2 ceph-mon[77138]: pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 08:37:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3412052419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.537 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.538 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.538 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:7a:9d:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.539 232432 INFO nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Using config drive
Nov 29 08:37:37 compute-2 nova_compute[232428]: 2025-11-29 08:37:37.569 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:37:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1394452805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:37:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:37:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1394452805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:37:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.214 232432 INFO nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Creating config drive at /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.224 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2v3qrwpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:38 compute-2 ceph-mon[77138]: pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 474 KiB/s wr, 15 op/s
Nov 29 08:37:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1394452805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:37:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1394452805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.371 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2v3qrwpm" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.399 232432 DEBUG nova.storage.rbd_utils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.402 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.818 232432 DEBUG oslo_concurrency.processutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config 0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.821 232432 INFO nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deleting local config drive /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/disk.config because it was imported into RBD.
Nov 29 08:37:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:38 compute-2 kernel: tapebf4feb2-02: entered promiscuous mode
Nov 29 08:37:38 compute-2 NetworkManager[48993]: <info>  [1764405458.8863] manager: (tapebf4feb2-02): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 08:37:38 compute-2 ovn_controller[134375]: 2025-11-29T08:37:38Z|00834|binding|INFO|Claiming lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 for this chassis.
Nov 29 08:37:38 compute-2 ovn_controller[134375]: 2025-11-29T08:37:38Z|00835|binding|INFO|ebf4feb2-0247-40b6-a431-2f55b2f4c237: Claiming fa:16:3e:7a:9d:09 10.100.0.14
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:38 compute-2 systemd-udevd[313144]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:37:38 compute-2 systemd-machined[194747]: New machine qemu-87-instance-000000b3.
Nov 29 08:37:38 compute-2 NetworkManager[48993]: <info>  [1764405458.9312] device (tapebf4feb2-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:37:38 compute-2 NetworkManager[48993]: <info>  [1764405458.9324] device (tapebf4feb2-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.947 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:9d:09 10.100.0.14'], port_security=['fa:16:3e:7a:9d:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0fa70e65-aaae-493a-9c8c-db89fe6658e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c01a89c-f496-44c3-afa3-4720950528b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c61e493b-5131-4681-b607-cad8a707cfcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4ed0086-1dab-4f89-9d5b-dbd6a6a8243e, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ebf4feb2-0247-40b6-a431-2f55b2f4c237) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.950 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ebf4feb2-0247-40b6-a431-2f55b2f4c237 in datapath 3c01a89c-f496-44c3-afa3-4720950528b6 bound to our chassis
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.951 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c01a89c-f496-44c3-afa3-4720950528b6
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:38 compute-2 systemd[1]: Started Virtual Machine qemu-87-instance-000000b3.
Nov 29 08:37:38 compute-2 ovn_controller[134375]: 2025-11-29T08:37:38Z|00836|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 ovn-installed in OVS
Nov 29 08:37:38 compute-2 ovn_controller[134375]: 2025-11-29T08:37:38Z|00837|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 up in Southbound
Nov 29 08:37:38 compute-2 nova_compute[232428]: 2025-11-29 08:37:38.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.969 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6b76d9-39e7-4e9e-a896-e6486af3a81b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.970 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c01a89c-f1 in ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.971 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c01a89c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.972 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[381ff13c-57ea-4beb-80a6-f52bef91ceb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.972 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4de2d5-4e2e-4119-a843-66d695f154a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:38.988 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a97f1a-d1c8-46c8-9dee-9baadaac4233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.013 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[435de248-57db-4410-9cbb-5d241dfadb9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.055 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c5759822-39c8-42af-9edf-a6057edd2fac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.060 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e5eecf-be8b-4a2b-b6f2-237fd3347978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 NetworkManager[48993]: <info>  [1764405459.0616] manager: (tap3c01a89c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.103 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[866ba250-6d81-4808-92bb-214c5cd213d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.107 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfbdaea-f9ab-4a96-be53-dad3f9c22e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 NetworkManager[48993]: <info>  [1764405459.1439] device (tap3c01a89c-f0): carrier: link connected
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.150 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a986a9-fcd4-4dbb-920f-e95a21e33e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.176 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fabb90-e0fe-40b7-a7eb-30aadafe17f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c01a89c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:9d:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842541, 'reachable_time': 43544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313178, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.201 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e23be697-3a01-46c1-8b56-3a33cdeae784]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:9d5d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842541, 'tstamp': 842541}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313179, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.227 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea96f99-9801-4ee8-85cc-5baa60fcb71b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c01a89c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:9d:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842541, 'reachable_time': 43544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313180, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.267 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5b6de3-2938-4bd1-a890-6df31c393e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[157f9004-c1c8-4c02-bc62-96c23d590627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.340 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c01a89c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.340 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.341 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c01a89c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:39 compute-2 NetworkManager[48993]: <info>  [1764405459.3437] manager: (tap3c01a89c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 08:37:39 compute-2 kernel: tap3c01a89c-f0: entered promiscuous mode
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.345 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c01a89c-f0, col_values=(('external_ids', {'iface-id': '2ae168e9-4618-4303-8f12-978250c78d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:39 compute-2 ovn_controller[134375]: 2025-11-29T08:37:39Z|00838|binding|INFO|Releasing lport 2ae168e9-4618-4303-8f12-978250c78d38 from this chassis (sb_readonly=0)
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.347 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.359 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.360 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.360 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f2c70d-6dda-4f01-ab0f-d64cc2060588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.361 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-3c01a89c-f496-44c3-afa3-4720950528b6
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 3c01a89c-f496-44c3-afa3-4720950528b6
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:37:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:39.362 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'env', 'PROCESS_TAG=haproxy-3c01a89c-f496-44c3-afa3-4720950528b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c01a89c-f496-44c3-afa3-4720950528b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.695 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405459.6949406, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.696 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Started (Lifecycle Event)
Nov 29 08:37:39 compute-2 podman[313254]: 2025-11-29 08:37:39.720910279 +0000 UTC m=+0.028674063 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.944 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.950 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405459.6951911, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:37:39 compute-2 nova_compute[232428]: 2025-11-29 08:37:39.951 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Paused (Lifecycle Event)
Nov 29 08:37:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:40.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:40 compute-2 nova_compute[232428]: 2025-11-29 08:37:40.110 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:37:40 compute-2 nova_compute[232428]: 2025-11-29 08:37:40.116 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:37:40 compute-2 podman[313254]: 2025-11-29 08:37:40.12706966 +0000 UTC m=+0.434833424 container create d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:37:40 compute-2 nova_compute[232428]: 2025-11-29 08:37:40.155 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:37:40 compute-2 systemd[1]: Started libpod-conmon-d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63.scope.
Nov 29 08:37:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:37:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82a54a1df1d6c831e704092446a885fe746363f81d85a94d7e95bd20fae6815/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:37:40 compute-2 ceph-mon[77138]: pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 08:37:40 compute-2 podman[313254]: 2025-11-29 08:37:40.660579513 +0000 UTC m=+0.968343317 container init d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:37:40 compute-2 podman[313254]: 2025-11-29 08:37:40.673402513 +0000 UTC m=+0.981166307 container start d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:37:40 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [NOTICE]   (313275) : New worker (313277) forked
Nov 29 08:37:40 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [NOTICE]   (313275) : Loading success.
Nov 29 08:37:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.435 232432 DEBUG nova.compute.manager [req-73479c72-2692-4c47-8c35-d7c982683323 req-14ef15fc-3a92-4165-951e-128d2e9cf528 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.437 232432 DEBUG oslo_concurrency.lockutils [req-73479c72-2692-4c47-8c35-d7c982683323 req-14ef15fc-3a92-4165-951e-128d2e9cf528 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.438 232432 DEBUG oslo_concurrency.lockutils [req-73479c72-2692-4c47-8c35-d7c982683323 req-14ef15fc-3a92-4165-951e-128d2e9cf528 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.438 232432 DEBUG oslo_concurrency.lockutils [req-73479c72-2692-4c47-8c35-d7c982683323 req-14ef15fc-3a92-4165-951e-128d2e9cf528 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.439 232432 DEBUG nova.compute.manager [req-73479c72-2692-4c47-8c35-d7c982683323 req-14ef15fc-3a92-4165-951e-128d2e9cf528 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Processing event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.440 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.447 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405461.4473429, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.448 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Resumed (Lifecycle Event)
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.451 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.456 232432 INFO nova.virt.libvirt.driver [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance spawned successfully.
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.457 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.622 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.757 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.765 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.769 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.770 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.771 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.771 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.772 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.772 232432 DEBUG nova.virt.libvirt.driver [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:37:41 compute-2 ceph-mon[77138]: pgmap v3053: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 29 08:37:41 compute-2 nova_compute[232428]: 2025-11-29 08:37:41.978 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:37:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.049 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.206 232432 INFO nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Took 20.47 seconds to spawn the instance on the hypervisor.
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.207 232432 DEBUG nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.596 232432 INFO nova.compute.manager [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Took 23.84 seconds to build instance.
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.846 232432 DEBUG oslo_concurrency.lockutils [None req-fcf6b5e0-9bdd-4d95-a41f-edbb37e9cbdd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:42.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:42 compute-2 nova_compute[232428]: 2025-11-29 08:37:42.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:42.968 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:37:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:42.970 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:37:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:37:42.971 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.535 232432 DEBUG nova.compute.manager [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.535 232432 DEBUG oslo_concurrency.lockutils [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.536 232432 DEBUG oslo_concurrency.lockutils [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.536 232432 DEBUG oslo_concurrency.lockutils [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.536 232432 DEBUG nova.compute.manager [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:37:43 compute-2 nova_compute[232428]: 2025-11-29 08:37:43.537 232432 WARNING nova.compute.manager [req-5e9c25ab-842d-4132-83fb-a6d995438503 req-35047214-0682-4117-a74c-82dbdd68f9e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state None.
Nov 29 08:37:43 compute-2 ceph-mon[77138]: pgmap v3054: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 29 08:37:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:44.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:44 compute-2 podman[313288]: 2025-11-29 08:37:44.680350766 +0000 UTC m=+0.088374052 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:37:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:45 compute-2 sudo[313315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:45 compute-2 sudo[313315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:45 compute-2 sudo[313315]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:45 compute-2 sudo[313340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:37:45 compute-2 sudo[313340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:37:45 compute-2 sudo[313340]: pam_unix(sudo:session): session closed for user root
Nov 29 08:37:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:46 compute-2 nova_compute[232428]: 2025-11-29 08:37:46.264 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:46 compute-2 nova_compute[232428]: 2025-11-29 08:37:46.264 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:37:46 compute-2 ceph-mon[77138]: pgmap v3055: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 72 op/s
Nov 29 08:37:46 compute-2 nova_compute[232428]: 2025-11-29 08:37:46.456 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:37:46 compute-2 nova_compute[232428]: 2025-11-29 08:37:46.625 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:46.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:47 compute-2 nova_compute[232428]: 2025-11-29 08:37:47.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:48 compute-2 ceph-mon[77138]: pgmap v3056: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 08:37:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:49 compute-2 NetworkManager[48993]: <info>  [1764405469.8523] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 08:37:49 compute-2 NetworkManager[48993]: <info>  [1764405469.8533] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 08:37:49 compute-2 nova_compute[232428]: 2025-11-29 08:37:49.852 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:49 compute-2 nova_compute[232428]: 2025-11-29 08:37:49.964 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:49 compute-2 ovn_controller[134375]: 2025-11-29T08:37:49Z|00839|binding|INFO|Releasing lport 2ae168e9-4618-4303-8f12-978250c78d38 from this chassis (sb_readonly=0)
Nov 29 08:37:49 compute-2 nova_compute[232428]: 2025-11-29 08:37:49.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:50 compute-2 ceph-mon[77138]: pgmap v3057: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 29 08:37:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.009 232432 DEBUG nova.compute.manager [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.009 232432 DEBUG nova.compute.manager [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing instance network info cache due to event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.009 232432 DEBUG oslo_concurrency.lockutils [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.009 232432 DEBUG oslo_concurrency.lockutils [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.009 232432 DEBUG nova.network.neutron [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:37:51 compute-2 nova_compute[232428]: 2025-11-29 08:37:51.628 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:52 compute-2 nova_compute[232428]: 2025-11-29 08:37:52.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:52 compute-2 ceph-mon[77138]: pgmap v3058: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 08:37:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:37:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:52.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:37:53 compute-2 nova_compute[232428]: 2025-11-29 08:37:53.771 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:37:53 compute-2 nova_compute[232428]: 2025-11-29 08:37:53.810 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:37:53 compute-2 nova_compute[232428]: 2025-11-29 08:37:53.810 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:37:53 compute-2 nova_compute[232428]: 2025-11-29 08:37:53.811 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:37:53 compute-2 nova_compute[232428]: 2025-11-29 08:37:53.839 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:37:54 compute-2 nova_compute[232428]: 2025-11-29 08:37:54.003 232432 DEBUG nova.network.neutron [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updated VIF entry in instance network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:37:54 compute-2 nova_compute[232428]: 2025-11-29 08:37:54.004 232432 DEBUG nova.network.neutron [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:37:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:54 compute-2 nova_compute[232428]: 2025-11-29 08:37:54.022 232432 DEBUG oslo_concurrency.lockutils [req-5af17521-e276-4cf8-a930-788740d31897 req-dc3f5081-81b9-43c9-b170-d1f6c3c29a62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:37:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:37:54 compute-2 ceph-mon[77138]: pgmap v3059: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 08:37:54 compute-2 podman[313372]: 2025-11-29 08:37:54.670185227 +0000 UTC m=+0.066247503 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:37:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:55 compute-2 ovn_controller[134375]: 2025-11-29T08:37:55Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:9d:09 10.100.0.14
Nov 29 08:37:55 compute-2 ovn_controller[134375]: 2025-11-29T08:37:55Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:9d:09 10.100.0.14
Nov 29 08:37:55 compute-2 ceph-mon[77138]: pgmap v3060: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 08:37:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:56 compute-2 ovn_controller[134375]: 2025-11-29T08:37:56Z|00840|binding|INFO|Releasing lport 2ae168e9-4618-4303-8f12-978250c78d38 from this chassis (sb_readonly=0)
Nov 29 08:37:56 compute-2 nova_compute[232428]: 2025-11-29 08:37:56.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:56 compute-2 nova_compute[232428]: 2025-11-29 08:37:56.629 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:57 compute-2 nova_compute[232428]: 2025-11-29 08:37:57.055 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:37:57 compute-2 ceph-mon[77138]: pgmap v3061: 305 pgs: 305 active+clean; 176 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 567 KiB/s rd, 784 KiB/s wr, 33 op/s
Nov 29 08:37:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:37:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:37:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:37:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:37:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:37:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:01 compute-2 ceph-mon[77138]: pgmap v3062: 305 pgs: 305 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 08:38:01 compute-2 nova_compute[232428]: 2025-11-29 08:38:01.631 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:02 compute-2 nova_compute[232428]: 2025-11-29 08:38:02.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:02 compute-2 ceph-mon[77138]: pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:38:03 compute-2 nova_compute[232428]: 2025-11-29 08:38:03.241 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:03.343 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:03 compute-2 nova_compute[232428]: 2025-11-29 08:38:03.408 232432 INFO nova.compute.manager [None req-e4fd2d64-c304-4fc8-a317-a2348923489e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Get console output
Nov 29 08:38:03 compute-2 nova_compute[232428]: 2025-11-29 08:38:03.413 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:38:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:03.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:03 compute-2 podman[313395]: 2025-11-29 08:38:03.657507627 +0000 UTC m=+0.064341813 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd)
Nov 29 08:38:03 compute-2 sudo[313415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:03 compute-2 sudo[313415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:03 compute-2 sudo[313415]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:03 compute-2 sudo[313441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:38:03 compute-2 sudo[313441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:03 compute-2 sudo[313441]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:04 compute-2 sudo[313466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:04 compute-2 sudo[313466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:04 compute-2 sudo[313466]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:04 compute-2 sudo[313491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:38:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:04 compute-2 sudo[313491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:04 compute-2 ceph-mon[77138]: pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:38:04 compute-2 sudo[313491]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:38:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:38:05 compute-2 sudo[313548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:05 compute-2 sudo[313548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:05 compute-2 sudo[313548]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:05 compute-2 sudo[313574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:05 compute-2 sudo[313574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:05 compute-2 sudo[313574]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:06 compute-2 nova_compute[232428]: 2025-11-29 08:38:06.275 232432 INFO nova.compute.manager [None req-92f45f4a-c3f2-4576-a9f4-f1646b4509a0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Get console output
Nov 29 08:38:06 compute-2 nova_compute[232428]: 2025-11-29 08:38:06.282 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:38:06 compute-2 ceph-mon[77138]: pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:38:06 compute-2 nova_compute[232428]: 2025-11-29 08:38:06.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:07 compute-2 nova_compute[232428]: 2025-11-29 08:38:07.062 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:07 compute-2 ceph-mon[77138]: pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:38:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:08 compute-2 nova_compute[232428]: 2025-11-29 08:38:08.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:10 compute-2 ceph-mon[77138]: pgmap v3067: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Nov 29 08:38:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:10 compute-2 nova_compute[232428]: 2025-11-29 08:38:10.503 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Check if temp file /var/lib/nova/instances/tmpmdmpjaw5 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 08:38:10 compute-2 nova_compute[232428]: 2025-11-29 08:38:10.503 232432 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 08:38:10 compute-2 nova_compute[232428]: 2025-11-29 08:38:10.778 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:11 compute-2 nova_compute[232428]: 2025-11-29 08:38:11.635 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:11 compute-2 sudo[313601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:11 compute-2 sudo[313601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:11 compute-2 sudo[313601]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:11 compute-2 sudo[313627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:38:11 compute-2 sudo[313627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:11 compute-2 sudo[313627]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.067 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.219 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.219 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.219 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:38:12 compute-2 nova_compute[232428]: 2025-11-29 08:38:12.220 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:38:12 compute-2 ceph-mon[77138]: pgmap v3068: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 30 KiB/s wr, 4 op/s
Nov 29 08:38:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:38:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:38:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:13.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:13 compute-2 nova_compute[232428]: 2025-11-29 08:38:13.964 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:38:13 compute-2 nova_compute[232428]: 2025-11-29 08:38:13.985 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:38:13 compute-2 nova_compute[232428]: 2025-11-29 08:38:13.985 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:38:13 compute-2 nova_compute[232428]: 2025-11-29 08:38:13.985 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:14 compute-2 ceph-mon[77138]: pgmap v3069: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.636 232432 DEBUG nova.compute.manager [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.637 232432 DEBUG oslo_concurrency.lockutils [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.637 232432 DEBUG oslo_concurrency.lockutils [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.637 232432 DEBUG oslo_concurrency.lockutils [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.637 232432 DEBUG nova.compute.manager [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:14 compute-2 nova_compute[232428]: 2025-11-29 08:38:14.638 232432 DEBUG nova.compute.manager [req-31de2728-4bf2-466c-b992-1a4056f79ce9 req-e33bbb94-f46c-4eb8-858e-468b2841f0e1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.286 232432 INFO nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Took 3.09 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.286 232432 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.310 232432 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(95ccb129-d2dd-4ef1-89d6-364eb16926cc),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.315 232432 DEBUG nova.objects.instance [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'migration_context' on Instance uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.317 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.319 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.319 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.341 232432 DEBUG nova.virt.libvirt.vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:42Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.341 232432 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.343 232432 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.343 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 08:38:15 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:7a:9d:09"/>
Nov 29 08:38:15 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:38:15 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:38:15 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:38:15 compute-2 nova_compute[232428]:   <target dev="tapebf4feb2-02"/>
Nov 29 08:38:15 compute-2 nova_compute[232428]: </interface>
Nov 29 08:38:15 compute-2 nova_compute[232428]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.344 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 08:38:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:15.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.729 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:15 compute-2 podman[313654]: 2025-11-29 08:38:15.741520253 +0000 UTC m=+0.143273410 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.822 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.823 232432 INFO nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 08:38:15 compute-2 ceph-mon[77138]: pgmap v3070: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 08:38:15 compute-2 nova_compute[232428]: 2025-11-29 08:38:15.915 232432 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 08:38:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:38:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.419 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.420 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.818 232432 DEBUG nova.compute.manager [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.819 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.819 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.819 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.820 232432 DEBUG nova.compute.manager [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.820 232432 WARNING nova.compute.manager [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state migrating.
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.820 232432 DEBUG nova.compute.manager [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.820 232432 DEBUG nova.compute.manager [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing instance network info cache due to event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.820 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.821 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.821 232432 DEBUG nova.network.neutron [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.924 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 08:38:16 compute-2 nova_compute[232428]: 2025-11-29 08:38:16.924 232432 DEBUG nova.virt.libvirt.migration [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.041 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405497.0410693, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.041 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Paused (Lifecycle Event)
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.066 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.069 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.072 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.098 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 08:38:17 compute-2 kernel: tapebf4feb2-02 (unregistering): left promiscuous mode
Nov 29 08:38:17 compute-2 NetworkManager[48993]: <info>  [1764405497.3200] device (tapebf4feb2-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:38:17 compute-2 ovn_controller[134375]: 2025-11-29T08:38:17Z|00841|binding|INFO|Releasing lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 from this chassis (sb_readonly=0)
Nov 29 08:38:17 compute-2 ovn_controller[134375]: 2025-11-29T08:38:17Z|00842|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 down in Southbound
Nov 29 08:38:17 compute-2 ovn_controller[134375]: 2025-11-29T08:38:17Z|00843|binding|INFO|Removing iface tapebf4feb2-02 ovn-installed in OVS
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.349 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:9d:09 10.100.0.14'], port_security=['fa:16:3e:7a:9d:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '011fdddc-8681-4ece-b276-7e821dffaec6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0fa70e65-aaae-493a-9c8c-db89fe6658e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c01a89c-f496-44c3-afa3-4720950528b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c61e493b-5131-4681-b607-cad8a707cfcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4ed0086-1dab-4f89-9d5b-dbd6a6a8243e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=ebf4feb2-0247-40b6-a431-2f55b2f4c237) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.352 143801 INFO neutron.agent.ovn.metadata.agent [-] Port ebf4feb2-0247-40b6-a431-2f55b2f4c237 in datapath 3c01a89c-f496-44c3-afa3-4720950528b6 unbound from our chassis
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.355 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c01a89c-f496-44c3-afa3-4720950528b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.357 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[db28694f-10fc-4afc-9732-6383bf366e7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.358 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 namespace which is not needed anymore
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:17 compute-2 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Nov 29 08:38:17 compute-2 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b3.scope: Consumed 15.457s CPU time.
Nov 29 08:38:17 compute-2 systemd-machined[194747]: Machine qemu-87-instance-000000b3 terminated.
Nov 29 08:38:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:17.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [NOTICE]   (313275) : haproxy version is 2.8.14-c23fe91
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [NOTICE]   (313275) : path to executable is /usr/sbin/haproxy
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [WARNING]  (313275) : Exiting Master process...
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [WARNING]  (313275) : Exiting Master process...
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [ALERT]    (313275) : Current worker (313277) exited with code 143 (Terminated)
Nov 29 08:38:17 compute-2 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[313271]: [WARNING]  (313275) : All workers exited. Exiting... (0)
Nov 29 08:38:17 compute-2 systemd[1]: libpod-d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63.scope: Deactivated successfully.
Nov 29 08:38:17 compute-2 podman[313707]: 2025-11-29 08:38:17.525117361 +0000 UTC m=+0.058264994 container died d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:38:17 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63-userdata-shm.mount: Deactivated successfully.
Nov 29 08:38:17 compute-2 systemd[1]: var-lib-containers-storage-overlay-b82a54a1df1d6c831e704092446a885fe746363f81d85a94d7e95bd20fae6815-merged.mount: Deactivated successfully.
Nov 29 08:38:17 compute-2 podman[313707]: 2025-11-29 08:38:17.565403445 +0000 UTC m=+0.098551068 container cleanup d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:38:17 compute-2 systemd[1]: libpod-conmon-d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63.scope: Deactivated successfully.
Nov 29 08:38:17 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk: No such file or directory
Nov 29 08:38:17 compute-2 virtqemud[231977]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/0fa70e65-aaae-493a-9c8c-db89fe6658e6_disk: No such file or directory
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.621 232432 DEBUG nova.virt.libvirt.guest [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.621 232432 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migration operation has completed
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.622 232432 INFO nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] _post_live_migration() is started..
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.629 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.630 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.630 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Nov 29 08:38:17 compute-2 podman[313736]: 2025-11-29 08:38:17.64685895 +0000 UTC m=+0.056025084 container remove d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.654 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[44cc56cf-6d08-4d3d-9e0b-23ca7e4a27d4]: (4, ('Sat Nov 29 08:38:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 (d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63)\nd930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63\nSat Nov 29 08:38:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 (d930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63)\nd930c9f6e3bc205dbfaefb556d8ab667f69b8e0ae4f8fcd178b08538e5ad9f63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.656 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[574f201b-516a-4c85-93ab-95f213482eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.657 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c01a89c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.658 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:17 compute-2 kernel: tap3c01a89c-f0: left promiscuous mode
Nov 29 08:38:17 compute-2 nova_compute[232428]: 2025-11-29 08:38:17.680 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.684 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa50e1f-358b-4e28-ace2-9b21257cbbf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.698 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e75e70b-7910-4a8b-b429-bace13cf2688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.700 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[48b8906f-4854-4ad1-8aa9-930d117c8d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.722 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[75a65ebd-2ad1-4fcc-affe-a9f1d7b968cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842531, 'reachable_time': 16672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313765, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.725 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:38:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:17.725 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[8010f36d-aedd-42c0-9a14-459b9fd80d1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:38:17 compute-2 systemd[1]: run-netns-ovnmeta\x2d3c01a89c\x2df496\x2d44c3\x2dafa3\x2d4720950528b6.mount: Deactivated successfully.
Nov 29 08:38:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:18 compute-2 ceph-mon[77138]: pgmap v3071: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.233 232432 DEBUG nova.compute.manager [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.234 232432 DEBUG oslo_concurrency.lockutils [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.234 232432 DEBUG oslo_concurrency.lockutils [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.234 232432 DEBUG oslo_concurrency.lockutils [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.235 232432 DEBUG nova.compute.manager [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:18 compute-2 nova_compute[232428]: 2025-11-29 08:38:18.235 232432 DEBUG nova.compute.manager [req-43acac66-dcde-458f-953c-7cf26bbeeb4e req-45a8709f-9fe9-4639-a7d7-15971abd5523 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:38:19 compute-2 nova_compute[232428]: 2025-11-29 08:38:19.031 232432 DEBUG nova.network.neutron [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updated VIF entry in instance network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:38:19 compute-2 nova_compute[232428]: 2025-11-29 08:38:19.032 232432 DEBUG nova.network.neutron [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:38:19 compute-2 nova_compute[232428]: 2025-11-29 08:38:19.063 232432 DEBUG oslo_concurrency.lockutils [req-6e136b11-464e-4b45-b7a7-d5644bfa06ad req-48a0aaad-9940-4b54-bb8d-582ad78ac52c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:38:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/679710500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/572325142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:19.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.042 232432 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Activated binding for port ebf4feb2-0247-40b6-a431-2f55b2f4c237 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.043 232432 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.044 232432 DEBUG nova.virt.libvirt.vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:08Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.044 232432 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.045 232432 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.046 232432 DEBUG os_vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.048 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebf4feb2-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:38:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.055 232432 INFO os_vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02')
Nov 29 08:38:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.055 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.056 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.056 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.056 232432 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.056 232432 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deleting instance files /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6_del
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.057 232432 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deletion of /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6_del complete
Nov 29 08:38:20 compute-2 ceph-mon[77138]: pgmap v3072: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 2.0 KiB/s wr, 2 op/s
Nov 29 08:38:20 compute-2 nova_compute[232428]: 2025-11-29 08:38:20.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:21.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.073 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.297 232432 DEBUG nova.compute.manager [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.298 232432 DEBUG oslo_concurrency.lockutils [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.298 232432 DEBUG oslo_concurrency.lockutils [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.298 232432 DEBUG oslo_concurrency.lockutils [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.299 232432 DEBUG nova.compute.manager [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:22 compute-2 nova_compute[232428]: 2025-11-29 08:38:22.299 232432 WARNING nova.compute.manager [req-f1721594-6d8e-4612-930e-ebbec2a05310 req-f07bb15a-29cb-4f3f-b78c-c608e182c7e6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state migrating.
Nov 29 08:38:22 compute-2 ceph-mon[77138]: pgmap v3073: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.231 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.232 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:38:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:38:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:38:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:38:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3165303465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:23 compute-2 ceph-mon[77138]: pgmap v3074: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.735 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.939 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.940 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4222MB free_disk=20.942729949951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.940 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:23 compute-2 nova_compute[232428]: 2025-11-29 08:38:23.941 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.004 232432 INFO nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating resource usage from migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.047 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.048 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.048 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:38:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.090 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:38:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.479 232432 DEBUG nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.481 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.482 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.483 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.483 232432 DEBUG nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.484 232432 WARNING nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state migrating.
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.485 232432 DEBUG nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.485 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.486 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.486 232432 DEBUG oslo_concurrency.lockutils [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.487 232432 DEBUG nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.488 232432 WARNING nova.compute.manager [req-a2a02b5c-1afb-4f2e-b390-e236f511acc8 req-639abb02-1f54-4d63-8baa-dbb7e40cd5bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state migrating.
Nov 29 08:38:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:38:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3656114688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.561 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.569 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.591 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.623 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:38:24 compute-2 nova_compute[232428]: 2025-11-29 08:38:24.624 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3165303465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3132769282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3656114688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:25 compute-2 nova_compute[232428]: 2025-11-29 08:38:25.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:25.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:25 compute-2 nova_compute[232428]: 2025-11-29 08:38:25.625 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:38:25 compute-2 nova_compute[232428]: 2025-11-29 08:38:25.625 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:38:25 compute-2 podman[313816]: 2025-11-29 08:38:25.691337936 +0000 UTC m=+0.093373806 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:38:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4100923533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:25 compute-2 ceph-mon[77138]: pgmap v3075: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 08:38:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:26 compute-2 sudo[313837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:26 compute-2 sudo[313837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:26 compute-2 sudo[313837]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:26 compute-2 sudo[313862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:26 compute-2 sudo[313862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:26 compute-2 sudo[313862]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.426 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.427 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.427 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.445 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.445 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.446 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.446 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.446 232432 DEBUG oslo_concurrency.processutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:38:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3097264608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:38:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4142350775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:26 compute-2 nova_compute[232428]: 2025-11-29 08:38:26.917 232432 DEBUG oslo_concurrency.processutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.077 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.093 232432 WARNING nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.094 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4228MB free_disk=20.942729949951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.094 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.095 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.137 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Migration for instance 0fa70e65-aaae-493a-9c8c-db89fe6658e6 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.156 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.180 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.181 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.181 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.219 232432 DEBUG oslo_concurrency.processutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:38:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:27.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:38:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1727451957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.661 232432 DEBUG oslo_concurrency.processutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.670 232432 DEBUG nova.compute.provider_tree [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.690 232432 DEBUG nova.scheduler.client.report [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.717 232432 DEBUG nova.compute.resource_tracker [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.718 232432 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.727 232432 INFO nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Nov 29 08:38:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4142350775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:27 compute-2 ceph-mon[77138]: pgmap v3076: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Nov 29 08:38:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1727451957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.861 232432 INFO nova.scheduler.client.report [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Deleted allocation for migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc
Nov 29 08:38:27 compute-2 nova_compute[232428]: 2025-11-29 08:38:27.861 232432 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 08:38:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:38:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1001907222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:38:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:38:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1001907222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:38:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1001907222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:38:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1001907222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:38:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:29.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:30 compute-2 nova_compute[232428]: 2025-11-29 08:38:30.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:30 compute-2 ceph-mon[77138]: pgmap v3077: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 08:38:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3756476679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:38:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:31.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3583594181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:38:31 compute-2 ceph-mon[77138]: pgmap v3078: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 08:38:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:32 compute-2 nova_compute[232428]: 2025-11-29 08:38:32.078 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:32 compute-2 nova_compute[232428]: 2025-11-29 08:38:32.622 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405497.6198747, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:38:32 compute-2 nova_compute[232428]: 2025-11-29 08:38:32.622 232432 INFO nova.compute.manager [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Stopped (Lifecycle Event)
Nov 29 08:38:32 compute-2 nova_compute[232428]: 2025-11-29 08:38:32.648 232432 DEBUG nova.compute.manager [None req-c52d6ed9-4cbe-448b-970e-85f779bf0e64 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:38:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:38:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:33.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:38:33 compute-2 ceph-mon[77138]: pgmap v3079: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 08:38:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:34 compute-2 podman[313935]: 2025-11-29 08:38:34.691134877 +0000 UTC m=+0.088093853 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:38:35 compute-2 nova_compute[232428]: 2025-11-29 08:38:35.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:35.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:36 compute-2 ceph-mon[77138]: pgmap v3080: 305 pgs: 305 active+clean; 199 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 08:38:37 compute-2 nova_compute[232428]: 2025-11-29 08:38:37.080 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:38 compute-2 ceph-mon[77138]: pgmap v3081: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 151 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 08:38:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:40 compute-2 nova_compute[232428]: 2025-11-29 08:38:40.061 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:40 compute-2 ceph-mon[77138]: pgmap v3082: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 29 08:38:40 compute-2 nova_compute[232428]: 2025-11-29 08:38:40.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:40 compute-2 nova_compute[232428]: 2025-11-29 08:38:40.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:42 compute-2 nova_compute[232428]: 2025-11-29 08:38:42.083 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:42 compute-2 ceph-mon[77138]: pgmap v3083: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 08:38:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:43.588 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:38:43 compute-2 nova_compute[232428]: 2025-11-29 08:38:43.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:43.590 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:38:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:44 compute-2 ceph-mon[77138]: pgmap v3084: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 08:38:45 compute-2 nova_compute[232428]: 2025-11-29 08:38:45.064 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:46 compute-2 sudo[313962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:46 compute-2 sudo[313962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:46 compute-2 sudo[313962]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:46 compute-2 ceph-mon[77138]: pgmap v3085: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 08:38:46 compute-2 sudo[313994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:38:46 compute-2 sudo[313994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:38:46 compute-2 sudo[313994]: pam_unix(sudo:session): session closed for user root
Nov 29 08:38:46 compute-2 podman[313986]: 2025-11-29 08:38:46.452879024 +0000 UTC m=+0.146470770 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:38:47 compute-2 nova_compute[232428]: 2025-11-29 08:38:47.088 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:48 compute-2 ceph-mon[77138]: pgmap v3086: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Nov 29 08:38:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:50 compute-2 nova_compute[232428]: 2025-11-29 08:38:50.069 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:50 compute-2 ceph-mon[77138]: pgmap v3087: 305 pgs: 305 active+clean; 188 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 106 op/s
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.398002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531398913, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 981, "num_deletes": 251, "total_data_size": 2019558, "memory_usage": 2043312, "flush_reason": "Manual Compaction"}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531415972, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1332192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66633, "largest_seqno": 67609, "table_properties": {"data_size": 1327755, "index_size": 2088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9879, "raw_average_key_size": 19, "raw_value_size": 1318860, "raw_average_value_size": 2632, "num_data_blocks": 93, "num_entries": 501, "num_filter_entries": 501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405455, "oldest_key_time": 1764405455, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 18069 microseconds, and 8332 cpu microseconds.
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.416064) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1332192 bytes OK
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.416104) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.418212) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.418237) EVENT_LOG_v1 {"time_micros": 1764405531418229, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.418264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2014708, prev total WAL file size 2014708, number of live WAL files 2.
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.419622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1300KB)], [132(11MB)]
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531419715, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 13413063, "oldest_snapshot_seqno": -1}
Nov 29 08:38:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9341 keys, 11503306 bytes, temperature: kUnknown
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531543455, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 11503306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11443196, "index_size": 35677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 247745, "raw_average_key_size": 26, "raw_value_size": 11279221, "raw_average_value_size": 1207, "num_data_blocks": 1354, "num_entries": 9341, "num_filter_entries": 9341, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.543770) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 11503306 bytes
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.545350) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.3 rd, 92.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.5 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(18.7) write-amplify(8.6) OK, records in: 9856, records dropped: 515 output_compression: NoCompression
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.545405) EVENT_LOG_v1 {"time_micros": 1764405531545385, "job": 84, "event": "compaction_finished", "compaction_time_micros": 123825, "compaction_time_cpu_micros": 56318, "output_level": 6, "num_output_files": 1, "total_output_size": 11503306, "num_input_records": 9856, "num_output_records": 9341, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531545934, "job": 84, "event": "table_file_deletion", "file_number": 134}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531548829, "job": 84, "event": "table_file_deletion", "file_number": 132}
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.419436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.549065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.549075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.549078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.549081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:38:51.549085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:38:52 compute-2 nova_compute[232428]: 2025-11-29 08:38:52.091 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:38:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:38:52 compute-2 ceph-mon[77138]: pgmap v3088: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 08:38:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:38:52.592 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:38:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:53.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:54 compute-2 ceph-mon[77138]: pgmap v3089: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 08:38:55 compute-2 nova_compute[232428]: 2025-11-29 08:38:55.072 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:38:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:38:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:56 compute-2 ceph-mon[77138]: pgmap v3090: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 08:38:56 compute-2 podman[314045]: 2025-11-29 08:38:56.64529455 +0000 UTC m=+0.055517058 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:38:57 compute-2 nova_compute[232428]: 2025-11-29 08:38:57.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:38:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:57.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:38:58 compute-2 ceph-mon[77138]: pgmap v3091: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:38:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1116186033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:38:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:38:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:38:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:38:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:00 compute-2 nova_compute[232428]: 2025-11-29 08:39:00.074 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:00 compute-2 ceph-mon[77138]: pgmap v3092: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 08:39:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:01.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:01 compute-2 ceph-mon[77138]: pgmap v3093: 305 pgs: 305 active+clean; 225 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Nov 29 08:39:02 compute-2 nova_compute[232428]: 2025-11-29 08:39:02.096 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1944985667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2557532631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:03.342 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:03.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:03 compute-2 ceph-mon[77138]: pgmap v3094: 305 pgs: 305 active+clean; 225 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 869 KiB/s wr, 3 op/s
Nov 29 08:39:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:05 compute-2 nova_compute[232428]: 2025-11-29 08:39:05.077 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:05 compute-2 nova_compute[232428]: 2025-11-29 08:39:05.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:05 compute-2 podman[314068]: 2025-11-29 08:39:05.698925653 +0000 UTC m=+0.089899999 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:39:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:06 compute-2 ceph-mon[77138]: pgmap v3095: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 08:39:06 compute-2 sudo[314089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:06 compute-2 sudo[314089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:06 compute-2 sudo[314089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:06 compute-2 sudo[314114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:06 compute-2 sudo[314114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:06 compute-2 sudo[314114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:07 compute-2 nova_compute[232428]: 2025-11-29 08:39:07.099 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:08 compute-2 ceph-mon[77138]: pgmap v3096: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 08:39:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:09 compute-2 nova_compute[232428]: 2025-11-29 08:39:09.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:09 compute-2 sshd-session[314140]: Invalid user solv from 45.148.10.240 port 38756
Nov 29 08:39:09 compute-2 sshd-session[314140]: Connection closed by invalid user solv 45.148.10.240 port 38756 [preauth]
Nov 29 08:39:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.080 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:10 compute-2 ceph-mon[77138]: pgmap v3097: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.586 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.587 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.603 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.687 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.687 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.696 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.696 232432 INFO nova.compute.claims [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:39:10 compute-2 nova_compute[232428]: 2025-11-29 08:39:10.818 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:39:11 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2210203806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.277 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.283 232432 DEBUG nova.compute.provider_tree [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.301 232432 DEBUG nova.scheduler.client.report [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.327 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.328 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.411 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.411 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.450 232432 INFO nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.470 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:39:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.572 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.574 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.575 232432 INFO nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Creating image(s)
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.605 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.632 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.663 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.666 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.699 232432 DEBUG nova.policy [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '02ca6537c3444698b6f9f44f760fa337', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9414b14debe34aef968a821a9866ef08', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.737 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.738 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.738 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.739 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.762 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:11 compute-2 nova_compute[232428]: 2025-11-29 08:39:11.765 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf afaf71ac-b6be-4353-b440-5774aa20e99f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:11 compute-2 sudo[314260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:11 compute-2 sudo[314260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:11 compute-2 sudo[314260]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:12 compute-2 sudo[314285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:39:12 compute-2 sudo[314285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:12 compute-2 sudo[314285]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.062 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf afaf71ac-b6be-4353-b440-5774aa20e99f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:12 compute-2 sudo[314310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:12 compute-2 sudo[314310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:12 compute-2 sudo[314310]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.134 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.145 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] resizing rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:39:12 compute-2 ceph-mon[77138]: pgmap v3098: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 29 08:39:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2210203806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:12 compute-2 sudo[314371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:39:12 compute-2 sudo[314371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.246 232432 DEBUG nova.objects.instance [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lazy-loading 'migration_context' on Instance uuid afaf71ac-b6be-4353-b440-5774aa20e99f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.282 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.282 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Ensure instance console log exists: /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.283 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.284 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:12 compute-2 nova_compute[232428]: 2025-11-29 08:39:12.284 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:12 compute-2 sudo[314371]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:39:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:39:13 compute-2 nova_compute[232428]: 2025-11-29 08:39:13.227 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Successfully created port: f330d12c-9123-4064-9d72-7a4e4d80aba2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:39:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:13 compute-2 nova_compute[232428]: 2025-11-29 08:39:13.927 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Successfully updated port: f330d12c-9123-4064-9d72-7a4e4d80aba2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:39:13 compute-2 nova_compute[232428]: 2025-11-29 08:39:13.940 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:39:13 compute-2 nova_compute[232428]: 2025-11-29 08:39:13.940 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquired lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:39:13 compute-2 nova_compute[232428]: 2025-11-29 08:39:13.940 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.027 232432 DEBUG nova.compute.manager [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.027 232432 DEBUG nova.compute.manager [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing instance network info cache due to event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.027 232432 DEBUG oslo_concurrency.lockutils [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.112 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:39:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:14 compute-2 ceph-mon[77138]: pgmap v3099: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 972 KiB/s wr, 33 op/s
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.218 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:39:14 compute-2 nova_compute[232428]: 2025-11-29 08:39:14.219 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:39:15 compute-2 nova_compute[232428]: 2025-11-29 08:39:15.084 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:15 compute-2 nova_compute[232428]: 2025-11-29 08:39:15.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:15 compute-2 nova_compute[232428]: 2025-11-29 08:39:15.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.139 232432 DEBUG nova.network.neutron [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updating instance_info_cache with network_info: [{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.167 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Releasing lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.167 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Instance network_info: |[{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.167 232432 DEBUG oslo_concurrency.lockutils [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.168 232432 DEBUG nova.network.neutron [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.171 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Start _get_guest_xml network_info=[{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.176 232432 WARNING nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.182 232432 DEBUG nova.virt.libvirt.host [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.183 232432 DEBUG nova.virt.libvirt.host [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.186 232432 DEBUG nova.virt.libvirt.host [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.186 232432 DEBUG nova.virt.libvirt.host [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.188 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.188 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.189 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.189 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.190 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.190 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.190 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.190 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.191 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.191 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.191 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.192 232432 DEBUG nova.virt.hardware [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.195 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:16 compute-2 ceph-mon[77138]: pgmap v3100: 305 pgs: 305 active+clean; 274 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 86 op/s
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.239 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:39:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4215760688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.632 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.662 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:16 compute-2 nova_compute[232428]: 2025-11-29 08:39:16.669 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:16 compute-2 podman[314486]: 2025-11-29 08:39:16.698785729 +0000 UTC m=+0.100067705 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.102 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:39:17 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4042340585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.149 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.151 232432 DEBUG nova.virt.libvirt.vif [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-227744933-acc',id=182,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNRTg3l8L1dHfpPOjwWHH/Hu9t32/78vaH8rmEVHSvPCojauNqaAKCs+W9LsAvsvaWjBIvZcAhkYKv4Ah6TMZs7KVrS6xUBl81JWHqT17+U7D9jhHpsOXTetaASnG8QLqg==',key_name='tempest-TestSecurityGroupsBasicOps-802028870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9414b14debe34aef968a821a9866ef08',ramdisk_id='',reservation_id='r-a02ict5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-227744933',owner_user_name='tempest-TestSecurityGroupsBasicOps-227744933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:11Z,user_data=None,user_id='02ca6537c3444698b6f9f44f760fa337',uuid=afaf71ac-b6be-4353-b440-5774aa20e99f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.151 232432 DEBUG nova.network.os_vif_util [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converting VIF {"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.152 232432 DEBUG nova.network.os_vif_util [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.153 232432 DEBUG nova.objects.instance [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lazy-loading 'pci_devices' on Instance uuid afaf71ac-b6be-4353-b440-5774aa20e99f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.178 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <uuid>afaf71ac-b6be-4353-b440-5774aa20e99f</uuid>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <name>instance-000000b6</name>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094</nova:name>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:39:16</nova:creationTime>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:user uuid="02ca6537c3444698b6f9f44f760fa337">tempest-TestSecurityGroupsBasicOps-227744933-project-member</nova:user>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:project uuid="9414b14debe34aef968a821a9866ef08">tempest-TestSecurityGroupsBasicOps-227744933</nova:project>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <nova:port uuid="f330d12c-9123-4064-9d72-7a4e4d80aba2">
Nov 29 08:39:17 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <system>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="serial">afaf71ac-b6be-4353-b440-5774aa20e99f</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="uuid">afaf71ac-b6be-4353-b440-5774aa20e99f</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </system>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <os>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </os>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <features>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </features>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/afaf71ac-b6be-4353-b440-5774aa20e99f_disk">
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </source>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config">
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </source>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:39:17 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:bb:ab:25"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <target dev="tapf330d12c-91"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/console.log" append="off"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <video>
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </video>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:39:17 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:39:17 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:39:17 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:39:17 compute-2 nova_compute[232428]: </domain>
Nov 29 08:39:17 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.179 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Preparing to wait for external event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.180 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.180 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.181 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.181 232432 DEBUG nova.virt.libvirt.vif [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-227744933-acc',id=182,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNRTg3l8L1dHfpPOjwWHH/Hu9t32/78vaH8rmEVHSvPCojauNqaAKCs+W9LsAvsvaWjBIvZcAhkYKv4Ah6TMZs7KVrS6xUBl81JWHqT17+U7D9jhHpsOXTetaASnG8QLqg==',key_name='tempest-TestSecurityGroupsBasicOps-802028870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9414b14debe34aef968a821a9866ef08',ramdisk_id='',reservation_id='r-a02ict5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-227744933',owner_user_name='tempest-TestSecurityGroupsBasicOps-227744933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:11Z,user_data=None,user_id='02ca6537c3444698b6f9f44f760fa337',uuid=afaf71ac-b6be-4353-b440-5774aa20e99f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.182 232432 DEBUG nova.network.os_vif_util [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converting VIF {"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.183 232432 DEBUG nova.network.os_vif_util [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.183 232432 DEBUG os_vif [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.184 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.184 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.185 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.190 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf330d12c-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.191 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf330d12c-91, col_values=(('external_ids', {'iface-id': 'f330d12c-9123-4064-9d72-7a4e4d80aba2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:ab:25', 'vm-uuid': 'afaf71ac-b6be-4353-b440-5774aa20e99f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.192 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:17 compute-2 NetworkManager[48993]: <info>  [1764405557.1941] manager: (tapf330d12c-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.194 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.204 232432 INFO os_vif [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91')
Nov 29 08:39:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4215760688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4042340585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.259 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.260 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.260 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] No VIF found with MAC fa:16:3e:bb:ab:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.261 232432 INFO nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Using config drive
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.290 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:17.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.572 232432 DEBUG nova.network.neutron [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updated VIF entry in instance network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.573 232432 DEBUG nova.network.neutron [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updating instance_info_cache with network_info: [{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.588 232432 DEBUG oslo_concurrency.lockutils [req-8f206102-cbe4-4db1-ba6a-24266b8ca52b req-23479fd3-4b16-4859-89a7-1b74414c3f68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.620 232432 INFO nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Creating config drive at /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.625 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqubufkni execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.763 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqubufkni" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.791 232432 DEBUG nova.storage.rbd_utils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] rbd image afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.796 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.964 232432 DEBUG oslo_concurrency.processutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config afaf71ac-b6be-4353-b440-5774aa20e99f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:17 compute-2 nova_compute[232428]: 2025-11-29 08:39:17.966 232432 INFO nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Deleting local config drive /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f/disk.config because it was imported into RBD.
Nov 29 08:39:18 compute-2 kernel: tapf330d12c-91: entered promiscuous mode
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.0404] manager: (tapf330d12c-91): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 08:39:18 compute-2 ovn_controller[134375]: 2025-11-29T08:39:18Z|00844|binding|INFO|Claiming lport f330d12c-9123-4064-9d72-7a4e4d80aba2 for this chassis.
Nov 29 08:39:18 compute-2 ovn_controller[134375]: 2025-11-29T08:39:18Z|00845|binding|INFO|f330d12c-9123-4064-9d72-7a4e4d80aba2: Claiming fa:16:3e:bb:ab:25 10.100.0.4
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.069 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:ab:25 10.100.0.4'], port_security=['fa:16:3e:bb:ab:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'afaf71ac-b6be-4353-b440-5774aa20e99f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9414b14debe34aef968a821a9866ef08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99801b98-6b01-4bc4-9e51-7198233046e3 d1abda7c-cf5b-4664-89da-cbad37fde51d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b67cadd-01fa-4da3-a77e-31138559e78b, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=f330d12c-9123-4064-9d72-7a4e4d80aba2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.072 143801 INFO neutron.agent.ovn.metadata.agent [-] Port f330d12c-9123-4064-9d72-7a4e4d80aba2 in datapath 6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b bound to our chassis
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.076 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b
Nov 29 08:39:18 compute-2 systemd-udevd[314626]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:39:18 compute-2 systemd-machined[194747]: New machine qemu-88-instance-000000b6.
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.098 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5a102a29-5842-4101-945e-c1f78362e953]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.100 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d6fdfd7-f1 in ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.103 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d6fdfd7-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.103 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf3d635-e414-4b5b-91a6-4f6f7454b8ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.104 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[84328325-1344-4154-a1bb-179068331cee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.1055] device (tapf330d12c-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.1072] device (tapf330d12c-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.122 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[1490eb32-a1b3-4be5-9c3d-b40215db1605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 ovn_controller[134375]: 2025-11-29T08:39:18Z|00846|binding|INFO|Setting lport f330d12c-9123-4064-9d72-7a4e4d80aba2 ovn-installed in OVS
Nov 29 08:39:18 compute-2 ovn_controller[134375]: 2025-11-29T08:39:18Z|00847|binding|INFO|Setting lport f330d12c-9123-4064-9d72-7a4e4d80aba2 up in Southbound
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.127 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 systemd[1]: Started Virtual Machine qemu-88-instance-000000b6.
Nov 29 08:39:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.151 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b9087822-792a-454b-86c8-15b61ce4b871]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.192 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb2bd78-49c9-425a-8440-79751c6120c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.199 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7497b409-6d90-4b92-b979-e8a49f05f329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.2021] manager: (tap6d6fdfd7-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 08:39:18 compute-2 ceph-mon[77138]: pgmap v3101: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.243 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dc871337-7c84-4998-8f66-8bb1c51a1df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.247 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3595ef0b-f332-4b1f-8ba5-3c062b297ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.2751] device (tap6d6fdfd7-f0): carrier: link connected
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.278 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d97a2571-a72e-4a4a-9de7-5e74a072d497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.300 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbc0baa-333d-4157-bb58-c2107a09f9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d6fdfd7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:c2:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852454, 'reachable_time': 17885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314660, 'error': None, 'target': 'ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.315 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cb97b0f1-cfbf-466b-ba33-7ddfa33cbc47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:c2d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852454, 'tstamp': 852454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314661, 'error': None, 'target': 'ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.331 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0889127c-42c7-4f16-85d2-beaa2f146c0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d6fdfd7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:c2:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852454, 'reachable_time': 17885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314662, 'error': None, 'target': 'ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.354 232432 DEBUG nova.compute.manager [req-19b29338-695e-490a-a48c-2db7af4a9ef6 req-5e7bc470-1bac-4891-8927-30f0e9416d05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.354 232432 DEBUG oslo_concurrency.lockutils [req-19b29338-695e-490a-a48c-2db7af4a9ef6 req-5e7bc470-1bac-4891-8927-30f0e9416d05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.354 232432 DEBUG oslo_concurrency.lockutils [req-19b29338-695e-490a-a48c-2db7af4a9ef6 req-5e7bc470-1bac-4891-8927-30f0e9416d05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.355 232432 DEBUG oslo_concurrency.lockutils [req-19b29338-695e-490a-a48c-2db7af4a9ef6 req-5e7bc470-1bac-4891-8927-30f0e9416d05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.355 232432 DEBUG nova.compute.manager [req-19b29338-695e-490a-a48c-2db7af4a9ef6 req-5e7bc470-1bac-4891-8927-30f0e9416d05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Processing event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.367 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e79a3ad4-e4a6-45a1-a483-f035029860cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.434 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[08b5d609-48b4-49af-92ae-5029f1c24e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.436 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d6fdfd7-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.436 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.436 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d6fdfd7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.438 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 NetworkManager[48993]: <info>  [1764405558.4398] manager: (tap6d6fdfd7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 08:39:18 compute-2 kernel: tap6d6fdfd7-f0: entered promiscuous mode
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.442 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.443 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d6fdfd7-f0, col_values=(('external_ids', {'iface-id': 'e2336561-7e75-44b8-8f47-e158ca6a7116'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 ovn_controller[134375]: 2025-11-29T08:39:18Z|00848|binding|INFO|Releasing lport e2336561-7e75-44b8-8f47-e158ca6a7116 from this chassis (sb_readonly=0)
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.473 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.475 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[184fad40-7e4f-4536-8261-084f5e7646be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.476 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b.pid.haproxy
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:39:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:18.479 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'env', 'PROCESS_TAG=haproxy-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.739 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405558.738517, afaf71ac-b6be-4353-b440-5774aa20e99f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.739 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] VM Started (Lifecycle Event)
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.741 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.745 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.750 232432 INFO nova.virt.libvirt.driver [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Instance spawned successfully.
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.751 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.774 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.779 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.784 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.784 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.785 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.785 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.785 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.786 232432 DEBUG nova.virt.libvirt.driver [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.808 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.809 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405558.7395627, afaf71ac-b6be-4353-b440-5774aa20e99f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.809 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] VM Paused (Lifecycle Event)
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.832 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.835 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405558.7441552, afaf71ac-b6be-4353-b440-5774aa20e99f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.835 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] VM Resumed (Lifecycle Event)
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.862 232432 INFO nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Took 7.29 seconds to spawn the instance on the hypervisor.
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.862 232432 DEBUG nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.863 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:39:18 compute-2 podman[314736]: 2025-11-29 08:39:18.866995816 +0000 UTC m=+0.048199711 container create 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.874 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.908 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:39:18 compute-2 systemd[1]: Started libpod-conmon-08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070.scope.
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.925 232432 INFO nova.compute.manager [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Took 8.27 seconds to build instance.
Nov 29 08:39:18 compute-2 podman[314736]: 2025-11-29 08:39:18.842637589 +0000 UTC m=+0.023841504 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:39:18 compute-2 sudo[314749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:18 compute-2 nova_compute[232428]: 2025-11-29 08:39:18.943 232432 DEBUG oslo_concurrency.lockutils [None req-98e810e6-fda9-44f1-8abf-5c4bba798bdf 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:18 compute-2 sudo[314749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:18 compute-2 sudo[314749]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:18 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:39:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e4c87fbfc234d2939d890466fd3d4a960dcdcff5a3b391f60cd67eaef1002/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:39:18 compute-2 podman[314736]: 2025-11-29 08:39:18.98213385 +0000 UTC m=+0.163337765 container init 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:39:18 compute-2 podman[314736]: 2025-11-29 08:39:18.987663442 +0000 UTC m=+0.168867327 container start 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:39:19 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [NOTICE]   (314801) : New worker (314805) forked
Nov 29 08:39:19 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [NOTICE]   (314801) : Loading success.
Nov 29 08:39:19 compute-2 sudo[314779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:39:19 compute-2 sudo[314779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:19 compute-2 sudo[314779]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2077211369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:39:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:39:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:20 compute-2 ceph-mon[77138]: pgmap v3102: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 08:39:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1233993549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.451 232432 DEBUG nova.compute.manager [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.452 232432 DEBUG oslo_concurrency.lockutils [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.452 232432 DEBUG oslo_concurrency.lockutils [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.452 232432 DEBUG oslo_concurrency.lockutils [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.452 232432 DEBUG nova.compute.manager [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] No waiting events found dispatching network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:39:20 compute-2 nova_compute[232428]: 2025-11-29 08:39:20.453 232432 WARNING nova.compute.manager [req-e818a226-d945-4b55-89f7-6914353e06b0 req-57e65531-b96c-47ad-996f-4e8c50e4e5a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received unexpected event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 for instance with vm_state active and task_state None.
Nov 29 08:39:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:21.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:22 compute-2 nova_compute[232428]: 2025-11-29 08:39:22.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:22 compute-2 nova_compute[232428]: 2025-11-29 08:39:22.193 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:22 compute-2 ceph-mon[77138]: pgmap v3103: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 29 08:39:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:22.504 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:39:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:22.508 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:39:22 compute-2 nova_compute[232428]: 2025-11-29 08:39:22.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.231 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.231 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:23.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:23 compute-2 NetworkManager[48993]: <info>  [1764405563.5901] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 08:39:23 compute-2 NetworkManager[48993]: <info>  [1764405563.5927] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:39:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3744279951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.700 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:23 compute-2 ovn_controller[134375]: 2025-11-29T08:39:23Z|00849|binding|INFO|Releasing lport e2336561-7e75-44b8-8f47-e158ca6a7116 from this chassis (sb_readonly=0)
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.710 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.730 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.786 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.787 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.938 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.939 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4047MB free_disk=20.90093231201172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.940 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:23 compute-2 nova_compute[232428]: 2025-11-29 08:39:23.940 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.075 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance afaf71ac-b6be-4353-b440-5774aa20e99f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.078 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.078 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:39:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.168 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.239 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.240 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.256 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:39:24 compute-2 ceph-mon[77138]: pgmap v3104: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 29 08:39:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3744279951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.301 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.360 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.458 232432 DEBUG nova.compute.manager [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.459 232432 DEBUG nova.compute.manager [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing instance network info cache due to event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.459 232432 DEBUG oslo_concurrency.lockutils [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.460 232432 DEBUG oslo_concurrency.lockutils [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.460 232432 DEBUG nova.network.neutron [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:39:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:39:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/164262898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.794 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.800 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.815 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.835 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:39:24 compute-2 nova_compute[232428]: 2025-11-29 08:39:24.836 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/164262898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:25 compute-2 nova_compute[232428]: 2025-11-29 08:39:25.826 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:25 compute-2 nova_compute[232428]: 2025-11-29 08:39:25.858 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:39:25 compute-2 nova_compute[232428]: 2025-11-29 08:39:25.858 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:39:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:26 compute-2 ceph-mon[77138]: pgmap v3105: 305 pgs: 305 active+clean; 309 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 183 op/s
Nov 29 08:39:26 compute-2 nova_compute[232428]: 2025-11-29 08:39:26.485 232432 DEBUG nova.network.neutron [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updated VIF entry in instance network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:39:26 compute-2 nova_compute[232428]: 2025-11-29 08:39:26.486 232432 DEBUG nova.network.neutron [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updating instance_info_cache with network_info: [{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:39:26 compute-2 nova_compute[232428]: 2025-11-29 08:39:26.527 232432 DEBUG oslo_concurrency.lockutils [req-62e01155-8d06-4993-9219-1cb1e600f30e req-7b101122-9a4c-45c6-a0f5-13c6b4910eae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:39:26 compute-2 sudo[314866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:26 compute-2 sudo[314866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:26 compute-2 sudo[314866]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:26 compute-2 sudo[314897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:26 compute-2 sudo[314897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:26 compute-2 podman[314890]: 2025-11-29 08:39:26.831154524 +0000 UTC m=+0.076798520 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 08:39:26 compute-2 sudo[314897]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:27 compute-2 nova_compute[232428]: 2025-11-29 08:39:27.105 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:27 compute-2 nova_compute[232428]: 2025-11-29 08:39:27.195 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2715970553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:39:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1890873065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:39:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:39:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1890873065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:39:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:28 compute-2 ceph-mon[77138]: pgmap v3106: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Nov 29 08:39:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/707917604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1890873065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:39:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1890873065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:39:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:29.512 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:30 compute-2 ceph-mon[77138]: pgmap v3107: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 29 08:39:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:32 compute-2 nova_compute[232428]: 2025-11-29 08:39:32.113 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:32 compute-2 nova_compute[232428]: 2025-11-29 08:39:32.196 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:32 compute-2 ovn_controller[134375]: 2025-11-29T08:39:32Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:ab:25 10.100.0.4
Nov 29 08:39:32 compute-2 ovn_controller[134375]: 2025-11-29T08:39:32Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:ab:25 10.100.0.4
Nov 29 08:39:32 compute-2 ceph-mon[77138]: pgmap v3108: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 29 08:39:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:33 compute-2 ceph-mon[77138]: pgmap v3109: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 08:39:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:35.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:36 compute-2 ceph-mon[77138]: pgmap v3110: 305 pgs: 305 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Nov 29 08:39:36 compute-2 podman[314940]: 2025-11-29 08:39:36.725848925 +0000 UTC m=+0.105132772 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:39:37 compute-2 nova_compute[232428]: 2025-11-29 08:39:37.116 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:37 compute-2 nova_compute[232428]: 2025-11-29 08:39:37.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:38 compute-2 ceph-mon[77138]: pgmap v3111: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 3.0 MiB/s wr, 106 op/s
Nov 29 08:39:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:40 compute-2 ceph-mon[77138]: pgmap v3112: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Nov 29 08:39:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:41.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:42 compute-2 nova_compute[232428]: 2025-11-29 08:39:42.119 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:42 compute-2 nova_compute[232428]: 2025-11-29 08:39:42.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:42 compute-2 ceph-mon[77138]: pgmap v3113: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 214 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Nov 29 08:39:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:43.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:44 compute-2 ceph-mon[77138]: pgmap v3114: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 213 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 29 08:39:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2838854134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:45.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.603 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.603 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.603 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.603 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.604 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.605 232432 INFO nova.compute.manager [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Terminating instance
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.606 232432 DEBUG nova.compute.manager [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:39:45 compute-2 kernel: tapf330d12c-91 (unregistering): left promiscuous mode
Nov 29 08:39:45 compute-2 NetworkManager[48993]: <info>  [1764405585.6808] device (tapf330d12c-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.690 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:45 compute-2 ovn_controller[134375]: 2025-11-29T08:39:45Z|00850|binding|INFO|Releasing lport f330d12c-9123-4064-9d72-7a4e4d80aba2 from this chassis (sb_readonly=0)
Nov 29 08:39:45 compute-2 ovn_controller[134375]: 2025-11-29T08:39:45Z|00851|binding|INFO|Setting lport f330d12c-9123-4064-9d72-7a4e4d80aba2 down in Southbound
Nov 29 08:39:45 compute-2 ovn_controller[134375]: 2025-11-29T08:39:45Z|00852|binding|INFO|Removing iface tapf330d12c-91 ovn-installed in OVS
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.694 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:45.699 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:ab:25 10.100.0.4'], port_security=['fa:16:3e:bb:ab:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'afaf71ac-b6be-4353-b440-5774aa20e99f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9414b14debe34aef968a821a9866ef08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99801b98-6b01-4bc4-9e51-7198233046e3 d1abda7c-cf5b-4664-89da-cbad37fde51d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b67cadd-01fa-4da3-a77e-31138559e78b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=f330d12c-9123-4064-9d72-7a4e4d80aba2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:39:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:45.700 143801 INFO neutron.agent.ovn.metadata.agent [-] Port f330d12c-9123-4064-9d72-7a4e4d80aba2 in datapath 6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b unbound from our chassis
Nov 29 08:39:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:45.702 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:39:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:45.704 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36a1a981-11a1-4247-a04e-c7a71784cba5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:45.705 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b namespace which is not needed anymore
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.727 232432 DEBUG nova.compute.manager [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.728 232432 DEBUG nova.compute.manager [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing instance network info cache due to event network-changed-f330d12c-9123-4064-9d72-7a4e4d80aba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.728 232432 DEBUG oslo_concurrency.lockutils [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.729 232432 DEBUG oslo_concurrency.lockutils [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.729 232432 DEBUG nova.network.neutron [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Refreshing network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:45 compute-2 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Nov 29 08:39:45 compute-2 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b6.scope: Consumed 14.630s CPU time.
Nov 29 08:39:45 compute-2 systemd-machined[194747]: Machine qemu-88-instance-000000b6 terminated.
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.849 232432 INFO nova.virt.libvirt.driver [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Instance destroyed successfully.
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.849 232432 DEBUG nova.objects.instance [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lazy-loading 'resources' on Instance uuid afaf71ac-b6be-4353-b440-5774aa20e99f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.865 232432 DEBUG nova.virt.libvirt.vif [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-227744933-access_point-1086270094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-227744933-acc',id=182,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNRTg3l8L1dHfpPOjwWHH/Hu9t32/78vaH8rmEVHSvPCojauNqaAKCs+W9LsAvsvaWjBIvZcAhkYKv4Ah6TMZs7KVrS6xUBl81JWHqT17+U7D9jhHpsOXTetaASnG8QLqg==',key_name='tempest-TestSecurityGroupsBasicOps-802028870',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9414b14debe34aef968a821a9866ef08',ramdisk_id='',reservation_id='r-a02ict5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-227744933',owner_user_name='tempest-TestSecurityGroupsBasicOps-227744933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:18Z,user_data=None,user_id='02ca6537c3444698b6f9f44f760fa337',uuid=afaf71ac-b6be-4353-b440-5774aa20e99f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.865 232432 DEBUG nova.network.os_vif_util [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converting VIF {"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.866 232432 DEBUG nova.network.os_vif_util [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.866 232432 DEBUG os_vif [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:45 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [NOTICE]   (314801) : haproxy version is 2.8.14-c23fe91
Nov 29 08:39:45 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [NOTICE]   (314801) : path to executable is /usr/sbin/haproxy
Nov 29 08:39:45 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [ALERT]    (314801) : Current worker (314805) exited with code 143 (Terminated)
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.869 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf330d12c-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:45 compute-2 neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b[314774]: [WARNING]  (314801) : All workers exited. Exiting... (0)
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:39:45 compute-2 systemd[1]: libpod-08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070.scope: Deactivated successfully.
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.875 232432 INFO os_vif [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:ab:25,bridge_name='br-int',has_traffic_filtering=True,id=f330d12c-9123-4064-9d72-7a4e4d80aba2,network=Network(6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf330d12c-91')
Nov 29 08:39:45 compute-2 podman[314988]: 2025-11-29 08:39:45.879665979 +0000 UTC m=+0.053812605 container died 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:39:45 compute-2 systemd[1]: var-lib-containers-storage-overlay-f04e4c87fbfc234d2939d890466fd3d4a960dcdcff5a3b391f60cd67eaef1002-merged.mount: Deactivated successfully.
Nov 29 08:39:45 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070-userdata-shm.mount: Deactivated successfully.
Nov 29 08:39:45 compute-2 podman[314988]: 2025-11-29 08:39:45.92531597 +0000 UTC m=+0.099462596 container cleanup 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:39:45 compute-2 systemd[1]: libpod-conmon-08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070.scope: Deactivated successfully.
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.965 232432 DEBUG nova.compute.manager [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-unplugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.966 232432 DEBUG oslo_concurrency.lockutils [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.966 232432 DEBUG oslo_concurrency.lockutils [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.966 232432 DEBUG oslo_concurrency.lockutils [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.967 232432 DEBUG nova.compute.manager [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] No waiting events found dispatching network-vif-unplugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:39:45 compute-2 nova_compute[232428]: 2025-11-29 08:39:45.967 232432 DEBUG nova.compute.manager [req-2264272a-f40a-42af-804f-f937692adb1d req-dc8153f7-6f94-4e31-93b9-e64cd1fd3ab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-unplugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:39:46 compute-2 podman[315042]: 2025-11-29 08:39:46.01818437 +0000 UTC m=+0.060921696 container remove 08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.026 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bd28eb57-95aa-42a9-a362-bc47a6f18d32]: (4, ('Sat Nov 29 08:39:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b (08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070)\n08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070\nSat Nov 29 08:39:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b (08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070)\n08cc7efb16632bced7ddca641b4bfb836d2a822d2ba487b9f27d57053bec7070\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.028 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d670959-644e-4b0f-8b6e-75272ffe2843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.030 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d6fdfd7-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.031 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:46 compute-2 kernel: tap6d6fdfd7-f0: left promiscuous mode
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.046 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.050 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7b18b319-6db9-435d-a4c6-d93e413cec06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.068 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a3775fdc-03d1-45e8-a130-4c564bea1b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.070 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f9470625-473c-40ca-9164-35e8640c2007]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.089 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad61252-3e9c-40c5-9b43-ab4d40c7e96c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852445, 'reachable_time': 29386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315061, 'error': None, 'target': 'ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 systemd[1]: run-netns-ovnmeta\x2d6d6fdfd7\x2df417\x2d4c7d\x2d9e9e\x2da27ac776d11b.mount: Deactivated successfully.
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.096 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:39:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:39:46.097 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e4c501-7bd0-4cb6-9b3d-3e0e06acbf11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:39:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:46 compute-2 ceph-mon[77138]: pgmap v3115: 305 pgs: 305 active+clean; 314 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.375 232432 INFO nova.virt.libvirt.driver [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Deleting instance files /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f_del
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.376 232432 INFO nova.virt.libvirt.driver [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Deletion of /var/lib/nova/instances/afaf71ac-b6be-4353-b440-5774aa20e99f_del complete
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.437 232432 INFO nova.compute.manager [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.438 232432 DEBUG oslo.service.loopingcall [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.438 232432 DEBUG nova.compute.manager [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.438 232432 DEBUG nova.network.neutron [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.803 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:46 compute-2 nova_compute[232428]: 2025-11-29 08:39:46.990 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:46 compute-2 sudo[315063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:46 compute-2 sudo[315063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:47 compute-2 sudo[315063]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:47 compute-2 sudo[315089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:39:47 compute-2 sudo[315089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:39:47 compute-2 sudo[315089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:47 compute-2 podman[315087]: 2025-11-29 08:39:47.132434798 +0000 UTC m=+0.117046414 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.426 232432 DEBUG nova.network.neutron [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updated VIF entry in instance network info cache for port f330d12c-9123-4064-9d72-7a4e4d80aba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.427 232432 DEBUG nova.network.neutron [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updating instance_info_cache with network_info: [{"id": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "address": "fa:16:3e:bb:ab:25", "network": {"id": "6d6fdfd7-f417-4c7d-9e9e-a27ac776d11b", "bridge": "br-int", "label": "tempest-network-smoke--1952644977", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9414b14debe34aef968a821a9866ef08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf330d12c-91", "ovs_interfaceid": "f330d12c-9123-4064-9d72-7a4e4d80aba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.457 232432 DEBUG nova.network.neutron [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.475 232432 DEBUG oslo_concurrency.lockutils [req-1b149635-fa11-4cb3-96cb-858833b86d9f req-a189541a-1b24-4dec-b36d-f423a7ec518a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-afaf71ac-b6be-4353-b440-5774aa20e99f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.487 232432 INFO nova.compute.manager [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Took 1.05 seconds to deallocate network for instance.
Nov 29 08:39:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:47.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.561 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.562 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.637 232432 DEBUG oslo_concurrency.processutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:39:47 compute-2 nova_compute[232428]: 2025-11-29 08:39:47.833 232432 DEBUG nova.compute.manager [req-efa3e09f-9416-4cd0-b575-5ce2e8e6f16b req-d6920b8b-03db-42c4-a6c5-6c9532cb055a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-deleted-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:39:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1866632336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.197 232432 DEBUG oslo_concurrency.processutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.202 232432 DEBUG nova.compute.manager [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.204 232432 DEBUG oslo_concurrency.lockutils [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.205 232432 DEBUG oslo_concurrency.lockutils [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.205 232432 DEBUG oslo_concurrency.lockutils [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.206 232432 DEBUG nova.compute.manager [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] No waiting events found dispatching network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.206 232432 WARNING nova.compute.manager [req-72c8f8c5-7ff6-4c8e-a55f-fa46e51ce0ac req-44e66ac2-36d2-428a-aa0c-0ef14846e830 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Received unexpected event network-vif-plugged-f330d12c-9123-4064-9d72-7a4e4d80aba2 for instance with vm_state deleted and task_state None.
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.214 232432 DEBUG nova.compute.provider_tree [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.247 232432 DEBUG nova.scheduler.client.report [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.264 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.319 232432 INFO nova.scheduler.client.report [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Deleted allocations for instance afaf71ac-b6be-4353-b440-5774aa20e99f
Nov 29 08:39:48 compute-2 ceph-mon[77138]: pgmap v3116: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 761 KiB/s wr, 64 op/s
Nov 29 08:39:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1866632336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:39:48 compute-2 nova_compute[232428]: 2025-11-29 08:39:48.397 232432 DEBUG oslo_concurrency.lockutils [None req-82caa458-2daa-45c9-8cb0-0a9c7633212c 02ca6537c3444698b6f9f44f760fa337 9414b14debe34aef968a821a9866ef08 - - default default] Lock "afaf71ac-b6be-4353-b440-5774aa20e99f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:39:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:49.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:50 compute-2 ceph-mon[77138]: pgmap v3117: 305 pgs: 305 active+clean; 233 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 43 op/s
Nov 29 08:39:50 compute-2 nova_compute[232428]: 2025-11-29 08:39:50.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:52 compute-2 nova_compute[232428]: 2025-11-29 08:39:52.123 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:52 compute-2 ceph-mon[77138]: pgmap v3118: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 08:39:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:53.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:39:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:39:54 compute-2 ceph-mon[77138]: pgmap v3119: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 08:39:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:55 compute-2 nova_compute[232428]: 2025-11-29 08:39:55.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:56 compute-2 ceph-mon[77138]: pgmap v3120: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 08:39:57 compute-2 nova_compute[232428]: 2025-11-29 08:39:57.125 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:39:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:57 compute-2 podman[315166]: 2025-11-29 08:39:57.679403427 +0000 UTC m=+0.075597903 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:39:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:39:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:39:58 compute-2 ceph-mon[77138]: pgmap v3121: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 3.5 KiB/s wr, 38 op/s
Nov 29 08:39:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:39:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:39:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:39:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:39:59 compute-2 ceph-mon[77138]: pgmap v3122: 305 pgs: 305 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Nov 29 08:40:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:01 compute-2 nova_compute[232428]: 2025-11-29 08:40:01.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:01 compute-2 nova_compute[232428]: 2025-11-29 08:40:01.133 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405585.8464348, afaf71ac-b6be-4353-b440-5774aa20e99f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:40:01 compute-2 nova_compute[232428]: 2025-11-29 08:40:01.133 232432 INFO nova.compute.manager [-] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] VM Stopped (Lifecycle Event)
Nov 29 08:40:01 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:40:01 compute-2 nova_compute[232428]: 2025-11-29 08:40:01.193 232432 DEBUG nova.compute.manager [None req-5df3b3d8-7bce-4073-9eef-e363b4a51bba - - - - - -] [instance: afaf71ac-b6be-4353-b440-5774aa20e99f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:40:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:02 compute-2 nova_compute[232428]: 2025-11-29 08:40:02.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:02 compute-2 ceph-mon[77138]: pgmap v3123: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 29 08:40:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2634890051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:03.343 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:03.343 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:03.344 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:40:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:40:04 compute-2 ceph-mon[77138]: pgmap v3124: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 597 B/s wr, 17 op/s
Nov 29 08:40:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:06 compute-2 nova_compute[232428]: 2025-11-29 08:40:06.134 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:06 compute-2 nova_compute[232428]: 2025-11-29 08:40:06.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:06 compute-2 ceph-mon[77138]: pgmap v3125: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:40:07 compute-2 nova_compute[232428]: 2025-11-29 08:40:07.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:07 compute-2 sudo[315190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:07 compute-2 sudo[315190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:07 compute-2 sudo[315190]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:07 compute-2 sudo[315216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:07 compute-2 sudo[315216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:07 compute-2 sudo[315216]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:07 compute-2 podman[315214]: 2025-11-29 08:40:07.329426852 +0000 UTC m=+0.089856558 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:40:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:09 compute-2 ceph-mon[77138]: pgmap v3126: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:40:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:40:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:40:10 compute-2 ceph-mon[77138]: pgmap v3127: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:40:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/859808057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:11 compute-2 nova_compute[232428]: 2025-11-29 08:40:11.139 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:11 compute-2 nova_compute[232428]: 2025-11-29 08:40:11.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:12 compute-2 nova_compute[232428]: 2025-11-29 08:40:12.133 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:12 compute-2 ceph-mon[77138]: pgmap v3128: 305 pgs: 305 active+clean; 128 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 37 KiB/s wr, 24 op/s
Nov 29 08:40:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:14 compute-2 ceph-mon[77138]: pgmap v3129: 305 pgs: 305 active+clean; 128 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 37 KiB/s wr, 22 op/s
Nov 29 08:40:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:15.987 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:40:15 compute-2 nova_compute[232428]: 2025-11-29 08:40:15.988 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:15.989 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.200 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.200 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:40:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.293 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.293 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.293 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:16 compute-2 nova_compute[232428]: 2025-11-29 08:40:16.294 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:16 compute-2 ceph-mon[77138]: pgmap v3130: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:40:17 compute-2 nova_compute[232428]: 2025-11-29 08:40:17.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:17 compute-2 podman[315267]: 2025-11-29 08:40:17.716863036 +0000 UTC m=+0.113169412 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:40:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:18 compute-2 ceph-mon[77138]: pgmap v3131: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:40:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3196310453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4254585785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:19 compute-2 sudo[315294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:19 compute-2 sudo[315294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:19 compute-2 sudo[315294]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:19 compute-2 sudo[315319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:40:19 compute-2 sudo[315319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:19 compute-2 sudo[315319]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:19 compute-2 sudo[315344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:19 compute-2 sudo[315344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:19 compute-2 sudo[315344]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:19 compute-2 sudo[315369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:40:19 compute-2 sudo[315369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2402171271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:19 compute-2 ceph-mon[77138]: pgmap v3132: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:40:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4042749142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:19 compute-2 sudo[315369]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:40:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:40:21 compute-2 nova_compute[232428]: 2025-11-29 08:40:21.142 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:21 compute-2 nova_compute[232428]: 2025-11-29 08:40:21.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:40:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:40:22 compute-2 nova_compute[232428]: 2025-11-29 08:40:22.139 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:22 compute-2 ceph-mon[77138]: pgmap v3133: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:40:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.224 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.225 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.225 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:40:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3568787805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.676 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.853 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.855 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4243MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.856 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.856 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.957 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.958 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:40:23 compute-2 nova_compute[232428]: 2025-11-29 08:40:23.979 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:24 compute-2 ceph-mon[77138]: pgmap v3134: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Nov 29 08:40:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3568787805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:40:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/900581671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:24 compute-2 nova_compute[232428]: 2025-11-29 08:40:24.468 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:24 compute-2 nova_compute[232428]: 2025-11-29 08:40:24.475 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:40:24 compute-2 nova_compute[232428]: 2025-11-29 08:40:24.493 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:40:24 compute-2 nova_compute[232428]: 2025-11-29 08:40:24.526 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:40:24 compute-2 nova_compute[232428]: 2025-11-29 08:40:24.526 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:24.991 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/900581671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:25 compute-2 nova_compute[232428]: 2025-11-29 08:40:25.527 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:40:25 compute-2 nova_compute[232428]: 2025-11-29 08:40:25.528 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:40:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:26 compute-2 nova_compute[232428]: 2025-11-29 08:40:26.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:26 compute-2 sudo[315474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:26 compute-2 sudo[315474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:26 compute-2 sudo[315474]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:26 compute-2 ceph-mon[77138]: pgmap v3135: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1012 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 29 08:40:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:40:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:40:26 compute-2 sudo[315499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:40:26 compute-2 sudo[315499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:26 compute-2 sudo[315499]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:27 compute-2 nova_compute[232428]: 2025-11-29 08:40:27.141 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:27 compute-2 sudo[315524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:27 compute-2 sudo[315524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:27 compute-2 sudo[315524]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:27 compute-2 sudo[315549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:27 compute-2 sudo[315549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:27 compute-2 sudo[315549]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:28 compute-2 ceph-mon[77138]: pgmap v3136: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 56 op/s
Nov 29 08:40:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1871173452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:40:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1871173452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:40:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:28 compute-2 podman[315575]: 2025-11-29 08:40:28.706465891 +0000 UTC m=+0.100679874 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 08:40:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2920682199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:30 compute-2 ceph-mon[77138]: pgmap v3137: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:40:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3367454599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.580 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.580 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.629 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.718 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.718 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.725 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.725 232432 INFO nova.compute.claims [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:40:30 compute-2 nova_compute[232428]: 2025-11-29 08:40:30.816 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:40:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3777512852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.296 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.307 232432 DEBUG nova.compute.provider_tree [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.346 232432 DEBUG nova.scheduler.client.report [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.374 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.375 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.425 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.425 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.498 232432 INFO nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.530 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:40:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.696 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.697 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.698 232432 INFO nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Creating image(s)
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.736 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.770 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.807 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.812 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.894 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.896 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.896 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.897 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.930 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:31 compute-2 nova_compute[232428]: 2025-11-29 08:40:31.935 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a5fc606a-7d92-4b92-9936-790c922fe4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.146 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.167 232432 DEBUG nova.policy [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:40:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.255 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a5fc606a-7d92-4b92-9936-790c922fe4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:32 compute-2 ceph-mon[77138]: pgmap v3138: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:40:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3777512852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.356 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.465 232432 DEBUG nova.objects.instance [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid a5fc606a-7d92-4b92-9936-790c922fe4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.479 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.480 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Ensure instance console log exists: /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.480 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.481 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:32 compute-2 nova_compute[232428]: 2025-11-29 08:40:32.481 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:33 compute-2 nova_compute[232428]: 2025-11-29 08:40:33.205 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Successfully created port: 68b917f3-0d47-4f1f-a23e-8f5105b7214a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:40:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:34 compute-2 ceph-mon[77138]: pgmap v3139: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.259 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Successfully updated port: 68b917f3-0d47-4f1f-a23e-8f5105b7214a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.283 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.283 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.284 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.403 232432 DEBUG nova.compute.manager [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.403 232432 DEBUG nova.compute.manager [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing instance network info cache due to event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.403 232432 DEBUG oslo_concurrency.lockutils [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:40:35 compute-2 nova_compute[232428]: 2025-11-29 08:40:35.515 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:40:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.152 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:36 compute-2 ceph-mon[77138]: pgmap v3140: 305 pgs: 305 active+clean; 186 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 967 KiB/s wr, 88 op/s
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.823 232432 DEBUG nova.network.neutron [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.850 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.851 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Instance network_info: |[{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.852 232432 DEBUG oslo_concurrency.lockutils [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.853 232432 DEBUG nova.network.neutron [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.859 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Start _get_guest_xml network_info=[{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.867 232432 WARNING nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.879 232432 DEBUG nova.virt.libvirt.host [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.880 232432 DEBUG nova.virt.libvirt.host [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.885 232432 DEBUG nova.virt.libvirt.host [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.886 232432 DEBUG nova.virt.libvirt.host [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.888 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.888 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.889 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.890 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.890 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.891 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.891 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.892 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.892 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.893 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.893 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.894 232432 DEBUG nova.virt.hardware [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:40:36 compute-2 nova_compute[232428]: 2025-11-29 08:40:36.899 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:40:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1488181580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.371 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.411 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.416 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:37 compute-2 podman[315837]: 2025-11-29 08:40:37.703444933 +0000 UTC m=+0.096575546 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:40:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:40:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/733458377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.906 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.909 232432 DEBUG nova.virt.libvirt.vif [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=184,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-kn4mda4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:31Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=a5fc606a-7d92-4b92-9936-790c922fe4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.910 232432 DEBUG nova.network.os_vif_util [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.911 232432 DEBUG nova.network.os_vif_util [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:40:37 compute-2 nova_compute[232428]: 2025-11-29 08:40:37.914 232432 DEBUG nova.objects.instance [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5fc606a-7d92-4b92-9936-790c922fe4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.004 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <uuid>a5fc606a-7d92-4b92-9936-790c922fe4e2</uuid>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <name>instance-000000b8</name>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037</nova:name>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:40:36</nova:creationTime>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <nova:port uuid="68b917f3-0d47-4f1f-a23e-8f5105b7214a">
Nov 29 08:40:38 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <system>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="serial">a5fc606a-7d92-4b92-9936-790c922fe4e2</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="uuid">a5fc606a-7d92-4b92-9936-790c922fe4e2</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </system>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <os>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </os>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <features>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </features>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a5fc606a-7d92-4b92-9936-790c922fe4e2_disk">
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config">
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </source>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:40:38 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:e3:99:f1"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <target dev="tap68b917f3-0d"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/console.log" append="off"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <video>
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </video>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:40:38 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:40:38 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:40:38 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:40:38 compute-2 nova_compute[232428]: </domain>
Nov 29 08:40:38 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.006 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Preparing to wait for external event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.007 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.008 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.009 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.009 232432 DEBUG nova.virt.libvirt.vif [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=184,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-kn4mda4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:31Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=a5fc606a-7d92-4b92-9936-790c922fe4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.010 232432 DEBUG nova.network.os_vif_util [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.010 232432 DEBUG nova.network.os_vif_util [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.011 232432 DEBUG os_vif [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.012 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.012 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.013 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.018 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.018 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68b917f3-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.019 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68b917f3-0d, col_values=(('external_ids', {'iface-id': '68b917f3-0d47-4f1f-a23e-8f5105b7214a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:99:f1', 'vm-uuid': 'a5fc606a-7d92-4b92-9936-790c922fe4e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.021 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:38 compute-2 NetworkManager[48993]: <info>  [1764405638.0229] manager: (tap68b917f3-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.023 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.033 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.034 232432 INFO os_vif [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d')
Nov 29 08:40:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:38 compute-2 ovn_controller[134375]: 2025-11-29T08:40:38Z|00853|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:40:38 compute-2 ceph-mon[77138]: pgmap v3141: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1022 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 08:40:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1488181580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/733458377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.471 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.472 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.472 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:e3:99:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.473 232432 INFO nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Using config drive
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.512 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.738 232432 DEBUG nova.network.neutron [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updated VIF entry in instance network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.739 232432 DEBUG nova.network.neutron [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.761 232432 DEBUG oslo_concurrency.lockutils [req-18a777e5-739e-4601-8ffa-3a6fd864cb2c req-ad1eb76a-048b-402e-ba07-cdba55cb43d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.905 232432 INFO nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Creating config drive at /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config
Nov 29 08:40:38 compute-2 nova_compute[232428]: 2025-11-29 08:40:38.916 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo9gmda5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.082 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo9gmda5" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.114 232432 DEBUG nova.storage.rbd_utils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.120 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:40:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.336 232432 DEBUG oslo_concurrency.processutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config a5fc606a-7d92-4b92-9936-790c922fe4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.337 232432 INFO nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Deleting local config drive /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2/disk.config because it was imported into RBD.
Nov 29 08:40:39 compute-2 kernel: tap68b917f3-0d: entered promiscuous mode
Nov 29 08:40:39 compute-2 ovn_controller[134375]: 2025-11-29T08:40:39Z|00854|binding|INFO|Claiming lport 68b917f3-0d47-4f1f-a23e-8f5105b7214a for this chassis.
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.419 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.4217] manager: (tap68b917f3-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 08:40:39 compute-2 ovn_controller[134375]: 2025-11-29T08:40:39Z|00855|binding|INFO|68b917f3-0d47-4f1f-a23e-8f5105b7214a: Claiming fa:16:3e:e3:99:f1 10.100.0.11
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.438 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 systemd-udevd[315940]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.452 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:99:f1 10.100.0.11'], port_security=['fa:16:3e:e3:99:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a5fc606a-7d92-4b92-9936-790c922fe4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-827003dc-22a3-46f9-a129-d0a62483494f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bb5e8ca-a0e7-4207-80dc-a4c0defff68c d927c03d-544f-4cb2-a70a-354249bd42e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c3b4de-2976-48eb-9ac4-77655b9836b0, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=68b917f3-0d47-4f1f-a23e-8f5105b7214a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.454 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 68b917f3-0d47-4f1f-a23e-8f5105b7214a in datapath 827003dc-22a3-46f9-a129-d0a62483494f bound to our chassis
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.457 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 827003dc-22a3-46f9-a129-d0a62483494f
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.4772] device (tap68b917f3-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.4800] device (tap68b917f3-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:40:39 compute-2 systemd-machined[194747]: New machine qemu-89-instance-000000b8.
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.480 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[973758df-0228-420c-b21e-3f54b1c37bf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.484 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap827003dc-21 in ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.487 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap827003dc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.487 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfe339c-966a-4007-98c8-fc44b2d94d1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.488 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0935b98e-61b7-484f-894b-1aa3e9282cb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 systemd[1]: Started Virtual Machine qemu-89-instance-000000b8.
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.509 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf87df2-8a8c-42ff-87ec-a292e210e16e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_controller[134375]: 2025-11-29T08:40:39Z|00856|binding|INFO|Setting lport 68b917f3-0d47-4f1f-a23e-8f5105b7214a ovn-installed in OVS
Nov 29 08:40:39 compute-2 ovn_controller[134375]: 2025-11-29T08:40:39Z|00857|binding|INFO|Setting lport 68b917f3-0d47-4f1f-a23e-8f5105b7214a up in Southbound
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.517 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.535 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a6860c5a-b0bf-4f4b-be01-b7dadc8e68c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.579 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[887919cd-65fb-4cf6-8cbe-ab5f09c2615b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 systemd-udevd[315946]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.5883] manager: (tap827003dc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.587 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7aa53d-502f-4f9d-8304-61246abcecf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.634 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8c88ac5e-e3e8-422d-967f-f13485bb0350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.638 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb4f62e-034b-4f48-ae25-72808041def6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.6792] device (tap827003dc-20): carrier: link connected
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.689 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[30ca124e-93b7-45f2-a5f4-e8ac99d848cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.718 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[985d71a2-1671-465d-91ee-d3a9212d990c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap827003dc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:29:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860594, 'reachable_time': 37820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315976, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.746 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[73610c70-30ff-40d4-b0fc-c3fe131235b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:29a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 860594, 'tstamp': 860594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315977, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.776 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9850d2-5839-499c-9a73-242270a07ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap827003dc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:29:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860594, 'reachable_time': 37820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315978, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.827 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffc5c5f-b986-4e59-84aa-51107dd0efa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.914 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc2db5e-4cc0-4e9e-9a9b-b466c3b1d8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.917 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap827003dc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.918 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.919 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap827003dc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 NetworkManager[48993]: <info>  [1764405639.9225] manager: (tap827003dc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 08:40:39 compute-2 kernel: tap827003dc-20: entered promiscuous mode
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.926 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap827003dc-20, col_values=(('external_ids', {'iface-id': 'f6d5504f-44ca-4e58-bb7a-73fad975c4be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:40:39 compute-2 ovn_controller[134375]: 2025-11-29T08:40:39Z|00858|binding|INFO|Releasing lport f6d5504f-44ca-4e58-bb7a-73fad975c4be from this chassis (sb_readonly=0)
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 nova_compute[232428]: 2025-11-29 08:40:39.956 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.958 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.960 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[16305121-2634-40ff-84e3-b868d770c273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.961 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-827003dc-22a3-46f9-a129-d0a62483494f
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 827003dc-22a3-46f9-a129-d0a62483494f
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:40:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:40:39.963 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'env', 'PROCESS_TAG=haproxy-827003dc-22a3-46f9-a129-d0a62483494f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/827003dc-22a3-46f9-a129-d0a62483494f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:40:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:40.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.410 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405640.4104471, a5fc606a-7d92-4b92-9936-790c922fe4e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.411 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] VM Started (Lifecycle Event)
Nov 29 08:40:40 compute-2 podman[316052]: 2025-11-29 08:40:40.423367232 +0000 UTC m=+0.091184440 container create d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:40:40 compute-2 systemd[1]: Started libpod-conmon-d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d.scope.
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.461 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.466 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405640.4131799, a5fc606a-7d92-4b92-9936-790c922fe4e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.467 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] VM Paused (Lifecycle Event)
Nov 29 08:40:40 compute-2 podman[316052]: 2025-11-29 08:40:40.382360635 +0000 UTC m=+0.050177923 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:40:40 compute-2 ceph-mon[77138]: pgmap v3142: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Nov 29 08:40:40 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.488 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.493 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:40:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39089aa6181d76104cc61cb21b91b87559e85d473769113db851260b039b512/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:40:40 compute-2 podman[316052]: 2025-11-29 08:40:40.507691775 +0000 UTC m=+0.175509023 container init d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:40:40 compute-2 podman[316052]: 2025-11-29 08:40:40.520270027 +0000 UTC m=+0.188087255 container start d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:40:40 compute-2 nova_compute[232428]: 2025-11-29 08:40:40.520 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:40:40 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [NOTICE]   (316072) : New worker (316074) forked
Nov 29 08:40:40 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [NOTICE]   (316072) : Loading success.
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.096 232432 DEBUG nova.compute.manager [req-182d2d5f-2f89-46f8-a2b6-e5a2bb1548a8 req-da337fa1-f0e5-4978-b3c8-68d6f9148d2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.097 232432 DEBUG oslo_concurrency.lockutils [req-182d2d5f-2f89-46f8-a2b6-e5a2bb1548a8 req-da337fa1-f0e5-4978-b3c8-68d6f9148d2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.097 232432 DEBUG oslo_concurrency.lockutils [req-182d2d5f-2f89-46f8-a2b6-e5a2bb1548a8 req-da337fa1-f0e5-4978-b3c8-68d6f9148d2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.097 232432 DEBUG oslo_concurrency.lockutils [req-182d2d5f-2f89-46f8-a2b6-e5a2bb1548a8 req-da337fa1-f0e5-4978-b3c8-68d6f9148d2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.098 232432 DEBUG nova.compute.manager [req-182d2d5f-2f89-46f8-a2b6-e5a2bb1548a8 req-da337fa1-f0e5-4978-b3c8-68d6f9148d2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Processing event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.098 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.102 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405641.1022937, a5fc606a-7d92-4b92-9936-790c922fe4e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.103 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] VM Resumed (Lifecycle Event)
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.105 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.108 232432 INFO nova.virt.libvirt.driver [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Instance spawned successfully.
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.109 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.149 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.155 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.155 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.156 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.156 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.156 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.157 232432 DEBUG nova.virt.libvirt.driver [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.161 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.199 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.235 232432 INFO nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Took 9.54 seconds to spawn the instance on the hypervisor.
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.235 232432 DEBUG nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.309 232432 INFO nova.compute.manager [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Took 10.63 seconds to build instance.
Nov 29 08:40:41 compute-2 nova_compute[232428]: 2025-11-29 08:40:41.338 232432 DEBUG oslo_concurrency.lockutils [None req-c8d7629f-8a65-4285-a70e-ed3fec9e37e3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:42 compute-2 nova_compute[232428]: 2025-11-29 08:40:42.149 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:42 compute-2 ceph-mon[77138]: pgmap v3143: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.198 232432 DEBUG nova.compute.manager [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.198 232432 DEBUG oslo_concurrency.lockutils [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.198 232432 DEBUG oslo_concurrency.lockutils [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.199 232432 DEBUG oslo_concurrency.lockutils [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.199 232432 DEBUG nova.compute.manager [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] No waiting events found dispatching network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:40:43 compute-2 nova_compute[232428]: 2025-11-29 08:40:43.199 232432 WARNING nova.compute.manager [req-4432ace7-f349-4ccd-8f6b-85e5335ff6b1 req-17da0f7e-036b-4b8d-9998-689776284964 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received unexpected event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a for instance with vm_state active and task_state None.
Nov 29 08:40:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:43 compute-2 sshd-session[316085]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 08:40:43 compute-2 sshd-session[316085]: Connection reset by 45.140.17.97 port 52744
Nov 29 08:40:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:44 compute-2 ceph-mon[77138]: pgmap v3144: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 08:40:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:46.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:46 compute-2 nova_compute[232428]: 2025-11-29 08:40:46.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:46 compute-2 NetworkManager[48993]: <info>  [1764405646.5240] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 08:40:46 compute-2 NetworkManager[48993]: <info>  [1764405646.5261] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 08:40:46 compute-2 ceph-mon[77138]: pgmap v3145: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Nov 29 08:40:46 compute-2 nova_compute[232428]: 2025-11-29 08:40:46.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:46 compute-2 ovn_controller[134375]: 2025-11-29T08:40:46Z|00859|binding|INFO|Releasing lport f6d5504f-44ca-4e58-bb7a-73fad975c4be from this chassis (sb_readonly=0)
Nov 29 08:40:46 compute-2 nova_compute[232428]: 2025-11-29 08:40:46.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:47 compute-2 nova_compute[232428]: 2025-11-29 08:40:47.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:47 compute-2 sudo[316089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:47 compute-2 sudo[316089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:47 compute-2 sudo[316089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:47 compute-2 sudo[316114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:40:47 compute-2 sudo[316114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:40:47 compute-2 sudo[316114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.025 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.232 232432 DEBUG nova.compute.manager [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.233 232432 DEBUG nova.compute.manager [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing instance network info cache due to event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.233 232432 DEBUG oslo_concurrency.lockutils [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.234 232432 DEBUG oslo_concurrency.lockutils [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:40:48 compute-2 nova_compute[232428]: 2025-11-29 08:40:48.234 232432 DEBUG nova.network.neutron [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:40:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:48.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:48 compute-2 ceph-mon[77138]: pgmap v3146: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 148 op/s
Nov 29 08:40:48 compute-2 podman[316140]: 2025-11-29 08:40:48.7471195 +0000 UTC m=+0.143714413 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:40:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:50 compute-2 ceph-mon[77138]: pgmap v3147: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 119 op/s
Nov 29 08:40:51 compute-2 nova_compute[232428]: 2025-11-29 08:40:51.564 232432 DEBUG nova.network.neutron [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updated VIF entry in instance network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:40:51 compute-2 nova_compute[232428]: 2025-11-29 08:40:51.566 232432 DEBUG nova.network.neutron [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:40:51 compute-2 nova_compute[232428]: 2025-11-29 08:40:51.595 232432 DEBUG oslo_concurrency.lockutils [req-86765b59-db1f-4490-a7b2-7aeebe4ccaa2 req-d4824559-1e7c-41ea-ad9a-7aae3691b26e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:40:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:52 compute-2 nova_compute[232428]: 2025-11-29 08:40:52.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:40:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:52.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:40:52 compute-2 ceph-mon[77138]: pgmap v3148: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 68 KiB/s wr, 93 op/s
Nov 29 08:40:53 compute-2 nova_compute[232428]: 2025-11-29 08:40:53.027 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:53 compute-2 ovn_controller[134375]: 2025-11-29T08:40:53Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:99:f1 10.100.0.11
Nov 29 08:40:53 compute-2 ovn_controller[134375]: 2025-11-29T08:40:53Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:99:f1 10.100.0.11
Nov 29 08:40:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:54 compute-2 ceph-mon[77138]: pgmap v3149: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 84 op/s
Nov 29 08:40:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:55 compute-2 ceph-mon[77138]: pgmap v3150: 305 pgs: 305 active+clean; 256 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 616 KiB/s wr, 157 op/s
Nov 29 08:40:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:56.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:57 compute-2 nova_compute[232428]: 2025-11-29 08:40:57.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:40:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:57.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:40:58 compute-2 nova_compute[232428]: 2025-11-29 08:40:58.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:40:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:40:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:40:58 compute-2 ceph-mon[77138]: pgmap v3151: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 146 op/s
Nov 29 08:40:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:40:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:40:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:40:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:59.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:40:59 compute-2 podman[316174]: 2025-11-29 08:40:59.726254901 +0000 UTC m=+0.104378260 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:41:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:00.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:00 compute-2 ceph-mon[77138]: pgmap v3152: 305 pgs: 305 active+clean; 269 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 29 08:41:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:01.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:02 compute-2 nova_compute[232428]: 2025-11-29 08:41:02.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:02 compute-2 ceph-mon[77138]: pgmap v3153: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Nov 29 08:41:03 compute-2 nova_compute[232428]: 2025-11-29 08:41:03.033 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:03.344 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:03.345 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:03.346 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:04 compute-2 ceph-mon[77138]: pgmap v3154: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 29 08:41:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:05.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:06 compute-2 ceph-mon[77138]: pgmap v3155: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Nov 29 08:41:07 compute-2 nova_compute[232428]: 2025-11-29 08:41:07.161 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:07 compute-2 nova_compute[232428]: 2025-11-29 08:41:07.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:07.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:07 compute-2 sudo[316197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:07 compute-2 sudo[316197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:07 compute-2 sudo[316197]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:07 compute-2 sudo[316229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:07 compute-2 podman[316222]: 2025-11-29 08:41:07.910960463 +0000 UTC m=+0.064032813 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:41:07 compute-2 sudo[316229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:07 compute-2 sudo[316229]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:08 compute-2 nova_compute[232428]: 2025-11-29 08:41:08.035 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:08 compute-2 ceph-mon[77138]: pgmap v3156: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 1.6 MiB/s wr, 97 op/s
Nov 29 08:41:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:09.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:10 compute-2 ceph-mon[77138]: pgmap v3157: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 620 KiB/s wr, 68 op/s
Nov 29 08:41:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/934405617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:11.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:12 compute-2 nova_compute[232428]: 2025-11-29 08:41:12.163 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:12 compute-2 nova_compute[232428]: 2025-11-29 08:41:12.190 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:12 compute-2 ceph-mon[77138]: pgmap v3158: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 671 KiB/s wr, 69 op/s
Nov 29 08:41:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:13.022 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:41:13 compute-2 nova_compute[232428]: 2025-11-29 08:41:13.023 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:13.024 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:41:13 compute-2 nova_compute[232428]: 2025-11-29 08:41:13.037 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:13.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:13 compute-2 ceph-mon[77138]: pgmap v3159: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 532 KiB/s rd, 577 KiB/s wr, 48 op/s
Nov 29 08:41:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:15.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:16.026 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:41:16 compute-2 ceph-mon[77138]: pgmap v3160: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 1.3 MiB/s wr, 66 op/s
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:41:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.422 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.423 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.423 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:41:16 compute-2 nova_compute[232428]: 2025-11-29 08:41:16.423 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a5fc606a-7d92-4b92-9936-790c922fe4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:41:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2888382568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:17 compute-2 nova_compute[232428]: 2025-11-29 08:41:17.165 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:17 compute-2 sshd-session[316274]: Invalid user solv from 45.148.10.240 port 46562
Nov 29 08:41:17 compute-2 sshd-session[316274]: Connection closed by invalid user solv 45.148.10.240 port 46562 [preauth]
Nov 29 08:41:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:17.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:18 compute-2 ceph-mon[77138]: pgmap v3161: 305 pgs: 305 active+clean; 285 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 08:41:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3270536942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:41:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.493 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.514 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.515 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.516 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.517 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:18 compute-2 nova_compute[232428]: 2025-11-29 08:41:18.517 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4041124736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:41:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:19.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:19 compute-2 podman[316277]: 2025-11-29 08:41:19.7157944 +0000 UTC m=+0.111547632 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 08:41:20 compute-2 ceph-mon[77138]: pgmap v3162: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 08:41:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/768020349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2279126069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:22 compute-2 ceph-mon[77138]: pgmap v3163: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 08:41:22 compute-2 nova_compute[232428]: 2025-11-29 08:41:22.168 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:23 compute-2 nova_compute[232428]: 2025-11-29 08:41:23.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:23 compute-2 nova_compute[232428]: 2025-11-29 08:41:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:23.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:24 compute-2 ceph-mon[77138]: pgmap v3164: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Nov 29 08:41:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.216 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.216 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.217 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.241 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.242 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:41:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:41:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1272172844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.718 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.801 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:41:24 compute-2 nova_compute[232428]: 2025-11-29 08:41:24.802 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.021 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.022 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4041MB free_disk=20.921977996826172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.023 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.023 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1272172844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance a5fc606a-7d92-4b92-9936-790c922fe4e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.172 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.204 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:41:25 compute-2 ovn_controller[134375]: 2025-11-29T08:41:25Z|00860|binding|INFO|Releasing lport f6d5504f-44ca-4e58-bb7a-73fad975c4be from this chassis (sb_readonly=0)
Nov 29 08:41:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:41:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2495279299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.663 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.673 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:41:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:25.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.693 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.700 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.719 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:41:25 compute-2 nova_compute[232428]: 2025-11-29 08:41:25.720 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:26 compute-2 ceph-mon[77138]: pgmap v3165: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 1.2 MiB/s wr, 87 op/s
Nov 29 08:41:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2495279299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:26.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:26 compute-2 sudo[316351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:26 compute-2 sudo[316351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:26 compute-2 sudo[316351]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:26 compute-2 sudo[316376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:41:26 compute-2 sudo[316376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:26 compute-2 sudo[316376]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:26 compute-2 sudo[316401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:26 compute-2 sudo[316401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:26 compute-2 sudo[316401]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:26 compute-2 sudo[316426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:41:26 compute-2 sudo[316426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:27 compute-2 nova_compute[232428]: 2025-11-29 08:41:27.172 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:27 compute-2 sudo[316426]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:41:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261030720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:41:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:41:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261030720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:41:28 compute-2 sudo[316482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:28 compute-2 sudo[316482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:28 compute-2 sudo[316482]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:28 compute-2 nova_compute[232428]: 2025-11-29 08:41:28.045 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:28 compute-2 sudo[316507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:28 compute-2 sudo[316507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:28 compute-2 sudo[316507]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:28 compute-2 ceph-mon[77138]: pgmap v3166: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 108 op/s
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2261030720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2261030720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:30 compute-2 ceph-mon[77138]: pgmap v3167: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:41:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:41:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:30 compute-2 podman[316533]: 2025-11-29 08:41:30.69882653 +0000 UTC m=+0.085852123 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:41:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/707660244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:32 compute-2 nova_compute[232428]: 2025-11-29 08:41:32.175 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:41:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 13K writes, 69K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1631 writes, 7964 keys, 1631 commit groups, 1.0 writes per commit group, ingest: 16.10 MB, 0.03 MB/s
                                           Interval WAL: 1630 writes, 1630 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     42.5      1.94              0.39        42    0.046       0      0       0.0       0.0
                                             L6      1/0   10.97 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     82.2     70.1      5.91              1.46        41    0.144    299K    22K       0.0       0.0
                                            Sum      1/0   10.97 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     61.9     63.3      7.84              1.85        83    0.094    299K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4     49.6     48.4      1.60              0.28        12    0.134     58K   3137       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     82.2     70.1      5.91              1.46        41    0.144    299K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     44.4      1.85              0.39        41    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.080, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.09 MB/s write, 0.47 GB read, 0.09 MB/s read, 7.8 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 55.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000528 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3060,53.78 MB,17.6903%) FilterBlock(83,833.92 KB,0.267887%) IndexBlock(83,1.35 MB,0.443554%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:41:32 compute-2 ceph-mon[77138]: pgmap v3168: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 29 08:41:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/772360905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:32.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:33 compute-2 nova_compute[232428]: 2025-11-29 08:41:33.049 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:33 compute-2 nova_compute[232428]: 2025-11-29 08:41:33.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:34 compute-2 ceph-mon[77138]: pgmap v3169: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 29 08:41:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:35 compute-2 sudo[316555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:35 compute-2 sudo[316555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:35 compute-2 sudo[316555]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:35 compute-2 sudo[316580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:41:35 compute-2 sudo[316580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:35 compute-2 sudo[316580]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:41:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:41:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:35 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:41:35 compute-2 ceph-mon[77138]: pgmap v3170: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 626 KiB/s wr, 87 op/s
Nov 29 08:41:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:37 compute-2 nova_compute[232428]: 2025-11-29 08:41:37.178 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:37 compute-2 nova_compute[232428]: 2025-11-29 08:41:37.951 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:38 compute-2 ceph-mon[77138]: pgmap v3171: 305 pgs: 305 active+clean; 268 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 71 op/s
Nov 29 08:41:38 compute-2 nova_compute[232428]: 2025-11-29 08:41:38.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:38 compute-2 podman[316607]: 2025-11-29 08:41:38.707936167 +0000 UTC m=+0.098892278 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:41:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:39.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:40 compute-2 ceph-mon[77138]: pgmap v3172: 305 pgs: 305 active+clean; 274 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Nov 29 08:41:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:42 compute-2 ceph-mon[77138]: pgmap v3173: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:41:42 compute-2 nova_compute[232428]: 2025-11-29 08:41:42.181 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:43 compute-2 nova_compute[232428]: 2025-11-29 08:41:43.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:43.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:44 compute-2 ceph-mon[77138]: pgmap v3174: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:41:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/657979487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4173765393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.741 232432 DEBUG nova.compute.manager [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.742 232432 DEBUG nova.compute.manager [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing instance network info cache due to event network-changed-68b917f3-0d47-4f1f-a23e-8f5105b7214a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.743 232432 DEBUG oslo_concurrency.lockutils [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.743 232432 DEBUG oslo_concurrency.lockutils [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.744 232432 DEBUG nova.network.neutron [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Refreshing network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.792 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.793 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.793 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.794 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.794 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.797 232432 INFO nova.compute.manager [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Terminating instance
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.799 232432 DEBUG nova.compute.manager [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:41:45 compute-2 kernel: tap68b917f3-0d (unregistering): left promiscuous mode
Nov 29 08:41:45 compute-2 NetworkManager[48993]: <info>  [1764405705.8774] device (tap68b917f3-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:45 compute-2 ovn_controller[134375]: 2025-11-29T08:41:45Z|00861|binding|INFO|Releasing lport 68b917f3-0d47-4f1f-a23e-8f5105b7214a from this chassis (sb_readonly=0)
Nov 29 08:41:45 compute-2 ovn_controller[134375]: 2025-11-29T08:41:45Z|00862|binding|INFO|Setting lport 68b917f3-0d47-4f1f-a23e-8f5105b7214a down in Southbound
Nov 29 08:41:45 compute-2 ovn_controller[134375]: 2025-11-29T08:41:45Z|00863|binding|INFO|Removing iface tap68b917f3-0d ovn-installed in OVS
Nov 29 08:41:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:45.906 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:99:f1 10.100.0.11'], port_security=['fa:16:3e:e3:99:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a5fc606a-7d92-4b92-9936-790c922fe4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-827003dc-22a3-46f9-a129-d0a62483494f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bb5e8ca-a0e7-4207-80dc-a4c0defff68c d927c03d-544f-4cb2-a70a-354249bd42e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c3b4de-2976-48eb-9ac4-77655b9836b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=68b917f3-0d47-4f1f-a23e-8f5105b7214a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:41:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:45.909 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 68b917f3-0d47-4f1f-a23e-8f5105b7214a in datapath 827003dc-22a3-46f9-a129-d0a62483494f unbound from our chassis
Nov 29 08:41:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:45.912 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 827003dc-22a3-46f9-a129-d0a62483494f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:41:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:45.916 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ecebb5bf-6ea5-43cb-8891-95a37b33100e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:45 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:45.918 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f namespace which is not needed anymore
Nov 29 08:41:45 compute-2 nova_compute[232428]: 2025-11-29 08:41:45.941 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:45 compute-2 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Nov 29 08:41:45 compute-2 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b8.scope: Consumed 19.301s CPU time.
Nov 29 08:41:45 compute-2 systemd-machined[194747]: Machine qemu-89-instance-000000b8 terminated.
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.036 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.047 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.053 232432 INFO nova.virt.libvirt.driver [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Instance destroyed successfully.
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.054 232432 DEBUG nova.objects.instance [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid a5fc606a-7d92-4b92-9936-790c922fe4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.073 232432 DEBUG nova.virt.libvirt.vif [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-921324037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=184,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:40:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-kn4mda4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:40:41Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=a5fc606a-7d92-4b92-9936-790c922fe4e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.074 232432 DEBUG nova.network.os_vif_util [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.075 232432 DEBUG nova.network.os_vif_util [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.076 232432 DEBUG os_vif [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.077 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.078 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68b917f3-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.080 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.082 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.085 232432 INFO os_vif [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:99:f1,bridge_name='br-int',has_traffic_filtering=True,id=68b917f3-0d47-4f1f-a23e-8f5105b7214a,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b917f3-0d')
Nov 29 08:41:46 compute-2 ceph-mon[77138]: pgmap v3175: 305 pgs: 305 active+clean; 242 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 2.5 MiB/s wr, 80 op/s
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [NOTICE]   (316072) : haproxy version is 2.8.14-c23fe91
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [NOTICE]   (316072) : path to executable is /usr/sbin/haproxy
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [WARNING]  (316072) : Exiting Master process...
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [WARNING]  (316072) : Exiting Master process...
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [ALERT]    (316072) : Current worker (316074) exited with code 143 (Terminated)
Nov 29 08:41:46 compute-2 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[316068]: [WARNING]  (316072) : All workers exited. Exiting... (0)
Nov 29 08:41:46 compute-2 systemd[1]: libpod-d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d.scope: Deactivated successfully.
Nov 29 08:41:46 compute-2 podman[316664]: 2025-11-29 08:41:46.227245748 +0000 UTC m=+0.153936527 container died d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:41:46 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:41:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-f39089aa6181d76104cc61cb21b91b87559e85d473769113db851260b039b512-merged.mount: Deactivated successfully.
Nov 29 08:41:46 compute-2 podman[316664]: 2025-11-29 08:41:46.279782542 +0000 UTC m=+0.206473321 container cleanup d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:41:46 compute-2 systemd[1]: libpod-conmon-d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d.scope: Deactivated successfully.
Nov 29 08:41:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:46 compute-2 podman[316712]: 2025-11-29 08:41:46.372961751 +0000 UTC m=+0.053480234 container remove d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.385 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ffac6460-7cc8-4bac-a21e-8301ec0d3bf6]: (4, ('Sat Nov 29 08:41:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f (d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d)\nd1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d\nSat Nov 29 08:41:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f (d1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d)\nd1d6ec477d48f539cbe0fef84807a6c6d7502e1b6c12f25075ba1235e8862a3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.388 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[75eb332a-717f-485f-bc93-0af3b847b658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.389 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap827003dc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.392 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 kernel: tap827003dc-20: left promiscuous mode
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.418 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.427 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f5513b0f-03dd-4b42-b9b6-c3574379754a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.444 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e337a3cc-5484-4cb9-ada6-aec7509a366b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.447 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad7e7c-9065-4afe-be97-8c233cf04671]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.482 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c34a38e-118c-4a3d-acaf-72752b85f2c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860584, 'reachable_time': 36031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316727, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 systemd[1]: run-netns-ovnmeta\x2d827003dc\x2d22a3\x2d46f9\x2da129\x2dd0a62483494f.mount: Deactivated successfully.
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.491 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:41:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:41:46.492 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[1947cf4e-a872-492c-8cd8-d016f239cb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.656 232432 INFO nova.virt.libvirt.driver [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Deleting instance files /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2_del
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.660 232432 INFO nova.virt.libvirt.driver [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Deletion of /var/lib/nova/instances/a5fc606a-7d92-4b92-9936-790c922fe4e2_del complete
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.728 232432 INFO nova.compute.manager [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.729 232432 DEBUG oslo.service.loopingcall [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.730 232432 DEBUG nova.compute.manager [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:41:46 compute-2 nova_compute[232428]: 2025-11-29 08:41:46.730 232432 DEBUG nova.network.neutron [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:41:47 compute-2 nova_compute[232428]: 2025-11-29 08:41:47.184 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 08:41:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 08:41:47 compute-2 nova_compute[232428]: 2025-11-29 08:41:47.867 232432 DEBUG nova.network.neutron [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updated VIF entry in instance network info cache for port 68b917f3-0d47-4f1f-a23e-8f5105b7214a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:41:47 compute-2 nova_compute[232428]: 2025-11-29 08:41:47.868 232432 DEBUG nova.network.neutron [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [{"id": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "address": "fa:16:3e:e3:99:f1", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b917f3-0d", "ovs_interfaceid": "68b917f3-0d47-4f1f-a23e-8f5105b7214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:41:47 compute-2 nova_compute[232428]: 2025-11-29 08:41:47.893 232432 DEBUG oslo_concurrency.lockutils [req-c22bd7d7-b028-4965-bbf7-dbd7717bc87a req-6406340c-62f9-4a1a-bbc8-326d3dcf053e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a5fc606a-7d92-4b92-9936-790c922fe4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.073 232432 DEBUG nova.network.neutron [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.119 232432 INFO nova.compute.manager [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Took 1.39 seconds to deallocate network for instance.
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.166 232432 DEBUG nova.compute.manager [req-e9539437-30bd-4244-8ec9-3a10cf1fcdff req-3a18e8f6-e3f2-469f-a4ce-e795389ae9f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-vif-deleted-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.181 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.181 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:48 compute-2 ceph-mon[77138]: pgmap v3176: 305 pgs: 305 active+clean; 242 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 3.0 MiB/s wr, 95 op/s
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.245 232432 DEBUG oslo_concurrency.processutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:41:48 compute-2 sudo[316729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:48 compute-2 sudo[316729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:48 compute-2 sudo[316729]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:48 compute-2 sudo[316754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:41:48 compute-2 sudo[316754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:41:48 compute-2 sudo[316754]: pam_unix(sudo:session): session closed for user root
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.441 232432 DEBUG nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-vif-unplugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.444 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.446 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.447 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.448 232432 DEBUG nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] No waiting events found dispatching network-vif-unplugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.449 232432 WARNING nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received unexpected event network-vif-unplugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a for instance with vm_state deleted and task_state None.
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.450 232432 DEBUG nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.450 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.451 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.452 232432 DEBUG oslo_concurrency.lockutils [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.453 232432 DEBUG nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] No waiting events found dispatching network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.454 232432 WARNING nova.compute.manager [req-d3a47eaf-24b6-4941-8b72-2e4dd3d2ad8d req-840d91e9-9561-4c7d-94c3-f9f4762ca981 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Received unexpected event network-vif-plugged-68b917f3-0d47-4f1f-a23e-8f5105b7214a for instance with vm_state deleted and task_state None.
Nov 29 08:41:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:41:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3655509230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.766 232432 DEBUG oslo_concurrency.processutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.776 232432 DEBUG nova.compute.provider_tree [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.802 232432 DEBUG nova.scheduler.client.report [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.839 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.880 232432 INFO nova.scheduler.client.report [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance a5fc606a-7d92-4b92-9936-790c922fe4e2
Nov 29 08:41:48 compute-2 nova_compute[232428]: 2025-11-29 08:41:48.960 232432 DEBUG oslo_concurrency.lockutils [None req-9d2910bf-eb3d-45b7-9986-9bdf20a5ee75 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "a5fc606a-7d92-4b92-9936-790c922fe4e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:41:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3163031373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:41:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3655509230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:41:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2958087801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:41:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:50 compute-2 ceph-mon[77138]: pgmap v3177: 305 pgs: 305 active+clean; 233 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Nov 29 08:41:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:50.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:50 compute-2 podman[316803]: 2025-11-29 08:41:50.765891272 +0000 UTC m=+0.154855420 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:41:51 compute-2 nova_compute[232428]: 2025-11-29 08:41:51.081 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:52 compute-2 nova_compute[232428]: 2025-11-29 08:41:52.186 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:41:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:41:52 compute-2 ceph-mon[77138]: pgmap v3178: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 1.9 MiB/s wr, 107 op/s
Nov 29 08:41:52 compute-2 nova_compute[232428]: 2025-11-29 08:41:52.614 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:52 compute-2 nova_compute[232428]: 2025-11-29 08:41:52.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:41:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:41:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:54 compute-2 ceph-mon[77138]: pgmap v3179: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 29 08:41:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:56 compute-2 nova_compute[232428]: 2025-11-29 08:41:56.085 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:56 compute-2 nova_compute[232428]: 2025-11-29 08:41:56.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:41:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:56 compute-2 ceph-mon[77138]: pgmap v3180: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 29 08:41:57 compute-2 nova_compute[232428]: 2025-11-29 08:41:57.191 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:41:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:57.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:57 compute-2 ceph-mon[77138]: pgmap v3181: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Nov 29 08:41:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:41:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:41:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:41:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:41:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:59.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:00 compute-2 ceph-mon[77138]: pgmap v3182: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 112 op/s
Nov 29 08:42:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:01 compute-2 nova_compute[232428]: 2025-11-29 08:42:01.051 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405706.048728, a5fc606a-7d92-4b92-9936-790c922fe4e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:42:01 compute-2 nova_compute[232428]: 2025-11-29 08:42:01.052 232432 INFO nova.compute.manager [-] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] VM Stopped (Lifecycle Event)
Nov 29 08:42:01 compute-2 nova_compute[232428]: 2025-11-29 08:42:01.087 232432 DEBUG nova.compute.manager [None req-c91c9021-be9f-4c21-82b9-76134a6b0578 - - - - - -] [instance: a5fc606a-7d92-4b92-9936-790c922fe4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:01 compute-2 nova_compute[232428]: 2025-11-29 08:42:01.088 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:01 compute-2 podman[316835]: 2025-11-29 08:42:01.691367202 +0000 UTC m=+0.082457817 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:42:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:01.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:02 compute-2 ceph-mon[77138]: pgmap v3183: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 93 op/s
Nov 29 08:42:02 compute-2 nova_compute[232428]: 2025-11-29 08:42:02.194 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:03.345 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:03.346 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:03.346 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:03.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:04 compute-2 ceph-mon[77138]: pgmap v3184: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 70 op/s
Nov 29 08:42:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:06 compute-2 nova_compute[232428]: 2025-11-29 08:42:06.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:06 compute-2 ceph-mon[77138]: pgmap v3185: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 256 KiB/s wr, 75 op/s
Nov 29 08:42:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:06 compute-2 nova_compute[232428]: 2025-11-29 08:42:06.958 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:06 compute-2 nova_compute[232428]: 2025-11-29 08:42:06.959 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:06 compute-2 nova_compute[232428]: 2025-11-29 08:42:06.980 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.104 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.105 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.117 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.117 232432 INFO nova.compute.claims [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.196 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.215 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.259 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:42:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2123184444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.730 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.740 232432 DEBUG nova.compute.provider_tree [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.765 232432 DEBUG nova.scheduler.client.report [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.801 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.802 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.881 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.882 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.908 232432 INFO nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:42:07 compute-2 nova_compute[232428]: 2025-11-29 08:42:07.972 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.098 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.100 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.101 232432 INFO nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Creating image(s)
Nov 29 08:42:08 compute-2 ceph-mon[77138]: pgmap v3186: 305 pgs: 305 active+clean; 193 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 97 op/s
Nov 29 08:42:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2123184444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.147 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.187 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.226 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.232 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.284 232432 DEBUG nova.policy [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.337 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.338 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.339 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.340 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.388 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.394 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f506d897-b24f-45c5-8af9-49e145bdabd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:08 compute-2 sudo[316956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:08 compute-2 sudo[316956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:08 compute-2 sudo[316956]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:08 compute-2 sudo[316997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:08 compute-2 sudo[316997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:08 compute-2 sudo[316997]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.762 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f506d897-b24f-45c5-8af9-49e145bdabd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:08 compute-2 nova_compute[232428]: 2025-11-29 08:42:08.882 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.028 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Successfully created port: 34c81def-f498-4c4f-81b4-2fe4e5a49109 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:42:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.039 232432 DEBUG nova.objects.instance [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid f506d897-b24f-45c5-8af9-49e145bdabd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.058 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.058 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Ensure instance console log exists: /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.058 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.059 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:09 compute-2 nova_compute[232428]: 2025-11-29 08:42:09.059 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:09 compute-2 podman[317097]: 2025-11-29 08:42:09.684781181 +0000 UTC m=+0.081099585 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:42:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:10 compute-2 ceph-mon[77138]: pgmap v3187: 305 pgs: 305 active+clean; 197 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 08:42:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.097 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.511 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Successfully updated port: 34c81def-f498-4c4f-81b4-2fe4e5a49109 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.534 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.534 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.534 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.612 232432 DEBUG nova.compute.manager [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.613 232432 DEBUG nova.compute.manager [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing instance network info cache due to event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:42:11 compute-2 nova_compute[232428]: 2025-11-29 08:42:11.613 232432 DEBUG oslo_concurrency.lockutils [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:42:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:12 compute-2 ceph-mon[77138]: pgmap v3188: 305 pgs: 305 active+clean; 232 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.4 MiB/s wr, 88 op/s
Nov 29 08:42:12 compute-2 nova_compute[232428]: 2025-11-29 08:42:12.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:12 compute-2 nova_compute[232428]: 2025-11-29 08:42:12.511 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.394 232432 DEBUG nova.network.neutron [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updating instance_info_cache with network_info: [{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.428 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.429 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Instance network_info: |[{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.430 232432 DEBUG oslo_concurrency.lockutils [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.430 232432 DEBUG nova.network.neutron [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.435 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Start _get_guest_xml network_info=[{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.444 232432 WARNING nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.450 232432 DEBUG nova.virt.libvirt.host [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.451 232432 DEBUG nova.virt.libvirt.host [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.460 232432 DEBUG nova.virt.libvirt.host [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.461 232432 DEBUG nova.virt.libvirt.host [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.463 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.463 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.464 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.465 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.466 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.466 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.467 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.467 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.468 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.469 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.469 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.470 232432 DEBUG nova.virt.hardware [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.476 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:13.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:42:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2176246260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:13 compute-2 nova_compute[232428]: 2025-11-29 08:42:13.968 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.014 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.021 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:14 compute-2 ceph-mon[77138]: pgmap v3189: 305 pgs: 305 active+clean; 232 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.4 MiB/s wr, 88 op/s
Nov 29 08:42:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2176246260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:42:14 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3779548634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.479 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.481 232432 DEBUG nova.virt.libvirt.vif [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=187,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/7uGf5YOmiF3yBMOxVOMYJ5tSITnhIHM9lRMNh8qvT2Pa0/eqtrZ8XAbOaTJhL/Ro7+calPSQbw8ZciodMqvm8pu2EN59hYG6S3DDEVp0Ck0+lGuS63aLo4tG6bWHdoQ==',key_name='tempest-TestSecurityGroupsBasicOps-152032726',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-nsnde6gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:08Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=f506d897-b24f-45c5-8af9-49e145bdabd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.481 232432 DEBUG nova.network.os_vif_util [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.482 232432 DEBUG nova.network.os_vif_util [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.483 232432 DEBUG nova.objects.instance [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid f506d897-b24f-45c5-8af9-49e145bdabd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.502 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <uuid>f506d897-b24f-45c5-8af9-49e145bdabd8</uuid>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <name>instance-000000bb</name>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275</nova:name>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:42:13</nova:creationTime>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <nova:port uuid="34c81def-f498-4c4f-81b4-2fe4e5a49109">
Nov 29 08:42:14 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <system>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="serial">f506d897-b24f-45c5-8af9-49e145bdabd8</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="uuid">f506d897-b24f-45c5-8af9-49e145bdabd8</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </system>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <os>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </os>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <features>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </features>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f506d897-b24f-45c5-8af9-49e145bdabd8_disk">
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config">
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:42:14 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:7f:55:ce"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <target dev="tap34c81def-f4"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/console.log" append="off"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <video>
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </video>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:42:14 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:42:14 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:42:14 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:42:14 compute-2 nova_compute[232428]: </domain>
Nov 29 08:42:14 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.503 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Preparing to wait for external event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.503 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.503 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.504 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.504 232432 DEBUG nova.virt.libvirt.vif [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=187,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/7uGf5YOmiF3yBMOxVOMYJ5tSITnhIHM9lRMNh8qvT2Pa0/eqtrZ8XAbOaTJhL/Ro7+calPSQbw8ZciodMqvm8pu2EN59hYG6S3DDEVp0Ck0+lGuS63aLo4tG6bWHdoQ==',key_name='tempest-TestSecurityGroupsBasicOps-152032726',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-nsnde6gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:08Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=f506d897-b24f-45c5-8af9-49e145bdabd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.504 232432 DEBUG nova.network.os_vif_util [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.505 232432 DEBUG nova.network.os_vif_util [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.505 232432 DEBUG os_vif [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.506 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.506 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.510 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34c81def-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.511 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34c81def-f4, col_values=(('external_ids', {'iface-id': '34c81def-f498-4c4f-81b4-2fe4e5a49109', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:55:ce', 'vm-uuid': 'f506d897-b24f-45c5-8af9-49e145bdabd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.512 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:14 compute-2 NetworkManager[48993]: <info>  [1764405734.5133] manager: (tap34c81def-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.517 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.522 232432 INFO os_vif [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4')
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.636 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.636 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.637 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:7f:55:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.637 232432 INFO nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Using config drive
Nov 29 08:42:14 compute-2 nova_compute[232428]: 2025-11-29 08:42:14.670 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3779548634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:15.554 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:15.555 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.671 232432 INFO nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Creating config drive at /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.684 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13k2xkc2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:15.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.849 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13k2xkc2" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.905 232432 DEBUG nova.storage.rbd_utils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:42:15 compute-2 nova_compute[232428]: 2025-11-29 08:42:15.910 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.147 232432 DEBUG oslo_concurrency.processutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config f506d897-b24f-45c5-8af9-49e145bdabd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.149 232432 INFO nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Deleting local config drive /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8/disk.config because it was imported into RBD.
Nov 29 08:42:16 compute-2 ceph-mon[77138]: pgmap v3190: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 08:42:16 compute-2 kernel: tap34c81def-f4: entered promiscuous mode
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.2587] manager: (tap34c81def-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 08:42:16 compute-2 ovn_controller[134375]: 2025-11-29T08:42:16Z|00864|binding|INFO|Claiming lport 34c81def-f498-4c4f-81b4-2fe4e5a49109 for this chassis.
Nov 29 08:42:16 compute-2 ovn_controller[134375]: 2025-11-29T08:42:16Z|00865|binding|INFO|34c81def-f498-4c4f-81b4-2fe4e5a49109: Claiming fa:16:3e:7f:55:ce 10.100.0.4
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.283 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:55:ce 10.100.0.4'], port_security=['fa:16:3e:7f:55:ce 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f506d897-b24f-45c5-8af9-49e145bdabd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894063c7-36d0-4a71-9166-1615063ba970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13089a2f-b904-4042-b03b-fd0243f0a0e2 63acda08-4d1e-4bb2-8030-51c6806afd85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2fe17ed-0713-4d56-a338-f4471c6841c4, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=34c81def-f498-4c4f-81b4-2fe4e5a49109) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.285 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 34c81def-f498-4c4f-81b4-2fe4e5a49109 in datapath 894063c7-36d0-4a71-9166-1615063ba970 bound to our chassis
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.287 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894063c7-36d0-4a71-9166-1615063ba970
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.314 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb8d196-cc5a-4b8c-894c-0d92fe2d6762]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.315 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap894063c7-31 in ovnmeta-894063c7-36d0-4a71-9166-1615063ba970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:42:16 compute-2 systemd-udevd[317261]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:42:16 compute-2 systemd-machined[194747]: New machine qemu-90-instance-000000bb.
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.324 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap894063c7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.324 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[16b73eb6-562c-420f-aafb-b45571ee2fe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.327 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9c0168-89ea-49e3-93e2-046ecab4cc12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.3524] device (tap34c81def-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:42:16 compute-2 systemd[1]: Started Virtual Machine qemu-90-instance-000000bb.
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.3560] device (tap34c81def-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.357 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.357 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e4536-1dfd-4031-ae10-06526a7a7093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_controller[134375]: 2025-11-29T08:42:16Z|00866|binding|INFO|Setting lport 34c81def-f498-4c4f-81b4-2fe4e5a49109 ovn-installed in OVS
Nov 29 08:42:16 compute-2 ovn_controller[134375]: 2025-11-29T08:42:16Z|00867|binding|INFO|Setting lport 34c81def-f498-4c4f-81b4-2fe4e5a49109 up in Southbound
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbc418d-9fa7-47b7-919f-f25791f704e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.452 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[144a0a82-3952-464c-855e-7aa8d2c3bd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.467 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc427f6-2007-4d80-9c17-dd94861261cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.4680] manager: (tap894063c7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.522 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7a6936-9be8-45f7-9196-d39e634645f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.528 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9dfec8-f322-4ffb-9bcf-7bc52ffeb4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.5667] device (tap894063c7-30): carrier: link connected
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.578 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[50c3e838-095f-48d3-bf6a-81afeee8df5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.624 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[45d1bc4c-93fd-4206-97c6-108ba8280180]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894063c7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:df:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870283, 'reachable_time': 24947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317293, 'error': None, 'target': 'ovnmeta-894063c7-36d0-4a71-9166-1615063ba970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.653 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[31b65210-71bf-486b-822e-3d56134ab77e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:df30'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870283, 'tstamp': 870283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317294, 'error': None, 'target': 'ovnmeta-894063c7-36d0-4a71-9166-1615063ba970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.693 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8689422e-eba9-403a-9aa8-c66c21b90e06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894063c7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:df:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870283, 'reachable_time': 24947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317295, 'error': None, 'target': 'ovnmeta-894063c7-36d0-4a71-9166-1615063ba970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.738 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3564e4f0-b19e-4880-ba51-9cd603171f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.847 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed0910d-b262-4f5a-ac7f-4c870545ad2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.850 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894063c7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.850 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.851 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894063c7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 kernel: tap894063c7-30: entered promiscuous mode
Nov 29 08:42:16 compute-2 NetworkManager[48993]: <info>  [1764405736.8548] manager: (tap894063c7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.858 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.860 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894063c7-30, col_values=(('external_ids', {'iface-id': 'df3fe699-f7c9-49b6-8cf4-a3877c7d35e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 ovn_controller[134375]: 2025-11-29T08:42:16Z|00868|binding|INFO|Releasing lport df3fe699-f7c9-49b6-8cf4-a3877c7d35e8 from this chassis (sb_readonly=0)
Nov 29 08:42:16 compute-2 nova_compute[232428]: 2025-11-29 08:42:16.890 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.891 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/894063c7-36d0-4a71-9166-1615063ba970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/894063c7-36d0-4a71-9166-1615063ba970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.893 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[459f2b38-7ca8-47f6-ac16-7593dd08989c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.895 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-894063c7-36d0-4a71-9166-1615063ba970
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/894063c7-36d0-4a71-9166-1615063ba970.pid.haproxy
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 894063c7-36d0-4a71-9166-1615063ba970
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:42:16 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:16.897 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-894063c7-36d0-4a71-9166-1615063ba970', 'env', 'PROCESS_TAG=haproxy-894063c7-36d0-4a71-9166-1615063ba970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/894063c7-36d0-4a71-9166-1615063ba970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.204 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.231 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.279 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405737.2785585, f506d897-b24f-45c5-8af9-49e145bdabd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.281 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] VM Started (Lifecycle Event)
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.302 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.308 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405737.2788491, f506d897-b24f-45c5-8af9-49e145bdabd8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.309 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] VM Paused (Lifecycle Event)
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.331 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.337 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.362 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:42:17 compute-2 podman[317368]: 2025-11-29 08:42:17.379824363 +0000 UTC m=+0.059151442 container create e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:42:17 compute-2 systemd[1]: Started libpod-conmon-e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a.scope.
Nov 29 08:42:17 compute-2 podman[317368]: 2025-11-29 08:42:17.35209576 +0000 UTC m=+0.031422859 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:42:17 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:42:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1e488ec4520808d567cd2f5d053ef06bf486779fcfcb63f0bbe1bb9ea50fbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:42:17 compute-2 podman[317368]: 2025-11-29 08:42:17.489012281 +0000 UTC m=+0.168339400 container init e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:42:17 compute-2 podman[317368]: 2025-11-29 08:42:17.500723076 +0000 UTC m=+0.180050185 container start e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.512 232432 DEBUG nova.network.neutron [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updated VIF entry in instance network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.512 232432 DEBUG nova.network.neutron [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updating instance_info_cache with network_info: [{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.528 232432 DEBUG oslo_concurrency.lockutils [req-f388e590-2087-435b-9372-7ee8f983be06 req-1f7ee42e-cabc-4c33-9039-ac2158be0d9f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:42:17 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [NOTICE]   (317387) : New worker (317389) forked
Nov 29 08:42:17 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [NOTICE]   (317387) : Loading success.
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.560 232432 DEBUG nova.compute.manager [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.560 232432 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.561 232432 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.561 232432 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.562 232432 DEBUG nova.compute.manager [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Processing event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.563 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.570 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405737.5698173, f506d897-b24f-45c5-8af9-49e145bdabd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.570 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] VM Resumed (Lifecycle Event)
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.572 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.577 232432 INFO nova.virt.libvirt.driver [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Instance spawned successfully.
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.577 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.593 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.598 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.602 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.603 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.603 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.604 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.604 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.605 232432 DEBUG nova.virt.libvirt.driver [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.613 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.657 232432 INFO nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Took 9.56 seconds to spawn the instance on the hypervisor.
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.658 232432 DEBUG nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.722 232432 INFO nova.compute.manager [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Took 10.66 seconds to build instance.
Nov 29 08:42:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:17 compute-2 nova_compute[232428]: 2025-11-29 08:42:17.747 232432 DEBUG oslo_concurrency.lockutils [None req-64804dbf-31b0-4eb5-9f73-accb031be756 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:18 compute-2 nova_compute[232428]: 2025-11-29 08:42:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:18 compute-2 ceph-mon[77138]: pgmap v3191: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Nov 29 08:42:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/419823308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2380355446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:42:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:42:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 68K writes, 268K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.05 MB/s
                                           Cumulative WAL: 68K writes, 25K syncs, 2.70 writes per sync, written: 0.27 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5961 writes, 24K keys, 5961 commit groups, 1.0 writes per commit group, ingest: 24.14 MB, 0.04 MB/s
                                           Interval WAL: 5961 writes, 2313 syncs, 2.58 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:42:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.513 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:19.557 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.676 232432 DEBUG nova.compute.manager [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.677 232432 DEBUG oslo_concurrency.lockutils [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.677 232432 DEBUG oslo_concurrency.lockutils [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.678 232432 DEBUG oslo_concurrency.lockutils [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.679 232432 DEBUG nova.compute.manager [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] No waiting events found dispatching network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:42:19 compute-2 nova_compute[232428]: 2025-11-29 08:42:19.679 232432 WARNING nova.compute.manager [req-d6be2603-f850-4dc5-b881-e231f3549ccc req-bfc6c26e-1689-448f-b4a2-d22f7873a561 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received unexpected event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 for instance with vm_state active and task_state None.
Nov 29 08:42:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:20 compute-2 nova_compute[232428]: 2025-11-29 08:42:20.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:20 compute-2 ceph-mon[77138]: pgmap v3192: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Nov 29 08:42:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/413093287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.567 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:21 compute-2 NetworkManager[48993]: <info>  [1764405741.5712] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 08:42:21 compute-2 NetworkManager[48993]: <info>  [1764405741.5725] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.679 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:21 compute-2 ovn_controller[134375]: 2025-11-29T08:42:21Z|00869|binding|INFO|Releasing lport df3fe699-f7c9-49b6-8cf4-a3877c7d35e8 from this chassis (sb_readonly=0)
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:21.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:21 compute-2 podman[317400]: 2025-11-29 08:42:21.762615972 +0000 UTC m=+0.138967685 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.960 232432 DEBUG nova.compute.manager [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.961 232432 DEBUG nova.compute.manager [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing instance network info cache due to event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.962 232432 DEBUG oslo_concurrency.lockutils [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.963 232432 DEBUG oslo_concurrency.lockutils [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:42:21 compute-2 nova_compute[232428]: 2025-11-29 08:42:21.963 232432 DEBUG nova.network.neutron [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:42:22 compute-2 nova_compute[232428]: 2025-11-29 08:42:22.205 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:22 compute-2 ceph-mon[77138]: pgmap v3193: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 29 08:42:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3927432958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:23 compute-2 nova_compute[232428]: 2025-11-29 08:42:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:23.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.011 232432 DEBUG nova.network.neutron [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updated VIF entry in instance network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.012 232432 DEBUG nova.network.neutron [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updating instance_info_cache with network_info: [{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.043 232432 DEBUG oslo_concurrency.lockutils [req-c3b1b0d8-0976-4879-8cfe-f94ae703f7d3 req-4dad70eb-3a31-4da1-8ee0-82d97ff44987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:42:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.231 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.232 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.233 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.234 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.235 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:24 compute-2 ceph-mon[77138]: pgmap v3194: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Nov 29 08:42:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:42:24 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2527720565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.703 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.792 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:42:24 compute-2 nova_compute[232428]: 2025-11-29 08:42:24.793 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.039 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.041 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4077MB free_disk=20.946483612060547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.042 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.042 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.122 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance f506d897-b24f-45c5-8af9-49e145bdabd8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.122 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.123 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.172 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2527720565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:42:25 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3170314454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.667 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.676 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.698 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.717 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:42:25 compute-2 nova_compute[232428]: 2025-11-29 08:42:25.718 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:25.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:26 compute-2 ceph-mon[77138]: pgmap v3195: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 201 op/s
Nov 29 08:42:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3170314454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.335966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746336058, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 5758876, "memory_usage": 5828928, "flush_reason": "Manual Compaction"}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746357886, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3754644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67615, "largest_seqno": 69982, "table_properties": {"data_size": 3745151, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19677, "raw_average_key_size": 20, "raw_value_size": 3726216, "raw_average_value_size": 3857, "num_data_blocks": 262, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405531, "oldest_key_time": 1764405531, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 21969 microseconds, and 9163 cpu microseconds.
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.357937) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3754644 bytes OK
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.357964) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.359689) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.359715) EVENT_LOG_v1 {"time_micros": 1764405746359706, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.359738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5748554, prev total WAL file size 5748554, number of live WAL files 2.
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.361416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3666KB)], [135(10MB)]
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746361490, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 15257950, "oldest_snapshot_seqno": -1}
Nov 29 08:42:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9790 keys, 13330708 bytes, temperature: kUnknown
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746459493, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 13330708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13266161, "index_size": 38962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 257852, "raw_average_key_size": 26, "raw_value_size": 13093095, "raw_average_value_size": 1337, "num_data_blocks": 1489, "num_entries": 9790, "num_filter_entries": 9790, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.459741) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 13330708 bytes
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.461258) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.6 rd, 135.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.0 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 10307, records dropped: 517 output_compression: NoCompression
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.461284) EVENT_LOG_v1 {"time_micros": 1764405746461277, "job": 86, "event": "compaction_finished", "compaction_time_micros": 98080, "compaction_time_cpu_micros": 34399, "output_level": 6, "num_output_files": 1, "total_output_size": 13330708, "num_input_records": 10307, "num_output_records": 9790, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746461997, "job": 86, "event": "table_file_deletion", "file_number": 137}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746463970, "job": 86, "event": "table_file_deletion", "file_number": 135}
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.361286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.464020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.464024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.464026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.464027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:42:26.464028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:42:26 compute-2 nova_compute[232428]: 2025-11-29 08:42:26.719 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:26 compute-2 nova_compute[232428]: 2025-11-29 08:42:26.719 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:42:27 compute-2 nova_compute[232428]: 2025-11-29 08:42:27.206 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:27.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:42:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/225355033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:42:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:42:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/225355033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:42:28 compute-2 ceph-mon[77138]: pgmap v3196: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 205 op/s
Nov 29 08:42:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/225355033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:42:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/225355033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:42:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:28 compute-2 sudo[317477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:28 compute-2 sudo[317477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:28 compute-2 sudo[317477]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:28 compute-2 sudo[317502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:28 compute-2 sudo[317502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:28 compute-2 sudo[317502]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:29 compute-2 nova_compute[232428]: 2025-11-29 08:42:29.519 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:29.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:30 compute-2 ceph-mon[77138]: pgmap v3197: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Nov 29 08:42:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:30.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:31.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:32 compute-2 nova_compute[232428]: 2025-11-29 08:42:32.208 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:32 compute-2 ovn_controller[134375]: 2025-11-29T08:42:32Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:55:ce 10.100.0.4
Nov 29 08:42:32 compute-2 ovn_controller[134375]: 2025-11-29T08:42:32Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:55:ce 10.100.0.4
Nov 29 08:42:32 compute-2 ceph-mon[77138]: pgmap v3198: 305 pgs: 305 active+clean; 223 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 202 op/s
Nov 29 08:42:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3157097088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:32 compute-2 podman[317529]: 2025-11-29 08:42:32.722805402 +0000 UTC m=+0.100045704 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:42:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3395023748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:33.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:34.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:34 compute-2 ceph-mon[77138]: pgmap v3199: 305 pgs: 305 active+clean; 223 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 920 KiB/s wr, 73 op/s
Nov 29 08:42:34 compute-2 nova_compute[232428]: 2025-11-29 08:42:34.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:35 compute-2 sudo[317549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:35 compute-2 sudo[317549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:35 compute-2 sudo[317549]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:35 compute-2 sudo[317574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:42:35 compute-2 sudo[317574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:35 compute-2 sudo[317574]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:35 compute-2 sudo[317599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:35 compute-2 sudo[317599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:35 compute-2 sudo[317599]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:35.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:35 compute-2 sudo[317624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:42:35 compute-2 sudo[317624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:36 compute-2 ceph-mon[77138]: pgmap v3200: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Nov 29 08:42:36 compute-2 sudo[317624]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:36 compute-2 sudo[317682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:36 compute-2 sudo[317682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:36 compute-2 sudo[317682]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:36 compute-2 sudo[317707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:42:36 compute-2 sudo[317707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:36 compute-2 sudo[317707]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:36 compute-2 sudo[317732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:36 compute-2 sudo[317732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:36 compute-2 sudo[317732]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:36 compute-2 sudo[317757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 08:42:36 compute-2 sudo[317757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:37 compute-2 sudo[317757]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:37 compute-2 nova_compute[232428]: 2025-11-29 08:42:37.210 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:37.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:37 compute-2 ceph-mon[77138]: pgmap v3201: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 812 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:42:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:42:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:39 compute-2 nova_compute[232428]: 2025-11-29 08:42:39.523 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:39.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:40 compute-2 ceph-mon[77138]: pgmap v3202: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Nov 29 08:42:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:40.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:40 compute-2 podman[317802]: 2025-11-29 08:42:40.695048403 +0000 UTC m=+0.093182612 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:42:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:41.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:42 compute-2 ceph-mon[77138]: pgmap v3203: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.212 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.526 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.527 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.527 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.528 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.528 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.532 232432 INFO nova.compute.manager [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Terminating instance
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.534 232432 DEBUG nova.compute.manager [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.613 232432 DEBUG nova.compute.manager [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.614 232432 DEBUG nova.compute.manager [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing instance network info cache due to event network-changed-34c81def-f498-4c4f-81b4-2fe4e5a49109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.615 232432 DEBUG oslo_concurrency.lockutils [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.615 232432 DEBUG oslo_concurrency.lockutils [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.616 232432 DEBUG nova.network.neutron [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Refreshing network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:42:42 compute-2 kernel: tap34c81def-f4 (unregistering): left promiscuous mode
Nov 29 08:42:42 compute-2 NetworkManager[48993]: <info>  [1764405762.6349] device (tap34c81def-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:42:42 compute-2 ovn_controller[134375]: 2025-11-29T08:42:42Z|00870|binding|INFO|Releasing lport 34c81def-f498-4c4f-81b4-2fe4e5a49109 from this chassis (sb_readonly=0)
Nov 29 08:42:42 compute-2 ovn_controller[134375]: 2025-11-29T08:42:42Z|00871|binding|INFO|Setting lport 34c81def-f498-4c4f-81b4-2fe4e5a49109 down in Southbound
Nov 29 08:42:42 compute-2 ovn_controller[134375]: 2025-11-29T08:42:42Z|00872|binding|INFO|Removing iface tap34c81def-f4 ovn-installed in OVS
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.650 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.655 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:55:ce 10.100.0.4'], port_security=['fa:16:3e:7f:55:ce 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f506d897-b24f-45c5-8af9-49e145bdabd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894063c7-36d0-4a71-9166-1615063ba970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13089a2f-b904-4042-b03b-fd0243f0a0e2 63acda08-4d1e-4bb2-8030-51c6806afd85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2fe17ed-0713-4d56-a338-f4471c6841c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=34c81def-f498-4c4f-81b4-2fe4e5a49109) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.657 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 34c81def-f498-4c4f-81b4-2fe4e5a49109 in datapath 894063c7-36d0-4a71-9166-1615063ba970 unbound from our chassis
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.658 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 894063c7-36d0-4a71-9166-1615063ba970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.663 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2552b4-2281-48a4-bb57-c9bf7fc0790d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.664 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-894063c7-36d0-4a71-9166-1615063ba970 namespace which is not needed anymore
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.678 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Nov 29 08:42:42 compute-2 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000bb.scope: Consumed 15.793s CPU time.
Nov 29 08:42:42 compute-2 systemd-machined[194747]: Machine qemu-90-instance-000000bb terminated.
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.765 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.782 232432 INFO nova.virt.libvirt.driver [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Instance destroyed successfully.
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.782 232432 DEBUG nova.objects.instance [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid f506d897-b24f-45c5-8af9-49e145bdabd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.810 232432 DEBUG nova.virt.libvirt.vif [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-52242275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=187,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/7uGf5YOmiF3yBMOxVOMYJ5tSITnhIHM9lRMNh8qvT2Pa0/eqtrZ8XAbOaTJhL/Ro7+calPSQbw8ZciodMqvm8pu2EN59hYG6S3DDEVp0Ck0+lGuS63aLo4tG6bWHdoQ==',key_name='tempest-TestSecurityGroupsBasicOps-152032726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-nsnde6gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:17Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=f506d897-b24f-45c5-8af9-49e145bdabd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.811 232432 DEBUG nova.network.os_vif_util [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.812 232432 DEBUG nova.network.os_vif_util [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.812 232432 DEBUG os_vif [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.814 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.814 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34c81def-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.818 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.820 232432 INFO os_vif [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:55:ce,bridge_name='br-int',has_traffic_filtering=True,id=34c81def-f498-4c4f-81b4-2fe4e5a49109,network=Network(894063c7-36d0-4a71-9166-1615063ba970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c81def-f4')
Nov 29 08:42:42 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [NOTICE]   (317387) : haproxy version is 2.8.14-c23fe91
Nov 29 08:42:42 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [NOTICE]   (317387) : path to executable is /usr/sbin/haproxy
Nov 29 08:42:42 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [WARNING]  (317387) : Exiting Master process...
Nov 29 08:42:42 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [ALERT]    (317387) : Current worker (317389) exited with code 143 (Terminated)
Nov 29 08:42:42 compute-2 neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970[317383]: [WARNING]  (317387) : All workers exited. Exiting... (0)
Nov 29 08:42:42 compute-2 systemd[1]: libpod-e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a.scope: Deactivated successfully.
Nov 29 08:42:42 compute-2 podman[317856]: 2025-11-29 08:42:42.860543346 +0000 UTC m=+0.062760933 container died e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:42:42 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:42:42 compute-2 systemd[1]: var-lib-containers-storage-overlay-0b1e488ec4520808d567cd2f5d053ef06bf486779fcfcb63f0bbe1bb9ea50fbe-merged.mount: Deactivated successfully.
Nov 29 08:42:42 compute-2 podman[317856]: 2025-11-29 08:42:42.892417859 +0000 UTC m=+0.094635436 container cleanup e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:42:42 compute-2 systemd[1]: libpod-conmon-e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a.scope: Deactivated successfully.
Nov 29 08:42:42 compute-2 podman[317906]: 2025-11-29 08:42:42.956968098 +0000 UTC m=+0.044141765 container remove e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.965 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4672a500-ed2e-49e6-ae83-5bdc43491730]: (4, ('Sat Nov 29 08:42:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970 (e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a)\ne8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a\nSat Nov 29 08:42:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-894063c7-36d0-4a71-9166-1615063ba970 (e8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a)\ne8ad54b49f43620c5e63beab6a0938253270416a2484121f44cd983dba41269a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.967 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f996902c-5b1a-45e5-bcc8-30112765d0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.969 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894063c7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 kernel: tap894063c7-30: left promiscuous mode
Nov 29 08:42:42 compute-2 nova_compute[232428]: 2025-11-29 08:42:42.988 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:42.991 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0f499bf1-7457-43c7-8e88-bfef911bf526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:43.008 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[52ab7f3d-d75e-4505-ae4b-fe6659b1e986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:43.009 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5e38a2-9a51-43e0-93b7-cddf0a2c7b88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:43.025 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7f0d5d-0916-4416-83bb-8c1bf5116710]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870270, 'reachable_time': 28735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317921, 'error': None, 'target': 'ovnmeta-894063c7-36d0-4a71-9166-1615063ba970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:43 compute-2 systemd[1]: run-netns-ovnmeta\x2d894063c7\x2d36d0\x2d4a71\x2d9166\x2d1615063ba970.mount: Deactivated successfully.
Nov 29 08:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:43.028 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-894063c7-36d0-4a71-9166-1615063ba970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:42:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:42:43.028 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9c2729-7f2f-4083-bdfd-7b1c5f133a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.251 232432 INFO nova.virt.libvirt.driver [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Deleting instance files /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8_del
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.253 232432 INFO nova.virt.libvirt.driver [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Deletion of /var/lib/nova/instances/f506d897-b24f-45c5-8af9-49e145bdabd8_del complete
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.324 232432 INFO nova.compute.manager [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.325 232432 DEBUG oslo.service.loopingcall [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.325 232432 DEBUG nova.compute.manager [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:42:43 compute-2 nova_compute[232428]: 2025-11-29 08:42:43.325 232432 DEBUG nova.network.neutron [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:42:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:43.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:44 compute-2 ceph-mon[77138]: pgmap v3204: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 3.4 MiB/s wr, 114 op/s
Nov 29 08:42:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:42:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:44 compute-2 sudo[317924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:44 compute-2 sudo[317924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:44 compute-2 sudo[317924]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.543 232432 DEBUG nova.network.neutron [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:42:44 compute-2 sudo[317949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:42:44 compute-2 sudo[317949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:44 compute-2 sudo[317949]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.579 232432 INFO nova.compute.manager [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Took 1.25 seconds to deallocate network for instance.
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.649 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.650 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.658 232432 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-vif-unplugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.658 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.659 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.659 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.660 232432 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] No waiting events found dispatching network-vif-unplugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.661 232432 WARNING nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received unexpected event network-vif-unplugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 for instance with vm_state deleted and task_state None.
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.661 232432 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.662 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.662 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.663 232432 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.663 232432 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] No waiting events found dispatching network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.663 232432 WARNING nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received unexpected event network-vif-plugged-34c81def-f498-4c4f-81b4-2fe4e5a49109 for instance with vm_state deleted and task_state None.
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.701 232432 DEBUG nova.compute.manager [req-bf96f934-55a0-4f18-9fda-7dbf6ff1758f req-38f61165-d1da-403e-b445-d99aa1027966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Received event network-vif-deleted-34c81def-f498-4c4f-81b4-2fe4e5a49109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:42:44 compute-2 nova_compute[232428]: 2025-11-29 08:42:44.716 232432 DEBUG oslo_concurrency.processutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.019 232432 DEBUG nova.network.neutron [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updated VIF entry in instance network info cache for port 34c81def-f498-4c4f-81b4-2fe4e5a49109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.021 232432 DEBUG nova.network.neutron [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Updating instance_info_cache with network_info: [{"id": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "address": "fa:16:3e:7f:55:ce", "network": {"id": "894063c7-36d0-4a71-9166-1615063ba970", "bridge": "br-int", "label": "tempest-network-smoke--1388494047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c81def-f4", "ovs_interfaceid": "34c81def-f498-4c4f-81b4-2fe4e5a49109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.048 232432 DEBUG oslo_concurrency.lockutils [req-853d7696-8f7d-4163-a31a-987768ff9e91 req-5b3ee67e-b7ef-48df-ae20-d51d8c43f966 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f506d897-b24f-45c5-8af9-49e145bdabd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:42:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4164448671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:42:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2583305318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.236 232432 DEBUG oslo_concurrency.processutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.245 232432 DEBUG nova.compute.provider_tree [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.296 232432 DEBUG nova.scheduler.client.report [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.323 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.352 232432 INFO nova.scheduler.client.report [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance f506d897-b24f-45c5-8af9-49e145bdabd8
Nov 29 08:42:45 compute-2 nova_compute[232428]: 2025-11-29 08:42:45.448 232432 DEBUG oslo_concurrency.lockutils [None req-67144b58-fef6-49f5-9797-63094a82740c a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "f506d897-b24f-45c5-8af9-49e145bdabd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:42:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:46 compute-2 ceph-mon[77138]: pgmap v3205: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 3.4 MiB/s wr, 169 op/s
Nov 29 08:42:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2583305318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:42:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:47 compute-2 nova_compute[232428]: 2025-11-29 08:42:47.217 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:47 compute-2 nova_compute[232428]: 2025-11-29 08:42:47.699 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:47 compute-2 nova_compute[232428]: 2025-11-29 08:42:47.816 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:47 compute-2 nova_compute[232428]: 2025-11-29 08:42:47.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:48 compute-2 ceph-mon[77138]: pgmap v3206: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 243 KiB/s rd, 605 KiB/s wr, 84 op/s
Nov 29 08:42:48 compute-2 nova_compute[232428]: 2025-11-29 08:42:48.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:48 compute-2 nova_compute[232428]: 2025-11-29 08:42:48.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:42:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:42:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:48.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:42:48 compute-2 sudo[317999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:48 compute-2 sudo[317999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:48 compute-2 sudo[317999]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:48 compute-2 sudo[318024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:42:48 compute-2 sudo[318024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:42:48 compute-2 sudo[318024]: pam_unix(sudo:session): session closed for user root
Nov 29 08:42:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:50 compute-2 ceph-mon[77138]: pgmap v3207: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 56 op/s
Nov 29 08:42:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:50.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:51.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:51 compute-2 podman[318051]: 2025-11-29 08:42:51.982163677 +0000 UTC m=+0.156631525 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 08:42:52 compute-2 nova_compute[232428]: 2025-11-29 08:42:52.220 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:52 compute-2 ceph-mon[77138]: pgmap v3208: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 56 op/s
Nov 29 08:42:52 compute-2 nova_compute[232428]: 2025-11-29 08:42:52.819 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:53.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:54 compute-2 nova_compute[232428]: 2025-11-29 08:42:54.223 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:42:54 compute-2 nova_compute[232428]: 2025-11-29 08:42:54.224 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:42:54 compute-2 nova_compute[232428]: 2025-11-29 08:42:54.244 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:42:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:54.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:54 compute-2 ceph-mon[77138]: pgmap v3209: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Nov 29 08:42:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:55.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:56 compute-2 ceph-mon[77138]: pgmap v3210: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 2.3 KiB/s wr, 147 op/s
Nov 29 08:42:57 compute-2 nova_compute[232428]: 2025-11-29 08:42:57.222 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:57 compute-2 nova_compute[232428]: 2025-11-29 08:42:57.780 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405762.7788079, f506d897-b24f-45c5-8af9-49e145bdabd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:42:57 compute-2 nova_compute[232428]: 2025-11-29 08:42:57.780 232432 INFO nova.compute.manager [-] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] VM Stopped (Lifecycle Event)
Nov 29 08:42:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:42:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:57.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:42:57 compute-2 nova_compute[232428]: 2025-11-29 08:42:57.805 232432 DEBUG nova.compute.manager [None req-02cf8c2a-0dc0-4f5c-9b24-40a2e71daa4e - - - - - -] [instance: f506d897-b24f-45c5-8af9-49e145bdabd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:42:57 compute-2 nova_compute[232428]: 2025-11-29 08:42:57.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:42:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:42:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:42:58 compute-2 ceph-mon[77138]: pgmap v3211: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Nov 29 08:42:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:42:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:42:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.003000095s ======
Nov 29 08:42:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:59.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000095s
Nov 29 08:43:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:00 compute-2 ceph-mon[77138]: pgmap v3212: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Nov 29 08:43:01 compute-2 ceph-mon[77138]: pgmap v3213: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Nov 29 08:43:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:02 compute-2 nova_compute[232428]: 2025-11-29 08:43:02.224 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:02.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:02 compute-2 nova_compute[232428]: 2025-11-29 08:43:02.823 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.068 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.068 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.091 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.187 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.187 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.205 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.205 232432 INFO nova.compute.claims [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.338 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:03.346 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:03.347 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:03.347 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:03 compute-2 podman[318104]: 2025-11-29 08:43:03.709195364 +0000 UTC m=+0.092662195 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:43:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:43:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2149552916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.776 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.784 232432 DEBUG nova.compute.provider_tree [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:43:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:03.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.802 232432 DEBUG nova.scheduler.client.report [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.822 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.823 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.875 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.876 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.900 232432 INFO nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:43:03 compute-2 nova_compute[232428]: 2025-11-29 08:43:03.932 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.071 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.072 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.073 232432 INFO nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Creating image(s)
Nov 29 08:43:04 compute-2 ceph-mon[77138]: pgmap v3214: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Nov 29 08:43:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2149552916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.112 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.144 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.180 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.186 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.288 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.290 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.291 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.292 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.331 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.335 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:04.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.574 232432 DEBUG nova.policy [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.679 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.789 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.917 232432 DEBUG nova.objects.instance [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.935 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.935 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Ensure instance console log exists: /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.936 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.937 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:04 compute-2 nova_compute[232428]: 2025-11-29 08:43:04.937 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:05 compute-2 nova_compute[232428]: 2025-11-29 08:43:05.679 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Successfully created port: c30634d5-981b-440c-aaed-815b2591a3d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:43:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:06 compute-2 ceph-mon[77138]: pgmap v3215: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 554 KiB/s wr, 196 op/s
Nov 29 08:43:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.462 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Successfully updated port: c30634d5-981b-440c-aaed-815b2591a3d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.479 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.479 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.480 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.562 232432 DEBUG nova.compute.manager [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.562 232432 DEBUG nova.compute.manager [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing instance network info cache due to event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.563 232432 DEBUG oslo_concurrency.lockutils [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:06 compute-2 nova_compute[232428]: 2025-11-29 08:43:06.748 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.228 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.742 232432 DEBUG nova.network.neutron [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.767 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.767 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance network_info: |[{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.768 232432 DEBUG oslo_concurrency.lockutils [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.768 232432 DEBUG nova.network.neutron [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.776 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start _get_guest_xml network_info=[{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.784 232432 WARNING nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.796 232432 DEBUG nova.virt.libvirt.host [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.797 232432 DEBUG nova.virt.libvirt.host [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.801 232432 DEBUG nova.virt.libvirt.host [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.803 232432 DEBUG nova.virt.libvirt.host [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:43:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:07.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.806 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.806 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.807 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.808 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.809 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.809 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.810 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.811 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.812 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.812 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.813 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.814 232432 DEBUG nova.virt.hardware [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.821 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:07 compute-2 nova_compute[232428]: 2025-11-29 08:43:07.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:08 compute-2 ceph-mon[77138]: pgmap v3216: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 554 KiB/s wr, 105 op/s
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.222 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:43:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3251200322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.293 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.336 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.342 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:08.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:43:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2304662333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.828 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.831 232432 DEBUG nova.virt.libvirt.vif [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:03Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.832 232432 DEBUG nova.network.os_vif_util [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.834 232432 DEBUG nova.network.os_vif_util [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.836 232432 DEBUG nova.objects.instance [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.862 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <uuid>52de3669-ccbb-4d2c-948b-abc4aae3b8e4</uuid>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <name>instance-000000bc</name>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1344969912</nova:name>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:43:07</nova:creationTime>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <nova:port uuid="c30634d5-981b-440c-aaed-815b2591a3d4">
Nov 29 08:43:08 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <system>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="serial">52de3669-ccbb-4d2c-948b-abc4aae3b8e4</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="uuid">52de3669-ccbb-4d2c-948b-abc4aae3b8e4</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </system>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <os>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </os>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <features>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </features>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk">
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config">
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:43:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:2f:d2:47"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <target dev="tapc30634d5-98"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/console.log" append="off"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <video>
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </video>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:43:08 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:43:08 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:43:08 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:43:08 compute-2 nova_compute[232428]: </domain>
Nov 29 08:43:08 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.864 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Preparing to wait for external event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.865 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.865 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.866 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.868 232432 DEBUG nova.virt.libvirt.vif [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:03Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.869 232432 DEBUG nova.network.os_vif_util [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.870 232432 DEBUG nova.network.os_vif_util [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.871 232432 DEBUG os_vif [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.874 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.875 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.879 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.880 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc30634d5-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.881 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc30634d5-98, col_values=(('external_ids', {'iface-id': 'c30634d5-981b-440c-aaed-815b2591a3d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:d2:47', 'vm-uuid': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:08 compute-2 NetworkManager[48993]: <info>  [1764405788.8854] manager: (tapc30634d5-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.887 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.893 232432 INFO os_vif [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98')
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.960 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.961 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.961 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:2f:d2:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:43:08 compute-2 nova_compute[232428]: 2025-11-29 08:43:08.961 232432 INFO nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Using config drive
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.010 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:09 compute-2 sudo[318375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:09 compute-2 sudo[318375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:09 compute-2 sudo[318375]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3416482593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3251200322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2304662333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:09 compute-2 sudo[318402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:09 compute-2 sudo[318402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:09 compute-2 sudo[318402]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.619 232432 DEBUG nova.network.neutron [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated VIF entry in instance network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.620 232432 DEBUG nova.network.neutron [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.636 232432 DEBUG oslo_concurrency.lockutils [req-d048fe87-b13e-416d-ae63-f712a0ce3d59 req-e7893a59-efb2-4a36-be89-9f2b2c4125c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.725 232432 INFO nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Creating config drive at /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.736 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd56mkla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:09.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.885 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd56mkla" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.933 232432 DEBUG nova.storage.rbd_utils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:09 compute-2 nova_compute[232428]: 2025-11-29 08:43:09.939 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:10 compute-2 ceph-mon[77138]: pgmap v3217: 305 pgs: 305 active+clean; 147 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.0 MiB/s wr, 106 op/s
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.154 232432 DEBUG oslo_concurrency.processutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config 52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.155 232432 INFO nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Deleting local config drive /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/disk.config because it was imported into RBD.
Nov 29 08:43:10 compute-2 kernel: tapc30634d5-98: entered promiscuous mode
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 ovn_controller[134375]: 2025-11-29T08:43:10Z|00873|binding|INFO|Claiming lport c30634d5-981b-440c-aaed-815b2591a3d4 for this chassis.
Nov 29 08:43:10 compute-2 ovn_controller[134375]: 2025-11-29T08:43:10Z|00874|binding|INFO|c30634d5-981b-440c-aaed-815b2591a3d4: Claiming fa:16:3e:2f:d2:47 10.100.0.9
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.2442] manager: (tapc30634d5-98): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.267 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d2:47 10.100.0.9'], port_security=['fa:16:3e:2f:d2:47 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3aa44838-3538-48cc-aa78-ee7437a5a87d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f970837-a742-427c-bfc6-c51a824e5eec, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c30634d5-981b-440c-aaed-815b2591a3d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.268 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c30634d5-981b-440c-aaed-815b2591a3d4 in datapath 16035279-ee66-4ba0-b73b-de24bec8a7fe bound to our chassis
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.270 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16035279-ee66-4ba0-b73b-de24bec8a7fe
Nov 29 08:43:10 compute-2 systemd-udevd[318480]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.285 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7eae3461-b012-49cc-8aed-140d42348762]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.286 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16035279-e1 in ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.288 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16035279-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.288 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9a2319-8d7a-4108-9220-ca8bcc576a9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.289 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4a10af-368f-4c41-858f-2704802886fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 systemd-machined[194747]: New machine qemu-91-instance-000000bc.
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.2998] device (tapc30634d5-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.3011] device (tapc30634d5-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.303 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[55d94079-c296-419a-9d0d-3e6b9966965e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 systemd[1]: Started Virtual Machine qemu-91-instance-000000bc.
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.332 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e560f65a-66fd-4267-8187-77bcc3b87025]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_controller[134375]: 2025-11-29T08:43:10Z|00875|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 ovn-installed in OVS
Nov 29 08:43:10 compute-2 ovn_controller[134375]: 2025-11-29T08:43:10Z|00876|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 up in Southbound
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.378 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[456c6182-e3aa-4cf8-8778-b85d228ac2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.3860] manager: (tap16035279-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.384 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[860168b6-2565-4a36-99cd-05d97a013f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.428 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0ea5ce-7e2d-4a5c-8f67-d05341adc39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.432 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e5aecda9-c22d-4a9c-96f7-46caa031c39c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.4584] device (tap16035279-e0): carrier: link connected
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.469 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[442af126-e424-4683-8b24-36a0548e823a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[84d39296-3a7b-447d-a162-b5a858fb3110]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16035279-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875672, 'reachable_time': 36077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318514, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.512 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5f30ee-2d70-4cd5-ae56-66998c340dcf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:9a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 875672, 'tstamp': 875672}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318515, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.538 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8c282120-9c8e-4f4f-9443-4e361ecad386]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16035279-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875672, 'reachable_time': 36077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318516, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.588 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e9011068-5ab2-40c9-94c8-9acec3663027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.689 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4001d7e1-09a6-4138-beed-9e26251fb106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.692 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16035279-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.693 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.694 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16035279-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 kernel: tap16035279-e0: entered promiscuous mode
Nov 29 08:43:10 compute-2 NetworkManager[48993]: <info>  [1764405790.6982] manager: (tap16035279-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.704 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16035279-e0, col_values=(('external_ids', {'iface-id': '6d144850-0aa2-4d79-ba3f-ed60c65ed2f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.706 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 ovn_controller[134375]: 2025-11-29T08:43:10Z|00877|binding|INFO|Releasing lport 6d144850-0aa2-4d79-ba3f-ed60c65ed2f3 from this chassis (sb_readonly=0)
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.707 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.708 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.710 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2634482-a54a-4e31-b8a3-152b49d515ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.712 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-16035279-ee66-4ba0-b73b-de24bec8a7fe
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 16035279-ee66-4ba0-b73b-de24bec8a7fe
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:43:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:10.714 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'env', 'PROCESS_TAG=haproxy-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16035279-ee66-4ba0-b73b-de24bec8a7fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.720 232432 DEBUG nova.compute.manager [req-f662a6c9-1aa1-4cf3-8281-1baf74092e43 req-57fd1a30-694f-4088-bd16-b2f710edfa43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.720 232432 DEBUG oslo_concurrency.lockutils [req-f662a6c9-1aa1-4cf3-8281-1baf74092e43 req-57fd1a30-694f-4088-bd16-b2f710edfa43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.721 232432 DEBUG oslo_concurrency.lockutils [req-f662a6c9-1aa1-4cf3-8281-1baf74092e43 req-57fd1a30-694f-4088-bd16-b2f710edfa43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.721 232432 DEBUG oslo_concurrency.lockutils [req-f662a6c9-1aa1-4cf3-8281-1baf74092e43 req-57fd1a30-694f-4088-bd16-b2f710edfa43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.721 232432 DEBUG nova.compute.manager [req-f662a6c9-1aa1-4cf3-8281-1baf74092e43 req-57fd1a30-694f-4088-bd16-b2f710edfa43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Processing event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.721 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.858 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405790.8570693, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.858 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Started (Lifecycle Event)
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.861 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.874 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.880 232432 INFO nova.virt.libvirt.driver [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance spawned successfully.
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.881 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.886 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.902 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.908 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.909 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.909 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.910 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.910 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.911 232432 DEBUG nova.virt.libvirt.driver [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.922 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.922 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405790.857524, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.922 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Paused (Lifecycle Event)
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.955 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.967 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405790.8673477, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.967 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Resumed (Lifecycle Event)
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.993 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:10 compute-2 nova_compute[232428]: 2025-11-29 08:43:10.996 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:43:11 compute-2 nova_compute[232428]: 2025-11-29 08:43:11.000 232432 INFO nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Took 6.93 seconds to spawn the instance on the hypervisor.
Nov 29 08:43:11 compute-2 nova_compute[232428]: 2025-11-29 08:43:11.001 232432 DEBUG nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:11 compute-2 nova_compute[232428]: 2025-11-29 08:43:11.028 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:43:11 compute-2 nova_compute[232428]: 2025-11-29 08:43:11.121 232432 INFO nova.compute.manager [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Took 7.98 seconds to build instance.
Nov 29 08:43:11 compute-2 nova_compute[232428]: 2025-11-29 08:43:11.137 232432 DEBUG oslo_concurrency.lockutils [None req-d6de8cd4-b72f-49e0-b513-e4f7daf453de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:11 compute-2 podman[318591]: 2025-11-29 08:43:11.205836062 +0000 UTC m=+0.095797862 container create 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:43:11 compute-2 systemd[1]: Started libpod-conmon-2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade.scope.
Nov 29 08:43:11 compute-2 podman[318591]: 2025-11-29 08:43:11.164857837 +0000 UTC m=+0.054819707 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:43:11 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:43:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3926a8310b133bc9a3b7d26d1edd67f55a2b85872dc18b83387ecacbba5d02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:11 compute-2 podman[318591]: 2025-11-29 08:43:11.314963648 +0000 UTC m=+0.204925458 container init 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:43:11 compute-2 podman[318591]: 2025-11-29 08:43:11.320925284 +0000 UTC m=+0.210887094 container start 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:43:11 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [NOTICE]   (318627) : New worker (318632) forked
Nov 29 08:43:11 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [NOTICE]   (318627) : Loading success.
Nov 29 08:43:11 compute-2 podman[318604]: 2025-11-29 08:43:11.374850162 +0000 UTC m=+0.115954620 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:43:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:12 compute-2 ceph-mon[77138]: pgmap v3218: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.1 MiB/s wr, 63 op/s
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.873 232432 DEBUG nova.compute.manager [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.874 232432 DEBUG oslo_concurrency.lockutils [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.874 232432 DEBUG oslo_concurrency.lockutils [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.874 232432 DEBUG oslo_concurrency.lockutils [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.875 232432 DEBUG nova.compute.manager [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:12 compute-2 nova_compute[232428]: 2025-11-29 08:43:12.875 232432 WARNING nova.compute.manager [req-af83b331-8596-4e67-947a-2e6f3fb7cbe3 req-d1157a98-745f-4d74-9c2d-4e820858f279 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state None.
Nov 29 08:43:13 compute-2 nova_compute[232428]: 2025-11-29 08:43:13.690 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:13 compute-2 NetworkManager[48993]: <info>  [1764405793.6930] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 08:43:13 compute-2 NetworkManager[48993]: <info>  [1764405793.6940] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 08:43:13 compute-2 nova_compute[232428]: 2025-11-29 08:43:13.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:13 compute-2 ovn_controller[134375]: 2025-11-29T08:43:13Z|00878|binding|INFO|Releasing lport 6d144850-0aa2-4d79-ba3f-ed60c65ed2f3 from this chassis (sb_readonly=0)
Nov 29 08:43:13 compute-2 nova_compute[232428]: 2025-11-29 08:43:13.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:13.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:13 compute-2 nova_compute[232428]: 2025-11-29 08:43:13.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:14 compute-2 ceph-mon[77138]: pgmap v3219: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 51 op/s
Nov 29 08:43:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2297182486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1698676140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:15 compute-2 nova_compute[232428]: 2025-11-29 08:43:15.699 232432 DEBUG nova.compute.manager [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:15 compute-2 nova_compute[232428]: 2025-11-29 08:43:15.700 232432 DEBUG nova.compute.manager [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing instance network info cache due to event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:15 compute-2 nova_compute[232428]: 2025-11-29 08:43:15.701 232432 DEBUG oslo_concurrency.lockutils [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:15 compute-2 nova_compute[232428]: 2025-11-29 08:43:15.701 232432 DEBUG oslo_concurrency.lockutils [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:15 compute-2 nova_compute[232428]: 2025-11-29 08:43:15.701 232432 DEBUG nova.network.neutron [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:16 compute-2 nova_compute[232428]: 2025-11-29 08:43:16.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:16 compute-2 ceph-mon[77138]: pgmap v3220: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 29 08:43:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:17 compute-2 nova_compute[232428]: 2025-11-29 08:43:17.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:17 compute-2 nova_compute[232428]: 2025-11-29 08:43:17.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:43:17 compute-2 nova_compute[232428]: 2025-11-29 08:43:17.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:43:17 compute-2 nova_compute[232428]: 2025-11-29 08:43:17.234 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:18 compute-2 ceph-mon[77138]: pgmap v3221: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 103 op/s
Nov 29 08:43:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:18 compute-2 nova_compute[232428]: 2025-11-29 08:43:18.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:19 compute-2 nova_compute[232428]: 2025-11-29 08:43:19.072 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:19.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:20 compute-2 ceph-mon[77138]: pgmap v3222: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 108 op/s
Nov 29 08:43:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.555 232432 DEBUG nova.network.neutron [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated VIF entry in instance network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.556 232432 DEBUG nova.network.neutron [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.588 232432 DEBUG oslo_concurrency.lockutils [req-4e6c5580-7ed0-4071-8fe4-9a68c081374d req-d0ffb013-7e0c-4919-b34c-fe64a592b9b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.589 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.589 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:43:20 compute-2 nova_compute[232428]: 2025-11-29 08:43:20.589 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:21.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.133 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.157 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.157 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.157 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.157 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:22 compute-2 ceph-mon[77138]: pgmap v3223: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Nov 29 08:43:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:22 compute-2 podman[318648]: 2025-11-29 08:43:22.69613967 +0000 UTC m=+0.102481110 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:43:22 compute-2 nova_compute[232428]: 2025-11-29 08:43:22.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:22.850 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:43:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:22.852 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:43:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/495934207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:23 compute-2 nova_compute[232428]: 2025-11-29 08:43:23.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:24 compute-2 ceph-mon[77138]: pgmap v3224: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 505 KiB/s wr, 94 op/s
Nov 29 08:43:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/984546322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:25 compute-2 nova_compute[232428]: 2025-11-29 08:43:25.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:25.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:25 compute-2 ceph-mon[77138]: pgmap v3225: 305 pgs: 305 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 186 op/s
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.225 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.226 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:26.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:26 compute-2 ovn_controller[134375]: 2025-11-29T08:43:26Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:d2:47 10.100.0.9
Nov 29 08:43:26 compute-2 ovn_controller[134375]: 2025-11-29T08:43:26Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:d2:47 10.100.0.9
Nov 29 08:43:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:43:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2952308057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.695 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.767 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.767 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:43:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2952308057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.936 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.937 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4019MB free_disk=20.92789077758789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.937 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:26 compute-2 nova_compute[232428]: 2025-11-29 08:43:26.938 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.015 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.016 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.016 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.050 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.239 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:43:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1691782139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.498 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.504 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.519 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.546 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:43:27 compute-2 nova_compute[232428]: 2025-11-29 08:43:27.547 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:27.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:27 compute-2 ceph-mon[77138]: pgmap v3226: 305 pgs: 305 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 109 op/s
Nov 29 08:43:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1691782139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:43:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280722686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:43:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:43:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280722686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:43:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:28 compute-2 nova_compute[232428]: 2025-11-29 08:43:28.537 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:28 compute-2 nova_compute[232428]: 2025-11-29 08:43:28.560 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:43:28 compute-2 nova_compute[232428]: 2025-11-29 08:43:28.561 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:43:28 compute-2 nova_compute[232428]: 2025-11-29 08:43:28.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4280722686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:43:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4280722686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:43:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:29 compute-2 sudo[318724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:29 compute-2 sudo[318724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:29 compute-2 sudo[318724]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:29 compute-2 sudo[318749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:29 compute-2 sudo[318749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:29 compute-2 sudo[318749]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:29.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:29.854 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:29 compute-2 ceph-mon[77138]: pgmap v3227: 305 pgs: 305 active+clean; 238 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 29 08:43:30 compute-2 sshd-session[318774]: Invalid user solv from 45.148.10.240 port 43226
Nov 29 08:43:30 compute-2 sshd-session[318774]: Connection closed by invalid user solv 45.148.10.240 port 43226 [preauth]
Nov 29 08:43:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:31.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:32 compute-2 ceph-mon[77138]: pgmap v3228: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 29 08:43:32 compute-2 nova_compute[232428]: 2025-11-29 08:43:32.241 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:32.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:33 compute-2 nova_compute[232428]: 2025-11-29 08:43:33.496 232432 INFO nova.compute.manager [None req-a5ab8a0d-82aa-4df5-bb2b-11293c740c5e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Get console output
Nov 29 08:43:33 compute-2 nova_compute[232428]: 2025-11-29 08:43:33.505 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:43:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3387081859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:33 compute-2 nova_compute[232428]: 2025-11-29 08:43:33.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:34 compute-2 podman[318779]: 2025-11-29 08:43:34.711337854 +0000 UTC m=+0.088594178 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.757970) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814758044, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 953, "num_deletes": 252, "total_data_size": 1955570, "memory_usage": 1988920, "flush_reason": "Manual Compaction"}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814768549, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 851253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69987, "largest_seqno": 70935, "table_properties": {"data_size": 847495, "index_size": 1473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9909, "raw_average_key_size": 21, "raw_value_size": 839594, "raw_average_value_size": 1782, "num_data_blocks": 62, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405747, "oldest_key_time": 1764405747, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 10660 microseconds, and 6050 cpu microseconds.
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.768618) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 851253 bytes OK
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.768656) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.770596) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.770620) EVENT_LOG_v1 {"time_micros": 1764405814770612, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.770642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1950807, prev total WAL file size 1950807, number of live WAL files 2.
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.772111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323635' seq:72057594037927935, type:22 .. '6D6772737461740032353138' seq:0, type:0; will stop at (end)
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(831KB)], [138(12MB)]
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814772194, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 14181961, "oldest_snapshot_seqno": -1}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9768 keys, 10720034 bytes, temperature: kUnknown
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814836789, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10720034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10659569, "index_size": 34944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257597, "raw_average_key_size": 26, "raw_value_size": 10490593, "raw_average_value_size": 1073, "num_data_blocks": 1322, "num_entries": 9768, "num_filter_entries": 9768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.837254) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10720034 bytes
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.839212) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.9 rd, 165.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(29.3) write-amplify(12.6) OK, records in: 10261, records dropped: 493 output_compression: NoCompression
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.839244) EVENT_LOG_v1 {"time_micros": 1764405814839230, "job": 88, "event": "compaction_finished", "compaction_time_micros": 64791, "compaction_time_cpu_micros": 30116, "output_level": 6, "num_output_files": 1, "total_output_size": 10720034, "num_input_records": 10261, "num_output_records": 9768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814840157, "job": 88, "event": "table_file_deletion", "file_number": 140}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814845045, "job": 88, "event": "table_file_deletion", "file_number": 138}
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.772008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.845265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.845270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.845274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.845277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:34 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:43:34.845280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:43:35 compute-2 ceph-mon[77138]: pgmap v3229: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 29 08:43:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3845253938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:35.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:36 compute-2 ceph-mon[77138]: pgmap v3230: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 176 op/s
Nov 29 08:43:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:37 compute-2 nova_compute[232428]: 2025-11-29 08:43:37.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:37 compute-2 nova_compute[232428]: 2025-11-29 08:43:37.465 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:37 compute-2 nova_compute[232428]: 2025-11-29 08:43:37.465 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:37 compute-2 nova_compute[232428]: 2025-11-29 08:43:37.465 232432 DEBUG nova.network.neutron [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:43:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:38 compute-2 ceph-mon[77138]: pgmap v3231: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Nov 29 08:43:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/817634460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:38 compute-2 nova_compute[232428]: 2025-11-29 08:43:38.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:40 compute-2 ceph-mon[77138]: pgmap v3232: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 2.7 MiB/s wr, 86 op/s
Nov 29 08:43:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:40 compute-2 nova_compute[232428]: 2025-11-29 08:43:40.622 232432 DEBUG nova.network.neutron [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:40 compute-2 nova_compute[232428]: 2025-11-29 08:43:40.646 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:40 compute-2 nova_compute[232428]: 2025-11-29 08:43:40.796 232432 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 08:43:40 compute-2 nova_compute[232428]: 2025-11-29 08:43:40.797 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Creating file /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/eb8f551ac59d435cb07f11d2a06f5f9e.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 08:43:40 compute-2 nova_compute[232428]: 2025-11-29 08:43:40.798 232432 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/eb8f551ac59d435cb07f11d2a06f5f9e.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.224 232432 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/eb8f551ac59d435cb07f11d2a06f5f9e.tmp" returned: 1 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.226 232432 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/eb8f551ac59d435cb07f11d2a06f5f9e.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.227 232432 DEBUG nova.virt.libvirt.volume.remotefs [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Creating directory /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.228 232432 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.454 232432 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:41 compute-2 nova_compute[232428]: 2025-11-29 08:43:41.460 232432 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 08:43:41 compute-2 podman[318802]: 2025-11-29 08:43:41.663463037 +0000 UTC m=+0.065278893 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:43:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:42 compute-2 nova_compute[232428]: 2025-11-29 08:43:42.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:42 compute-2 ceph-mon[77138]: pgmap v3233: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Nov 29 08:43:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:43 compute-2 kernel: tapc30634d5-98 (unregistering): left promiscuous mode
Nov 29 08:43:43 compute-2 NetworkManager[48993]: <info>  [1764405823.7395] device (tapc30634d5-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 ovn_controller[134375]: 2025-11-29T08:43:43Z|00879|binding|INFO|Releasing lport c30634d5-981b-440c-aaed-815b2591a3d4 from this chassis (sb_readonly=0)
Nov 29 08:43:43 compute-2 ovn_controller[134375]: 2025-11-29T08:43:43Z|00880|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 down in Southbound
Nov 29 08:43:43 compute-2 ovn_controller[134375]: 2025-11-29T08:43:43Z|00881|binding|INFO|Removing iface tapc30634d5-98 ovn-installed in OVS
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.753 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:43.766 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d2:47 10.100.0.9'], port_security=['fa:16:3e:2f:d2:47 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3aa44838-3538-48cc-aa78-ee7437a5a87d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f970837-a742-427c-bfc6-c51a824e5eec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c30634d5-981b-440c-aaed-815b2591a3d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:43:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:43.767 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c30634d5-981b-440c-aaed-815b2591a3d4 in datapath 16035279-ee66-4ba0-b73b-de24bec8a7fe unbound from our chassis
Nov 29 08:43:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:43.769 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16035279-ee66-4ba0-b73b-de24bec8a7fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:43:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:43.770 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0642bf9d-2782-4f0b-8f4a-89285e01956f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:43.772 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe namespace which is not needed anymore
Nov 29 08:43:43 compute-2 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Nov 29 08:43:43 compute-2 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000bc.scope: Consumed 15.552s CPU time.
Nov 29 08:43:43 compute-2 systemd-machined[194747]: Machine qemu-91-instance-000000bc terminated.
Nov 29 08:43:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.899 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [NOTICE]   (318627) : haproxy version is 2.8.14-c23fe91
Nov 29 08:43:43 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [NOTICE]   (318627) : path to executable is /usr/sbin/haproxy
Nov 29 08:43:43 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [WARNING]  (318627) : Exiting Master process...
Nov 29 08:43:43 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [ALERT]    (318627) : Current worker (318632) exited with code 143 (Terminated)
Nov 29 08:43:43 compute-2 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[318607]: [WARNING]  (318627) : All workers exited. Exiting... (0)
Nov 29 08:43:43 compute-2 systemd[1]: libpod-2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade.scope: Deactivated successfully.
Nov 29 08:43:43 compute-2 podman[318850]: 2025-11-29 08:43:43.915527936 +0000 UTC m=+0.044396924 container died 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:43:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade-userdata-shm.mount: Deactivated successfully.
Nov 29 08:43:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-ba3926a8310b133bc9a3b7d26d1edd67f55a2b85872dc18b83387ecacbba5d02-merged.mount: Deactivated successfully.
Nov 29 08:43:43 compute-2 podman[318850]: 2025-11-29 08:43:43.949295537 +0000 UTC m=+0.078164525 container cleanup 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:43:43 compute-2 systemd[1]: libpod-conmon-2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade.scope: Deactivated successfully.
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.973 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.993 232432 DEBUG nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.994 232432 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.995 232432 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.995 232432 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.996 232432 DEBUG nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:43 compute-2 nova_compute[232428]: 2025-11-29 08:43:43.996 232432 WARNING nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_migrating.
Nov 29 08:43:44 compute-2 podman[318882]: 2025-11-29 08:43:44.022115042 +0000 UTC m=+0.049541823 container remove 2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.028 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb398b5-2f52-401a-b95a-4c3e83359a8d]: (4, ('Sat Nov 29 08:43:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe (2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade)\n2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade\nSat Nov 29 08:43:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe (2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade)\n2f1b4562412a49cf629286f4f9e17d8389709f060ed8e289cfbfcc20a5d3fade\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.030 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[278e48fe-6372-4138-8b6b-4344dfbc9a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.032 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16035279-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:44 compute-2 kernel: tap16035279-e0: left promiscuous mode
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.058 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.062 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c435b7b6-cfe1-4205-bbd5-fb3fbb5978a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.078 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[646eb10e-58ee-463c-97c5-8ab6ef6bb20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.079 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cf46cabf-61ea-4f37-a423-9aeef30e7ae4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.095 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e067e736-0e05-414b-965f-db885657090a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875663, 'reachable_time': 22070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318908, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.098 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:43:44 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:44.098 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[49cfe051-28de-4fdc-b174-1c6fa70c0347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:44 compute-2 systemd[1]: run-netns-ovnmeta\x2d16035279\x2dee66\x2d4ba0\x2db73b\x2dde24bec8a7fe.mount: Deactivated successfully.
Nov 29 08:43:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.478 232432 INFO nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance shutdown successfully after 3 seconds.
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.483 232432 INFO nova.virt.libvirt.driver [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance destroyed successfully.
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.484 232432 DEBUG nova.virt.libvirt.vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:36Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.485 232432 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.486 232432 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.487 232432 DEBUG os_vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.489 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.489 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc30634d5-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.492 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.495 232432 INFO os_vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98')
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.500 232432 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:43:44 compute-2 nova_compute[232428]: 2025-11-29 08:43:44.500 232432 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:43:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:44 compute-2 sudo[318909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:44 compute-2 sudo[318909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:44 compute-2 sudo[318909]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:44 compute-2 sudo[318934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:43:44 compute-2 sudo[318934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:44 compute-2 sudo[318934]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:44 compute-2 sudo[318959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:44 compute-2 sudo[318959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:44 compute-2 sudo[318959]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:44 compute-2 sudo[318984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:43:44 compute-2 sudo[318984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.237 232432 DEBUG neutronclient.v2_0.client [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port c30634d5-981b-440c-aaed-815b2591a3d4 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.398 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.399 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.399 232432 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:45 compute-2 sudo[318984]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:45 compute-2 sudo[319039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:45 compute-2 sudo[319039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:45 compute-2 sudo[319039]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:45 compute-2 sudo[319064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:43:45 compute-2 sudo[319064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:45 compute-2 sudo[319064]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.685 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.685 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:45 compute-2 sudo[319089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:45 compute-2 sudo[319089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.705 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:43:45 compute-2 sudo[319089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:45 compute-2 sudo[319114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 08:43:45 compute-2 sudo[319114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.786 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.787 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.797 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.797 232432 INFO nova.compute.claims [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:43:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:45.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:45 compute-2 nova_compute[232428]: 2025-11-29 08:43:45.925 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.190 232432 DEBUG nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.192 232432 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.193 232432 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.193 232432 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.193 232432 DEBUG nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.194 232432 WARNING nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_migrated.
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.195605845 +0000 UTC m=+0.056861271 container create 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:43:46 compute-2 systemd[1]: Started libpod-conmon-3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c.scope.
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.172109865 +0000 UTC m=+0.033365311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:43:46 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.301872262 +0000 UTC m=+0.163127768 container init 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.310507241 +0000 UTC m=+0.171762687 container start 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.313967399 +0000 UTC m=+0.175222855 container attach 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:43:46 compute-2 happy_lederberg[319217]: 167 167
Nov 29 08:43:46 compute-2 systemd[1]: libpod-3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c.scope: Deactivated successfully.
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.322209155 +0000 UTC m=+0.183464581 container died 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:43:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:43:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4146842364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-ff21ce59ee751f65dcc9531f49ccf6aeb1b1974c4d1be6780fb75074635f6327-merged.mount: Deactivated successfully.
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.349 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.360 232432 DEBUG nova.compute.provider_tree [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:43:46 compute-2 podman[319201]: 2025-11-29 08:43:46.367131623 +0000 UTC m=+0.228387039 container remove 3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:43:46 compute-2 systemd[1]: libpod-conmon-3a5c1b59860130064f451a63bb2b9b43d42590a52df35d51602ebfd4fd2fdb8c.scope: Deactivated successfully.
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.393 232432 DEBUG nova.scheduler.client.report [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.440 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.441 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.489 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.489 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:43:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:46.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.513 232432 INFO nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.531 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:43:46 compute-2 podman[319243]: 2025-11-29 08:43:46.553295087 +0000 UTC m=+0.055963723 container create c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:43:46 compute-2 systemd[1]: Started libpod-conmon-c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0.scope.
Nov 29 08:43:46 compute-2 podman[319243]: 2025-11-29 08:43:46.523732047 +0000 UTC m=+0.026400773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:43:46 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:43:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b501c485f2c62356d07650b652dc79ac59530b98dbbd810f36efc46f17d27c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b501c485f2c62356d07650b652dc79ac59530b98dbbd810f36efc46f17d27c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b501c485f2c62356d07650b652dc79ac59530b98dbbd810f36efc46f17d27c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b501c485f2c62356d07650b652dc79ac59530b98dbbd810f36efc46f17d27c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:46 compute-2 podman[319243]: 2025-11-29 08:43:46.660307077 +0000 UTC m=+0.162975773 container init c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:43:46 compute-2 podman[319243]: 2025-11-29 08:43:46.669459192 +0000 UTC m=+0.172127818 container start c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:43:46 compute-2 podman[319243]: 2025-11-29 08:43:46.673107606 +0000 UTC m=+0.175776232 container attach c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.681 232432 DEBUG nova.policy [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:43:46 compute-2 ceph-mon[77138]: pgmap v3234: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:43:46 compute-2 ceph-mon[77138]: pgmap v3235: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Nov 29 08:43:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4146842364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.697 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.699 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.699 232432 INFO nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Creating image(s)
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.730 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.763 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.804 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.810 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.899 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.900 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.901 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.902 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.976 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:46 compute-2 nova_compute[232428]: 2025-11-29 08:43:46.982 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 31ec469a-56ea-4a93-8238-dd69ee5665db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.163 232432 DEBUG nova.compute.manager [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.163 232432 DEBUG nova.compute.manager [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing instance network info cache due to event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.163 232432 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.164 232432 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.164 232432 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.321 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 31ec469a-56ea-4a93-8238-dd69ee5665db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.395 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.466 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Successfully created port: 9dbbf051-93f7-4b5a-8dac-a9943d954736 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.506 232432 DEBUG nova.objects.instance [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid 31ec469a-56ea-4a93-8238-dd69ee5665db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.518 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.518 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Ensure instance console log exists: /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.519 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.519 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:47 compute-2 nova_compute[232428]: 2025-11-29 08:43:47.519 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:43:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:47.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]: [
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:     {
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "available": false,
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "ceph_device": false,
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "lsm_data": {},
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "lvs": [],
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "path": "/dev/sr0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "rejected_reasons": [
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "Insufficient space (<5GB)",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "Has a FileSystem"
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         ],
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         "sys_api": {
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "actuators": null,
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "device_nodes": "sr0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "devname": "sr0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "human_readable_size": "482.00 KB",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "id_bus": "ata",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "model": "QEMU DVD-ROM",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "nr_requests": "2",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "parent": "/dev/sr0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "partitions": {},
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "path": "/dev/sr0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "removable": "1",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "rev": "2.5+",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "ro": "0",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "rotational": "1",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "sas_address": "",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "sas_device_handle": "",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "scheduler_mode": "mq-deadline",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "sectors": 0,
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "sectorsize": "2048",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "size": 493568.0,
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "support_discard": "2048",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "type": "disk",
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:             "vendor": "QEMU"
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:         }
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]:     }
Nov 29 08:43:47 compute-2 priceless_visvesvaraya[319259]: ]
Nov 29 08:43:47 compute-2 systemd[1]: libpod-c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0.scope: Deactivated successfully.
Nov 29 08:43:47 compute-2 podman[319243]: 2025-11-29 08:43:47.941541541 +0000 UTC m=+1.444210167 container died c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:43:47 compute-2 systemd[1]: libpod-c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0.scope: Consumed 1.278s CPU time.
Nov 29 08:43:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-33b501c485f2c62356d07650b652dc79ac59530b98dbbd810f36efc46f17d27c-merged.mount: Deactivated successfully.
Nov 29 08:43:48 compute-2 podman[319243]: 2025-11-29 08:43:48.003148308 +0000 UTC m=+1.505816924 container remove c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 08:43:48 compute-2 systemd[1]: libpod-conmon-c48a1fd88ffe98cedcdbb92b7eee9b02555aa71284af7b0c977cd5d12a30e3b0.scope: Deactivated successfully.
Nov 29 08:43:48 compute-2 sudo[319114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.139 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Successfully updated port: 9dbbf051-93f7-4b5a-8dac-a9943d954736 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.157 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.157 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.158 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.225 232432 DEBUG nova.compute.manager [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.225 232432 DEBUG nova.compute.manager [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing instance network info cache due to event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.226 232432 DEBUG oslo_concurrency.lockutils [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.294 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:43:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:48.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.577 232432 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated VIF entry in instance network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.577 232432 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:48 compute-2 nova_compute[232428]: 2025-11-29 08:43:48.594 232432 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:49 compute-2 sudo[320746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:49 compute-2 sudo[320746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:49 compute-2 sudo[320746]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:49 compute-2 sudo[320772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:43:49 compute-2 sudo[320772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:43:49 compute-2 sudo[320772]: pam_unix(sudo:session): session closed for user root
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.818 232432 DEBUG nova.network.neutron [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.852 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.853 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance network_info: |[{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.853 232432 DEBUG oslo_concurrency.lockutils [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.853 232432 DEBUG nova.network.neutron [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.857 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Start _get_guest_xml network_info=[{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.864 232432 WARNING nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.871 232432 DEBUG nova.virt.libvirt.host [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.872 232432 DEBUG nova.virt.libvirt.host [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.874 232432 DEBUG nova.virt.libvirt.host [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.875 232432 DEBUG nova.virt.libvirt.host [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.876 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.877 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.877 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.877 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.877 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.878 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.878 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.878 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.878 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.878 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.879 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.879 232432 DEBUG nova.virt.hardware [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:43:49 compute-2 nova_compute[232428]: 2025-11-29 08:43:49.881 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 08:43:50 compute-2 ceph-mon[77138]: pgmap v3236: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 41 KiB/s wr, 12 op/s
Nov 29 08:43:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:43:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2353362858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.357 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.395 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.399 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:43:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/440027293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.882 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.885 232432 DEBUG nova.virt.libvirt.vif [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=190,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-3a32aqcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:46Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=31ec469a-56ea-4a93-8238-dd69ee5665db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.886 232432 DEBUG nova.network.os_vif_util [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.887 232432 DEBUG nova.network.os_vif_util [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.890 232432 DEBUG nova.objects.instance [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 31ec469a-56ea-4a93-8238-dd69ee5665db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.913 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <uuid>31ec469a-56ea-4a93-8238-dd69ee5665db</uuid>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <name>instance-000000be</name>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028</nova:name>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:43:49</nova:creationTime>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <nova:port uuid="9dbbf051-93f7-4b5a-8dac-a9943d954736">
Nov 29 08:43:50 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <system>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="serial">31ec469a-56ea-4a93-8238-dd69ee5665db</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="uuid">31ec469a-56ea-4a93-8238-dd69ee5665db</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </system>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <os>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </os>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <features>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </features>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/31ec469a-56ea-4a93-8238-dd69ee5665db_disk">
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </source>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config">
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </source>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:43:50 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:fb:a5:77"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <target dev="tap9dbbf051-93"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/console.log" append="off"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <video>
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </video>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:43:50 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:43:50 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:43:50 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:43:50 compute-2 nova_compute[232428]: </domain>
Nov 29 08:43:50 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.915 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Preparing to wait for external event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.916 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.916 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.916 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.917 232432 DEBUG nova.virt.libvirt.vif [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=190,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-3a32aqcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:46Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=31ec469a-56ea-4a93-8238-dd69ee5665db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.917 232432 DEBUG nova.network.os_vif_util [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.918 232432 DEBUG nova.network.os_vif_util [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.918 232432 DEBUG os_vif [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.920 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.920 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.924 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dbbf051-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.925 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dbbf051-93, col_values=(('external_ids', {'iface-id': '9dbbf051-93f7-4b5a-8dac-a9943d954736', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:a5:77', 'vm-uuid': '31ec469a-56ea-4a93-8238-dd69ee5665db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.927 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:50 compute-2 NetworkManager[48993]: <info>  [1764405830.9295] manager: (tap9dbbf051-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.937 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.938 232432 INFO os_vif [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93')
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.997 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.998 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.998 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:fb:a5:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:43:50 compute-2 nova_compute[232428]: 2025-11-29 08:43:50.998 232432 INFO nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Using config drive
Nov 29 08:43:51 compute-2 nova_compute[232428]: 2025-11-29 08:43:51.027 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:51 compute-2 ceph-mon[77138]: pgmap v3237: 305 pgs: 305 active+clean; 286 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 263 KiB/s wr, 35 op/s
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: osdmap e408: 3 total, 3 up, 3 in
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2353362858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/700537020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/440027293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:51 compute-2 nova_compute[232428]: 2025-11-29 08:43:51.834 232432 INFO nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Creating config drive at /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config
Nov 29 08:43:51 compute-2 nova_compute[232428]: 2025-11-29 08:43:51.842 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhmb20o1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:51 compute-2 nova_compute[232428]: 2025-11-29 08:43:51.995 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhmb20o1" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.047 232432 DEBUG nova.storage.rbd_utils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.053 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config 31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:52 compute-2 ceph-mon[77138]: pgmap v3239: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 29 08:43:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1001391614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.251 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.267 232432 DEBUG oslo_concurrency.processutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config 31ec469a-56ea-4a93-8238-dd69ee5665db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.269 232432 INFO nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Deleting local config drive /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db/disk.config because it was imported into RBD.
Nov 29 08:43:52 compute-2 kernel: tap9dbbf051-93: entered promiscuous mode
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.3609] manager: (tap9dbbf051-93): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 08:43:52 compute-2 ovn_controller[134375]: 2025-11-29T08:43:52Z|00882|binding|INFO|Claiming lport 9dbbf051-93f7-4b5a-8dac-a9943d954736 for this chassis.
Nov 29 08:43:52 compute-2 ovn_controller[134375]: 2025-11-29T08:43:52Z|00883|binding|INFO|9dbbf051-93f7-4b5a-8dac-a9943d954736: Claiming fa:16:3e:fb:a5:77 10.100.0.6
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.385 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a5:77 10.100.0.6'], port_security=['fa:16:3e:fb:a5:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '31ec469a-56ea-4a93-8238-dd69ee5665db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a125bd8-3063-451d-9def-2dc2c28d61df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cdd894-e2ae-4700-83cb-f8f82b6152b9, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9dbbf051-93f7-4b5a-8dac-a9943d954736) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.386 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9dbbf051-93f7-4b5a-8dac-a9943d954736 in datapath 826294d5-f5eb-469a-9ec9-f18a05fdaa3c bound to our chassis
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.387 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 08:43:52 compute-2 ovn_controller[134375]: 2025-11-29T08:43:52Z|00884|binding|INFO|Setting lport 9dbbf051-93f7-4b5a-8dac-a9943d954736 ovn-installed in OVS
Nov 29 08:43:52 compute-2 ovn_controller[134375]: 2025-11-29T08:43:52Z|00885|binding|INFO|Setting lport 9dbbf051-93f7-4b5a-8dac-a9943d954736 up in Southbound
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.399 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.402 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.409 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[95a0db25-d1e4-4f8d-adaf-65140010593a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.410 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap826294d5-f1 in ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.414 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap826294d5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:43:52 compute-2 systemd-udevd[320936]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.415 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c94b1a44-55a3-48ff-ac3d-fe7db832d10c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.417 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba855bb-3763-48f6-8d31-956a9c6cd469]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 systemd-machined[194747]: New machine qemu-92-instance-000000be.
Nov 29 08:43:52 compute-2 systemd[1]: Started Virtual Machine qemu-92-instance-000000be.
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.4427] device (tap9dbbf051-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.443 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[11d8e7d9-3f4d-44f0-9f41-138aa0e2af39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.4458] device (tap9dbbf051-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.481 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6484f8c7-7f76-429c-8afc-bcc4a66ad80d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:52.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.544 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e1dc15-46f6-4bf6-9db7-8d4fc03d6d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.5546] manager: (tap826294d5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.553 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4a92e4be-c4a7-482a-9465-d7f4de8f7105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.608 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c54473d-d513-429e-941b-73f8cfccd81d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.613 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b4f0ce-f5bb-4ca7-b804-65915188f300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.6560] device (tap826294d5-f0): carrier: link connected
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.664 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[30bceb42-bbf1-4463-8f72-1719be2a6d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.694 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09fce9c8-af53-42e0-b749-4568ad34c5b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap826294d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a9:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879892, 'reachable_time': 20383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320968, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.722 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5de6c709-a233-407e-9c9f-77557209410a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:a908'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 879892, 'tstamp': 879892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320969, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.750 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[14127a89-d1eb-4c98-9807-7853268511d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap826294d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a9:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879892, 'reachable_time': 20383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320970, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.803 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d81d5d7b-d648-442e-92a9-03efd58336ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.906 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f976aabb-c43e-4077-8f6e-a8a5b2d14ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.908 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap826294d5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.909 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.910 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap826294d5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 kernel: tap826294d5-f0: entered promiscuous mode
Nov 29 08:43:52 compute-2 NetworkManager[48993]: <info>  [1764405832.9167] manager: (tap826294d5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.916 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.918 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap826294d5-f0, col_values=(('external_ids', {'iface-id': '1e48b477-303e-412f-b368-d958453e1fe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 ovn_controller[134375]: 2025-11-29T08:43:52Z|00886|binding|INFO|Releasing lport 1e48b477-303e-412f-b368-d958453e1fe0 from this chassis (sb_readonly=0)
Nov 29 08:43:52 compute-2 nova_compute[232428]: 2025-11-29 08:43:52.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.955 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.956 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[393aa988-304f-4180-a96c-ae05b44889ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.957 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:43:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:43:52.959 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'env', 'PROCESS_TAG=haproxy-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.014 232432 DEBUG nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.021 232432 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.022 232432 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.022 232432 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.023 232432 DEBUG nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.023 232432 WARNING nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_finish.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.024 232432 DEBUG nova.compute.manager [req-9e235799-1458-4e40-b9c3-937d0be7f272 req-e421722d-7137-43d2-8dd3-c9551dc5f615 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.025 232432 DEBUG oslo_concurrency.lockutils [req-9e235799-1458-4e40-b9c3-937d0be7f272 req-e421722d-7137-43d2-8dd3-c9551dc5f615 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.025 232432 DEBUG oslo_concurrency.lockutils [req-9e235799-1458-4e40-b9c3-937d0be7f272 req-e421722d-7137-43d2-8dd3-c9551dc5f615 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.026 232432 DEBUG oslo_concurrency.lockutils [req-9e235799-1458-4e40-b9c3-937d0be7f272 req-e421722d-7137-43d2-8dd3-c9551dc5f615 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.027 232432 DEBUG nova.compute.manager [req-9e235799-1458-4e40-b9c3-937d0be7f272 req-e421722d-7137-43d2-8dd3-c9551dc5f615 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Processing event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.218 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405833.2179775, 31ec469a-56ea-4a93-8238-dd69ee5665db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.219 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] VM Started (Lifecycle Event)
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.222 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.226 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.230 232432 INFO nova.virt.libvirt.driver [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance spawned successfully.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.231 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.237 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.242 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.253 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.254 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.254 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.255 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.255 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.256 232432 DEBUG nova.virt.libvirt.driver [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.260 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.260 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405833.219679, 31ec469a-56ea-4a93-8238-dd69ee5665db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.260 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] VM Paused (Lifecycle Event)
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.293 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.298 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405833.2261653, 31ec469a-56ea-4a93-8238-dd69ee5665db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.299 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] VM Resumed (Lifecycle Event)
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.311 232432 INFO nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Took 6.61 seconds to spawn the instance on the hypervisor.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.312 232432 DEBUG nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.342 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.345 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.368 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.377 232432 INFO nova.compute.manager [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Took 7.62 seconds to build instance.
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.392 232432 DEBUG oslo_concurrency.lockutils [None req-05d50817-a4e9-4c98-815f-955a462ab366 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:53 compute-2 podman[321044]: 2025-11-29 08:43:53.402863257 +0000 UTC m=+0.065579902 container create b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:43:53 compute-2 systemd[1]: Started libpod-conmon-b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac.scope.
Nov 29 08:43:53 compute-2 podman[321044]: 2025-11-29 08:43:53.373057259 +0000 UTC m=+0.035773934 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:43:53 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:43:53 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163f5519a202aae6903a6d87306a615d41e298de66c3001b7555a9ecd52ee529/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:43:53 compute-2 podman[321044]: 2025-11-29 08:43:53.491296229 +0000 UTC m=+0.154012904 container init b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:43:53 compute-2 podman[321044]: 2025-11-29 08:43:53.496917454 +0000 UTC m=+0.159634099 container start b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:43:53 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [NOTICE]   (321081) : New worker (321088) forked
Nov 29 08:43:53 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [NOTICE]   (321081) : Loading success.
Nov 29 08:43:53 compute-2 podman[321057]: 2025-11-29 08:43:53.529299972 +0000 UTC m=+0.097272719 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:43:53 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.672 232432 DEBUG nova.network.neutron [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updated VIF entry in instance network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.672 232432 DEBUG nova.network.neutron [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:53 compute-2 nova_compute[232428]: 2025-11-29 08:43:53.699 232432 DEBUG oslo_concurrency.lockutils [req-47bac407-3735-49a0-b95d-3ea71baec3c2 req-d69722ab-6395-412e-a440-f2482842d181 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:54 compute-2 ceph-mon[77138]: pgmap v3240: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 29 08:43:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.066 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.067 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.067 232432 DEBUG nova.compute.manager [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.109 232432 DEBUG nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.110 232432 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.110 232432 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.111 232432 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.111 232432 DEBUG nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.112 232432 WARNING nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state resized and task_state None.
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.177 232432 DEBUG nova.compute.manager [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.177 232432 DEBUG oslo_concurrency.lockutils [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.178 232432 DEBUG oslo_concurrency.lockutils [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.178 232432 DEBUG oslo_concurrency.lockutils [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.179 232432 DEBUG nova.compute.manager [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] No waiting events found dispatching network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.179 232432 WARNING nova.compute.manager [req-a04e906c-6ff1-41e4-8647-c3d27793704d req-80da0aca-77a4-4083-ab60-f9c804f040d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received unexpected event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 for instance with vm_state active and task_state None.
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.775 232432 DEBUG neutronclient.v2_0.client [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port c30634d5-981b-440c-aaed-815b2591a3d4 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.777 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.777 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.778 232432 DEBUG nova.network.neutron [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.779 232432 DEBUG nova.objects.instance [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'info_cache' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:55 compute-2 nova_compute[232428]: 2025-11-29 08:43:55.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:56.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:56 compute-2 ceph-mon[77138]: pgmap v3241: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 194 op/s
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.072 232432 DEBUG nova.compute.manager [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.074 232432 DEBUG nova.compute.manager [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing instance network info cache due to event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.074 232432 DEBUG oslo_concurrency.lockutils [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.075 232432 DEBUG oslo_concurrency.lockutils [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.076 232432 DEBUG nova.network.neutron [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.254 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.282 232432 DEBUG nova.network.neutron [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.309 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.310 232432 DEBUG nova.objects.instance [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:43:57 compute-2 nova_compute[232428]: 2025-11-29 08:43:57.468 232432 DEBUG nova.storage.rbd_utils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] removing snapshot(nova-resize) on rbd image(52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:43:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:43:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:43:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 08:43:58 compute-2 ceph-mon[77138]: pgmap v3242: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 194 op/s
Nov 29 08:43:58 compute-2 nova_compute[232428]: 2025-11-29 08:43:58.983 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405823.9818804, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:43:58 compute-2 nova_compute[232428]: 2025-11-29 08:43:58.983 232432 INFO nova.compute.manager [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Stopped (Lifecycle Event)
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.014 232432 DEBUG nova.compute.manager [None req-95112fab-b9ef-47f9-8d4a-1db3a1d83f4f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.019 232432 DEBUG nova.compute.manager [None req-95112fab-b9ef-47f9-8d4a-1db3a1d83f4f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.065 232432 INFO nova.compute.manager [None req-95112fab-b9ef-47f9-8d4a-1db3a1d83f4f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Nov 29 08:43:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.231 232432 DEBUG nova.virt.libvirt.vif [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:53Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.232 232432 DEBUG nova.network.os_vif_util [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.234 232432 DEBUG nova.network.os_vif_util [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.235 232432 DEBUG os_vif [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.237 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.238 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc30634d5-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.239 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.242 232432 INFO os_vif [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98')
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.243 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.243 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.372 232432 DEBUG oslo_concurrency.processutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.650 232432 DEBUG nova.network.neutron [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updated VIF entry in instance network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.651 232432 DEBUG nova.network.neutron [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.670 232432 DEBUG oslo_concurrency.lockutils [req-980c2830-972d-4cda-a8c6-f83cda537324 req-5e765c4f-63bc-42d4-803b-1a47cbdcaf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.693 232432 DEBUG nova.compute.manager [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.693 232432 DEBUG nova.compute.manager [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing instance network info cache due to event network-changed-9dbbf051-93f7-4b5a-8dac-a9943d954736. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.694 232432 DEBUG oslo_concurrency.lockutils [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.694 232432 DEBUG oslo_concurrency.lockutils [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.694 232432 DEBUG nova.network.neutron [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Refreshing network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:43:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:43:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3740741687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:43:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:43:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:43:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:59.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.869 232432 DEBUG oslo_concurrency.processutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.876 232432 DEBUG nova.compute.provider_tree [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.901 232432 DEBUG nova.scheduler.client.report [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:43:59 compute-2 nova_compute[232428]: 2025-11-29 08:43:59.966 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:00 compute-2 nova_compute[232428]: 2025-11-29 08:44:00.102 232432 INFO nova.scheduler.client.report [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocation for migration e291ecb1-4385-42e6-9bb8-6d2095710e8e
Nov 29 08:44:00 compute-2 nova_compute[232428]: 2025-11-29 08:44:00.147 232432 DEBUG oslo_concurrency.lockutils [None req-2b47fd08-1e3d-4bb6-a3f5-dbbf094da622 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:00.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:00 compute-2 sudo[321160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:00 compute-2 sudo[321160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:00 compute-2 sudo[321160]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:00 compute-2 sudo[321185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:44:00 compute-2 sudo[321185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:00 compute-2 sudo[321185]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:00 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:44:00 compute-2 ceph-mon[77138]: osdmap e409: 3 total, 3 up, 3 in
Nov 29 08:44:00 compute-2 nova_compute[232428]: 2025-11-29 08:44:00.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:01 compute-2 nova_compute[232428]: 2025-11-29 08:44:01.810 232432 DEBUG nova.network.neutron [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updated VIF entry in instance network info cache for port 9dbbf051-93f7-4b5a-8dac-a9943d954736. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:44:01 compute-2 nova_compute[232428]: 2025-11-29 08:44:01.811 232432 DEBUG nova.network.neutron [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [{"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:44:01 compute-2 nova_compute[232428]: 2025-11-29 08:44:01.838 232432 DEBUG oslo_concurrency.lockutils [req-f8ece126-46f8-4db6-8865-7f1f24194d6f req-48bc6bda-e6ef-4659-a6cf-13276d6edb46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:44:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:01.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:02 compute-2 nova_compute[232428]: 2025-11-29 08:44:02.256 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:03 compute-2 ceph-mon[77138]: pgmap v3244: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 17 KiB/s wr, 200 op/s
Nov 29 08:44:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3740741687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:44:03 compute-2 ceph-mon[77138]: pgmap v3245: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Nov 29 08:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:03.347 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:03.348 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:03.349 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:03.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:05 compute-2 ceph-mon[77138]: pgmap v3246: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Nov 29 08:44:05 compute-2 podman[321212]: 2025-11-29 08:44:05.696987861 +0000 UTC m=+0.086923416 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:44:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:05.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:05 compute-2 nova_compute[232428]: 2025-11-29 08:44:05.941 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:07 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Nov 29 08:44:07 compute-2 ceph-mon[77138]: pgmap v3247: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Nov 29 08:44:07 compute-2 nova_compute[232428]: 2025-11-29 08:44:07.259 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 08:44:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:07.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:08 compute-2 ceph-mon[77138]: pgmap v3248: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Nov 29 08:44:08 compute-2 ceph-mon[77138]: osdmap e410: 3 total, 3 up, 3 in
Nov 29 08:44:09 compute-2 ovn_controller[134375]: 2025-11-29T08:44:09Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:a5:77 10.100.0.6
Nov 29 08:44:09 compute-2 ovn_controller[134375]: 2025-11-29T08:44:09Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:a5:77 10.100.0.6
Nov 29 08:44:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:09 compute-2 sudo[321234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:09 compute-2 sudo[321234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:09 compute-2 sudo[321234]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:09 compute-2 sudo[321259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:09 compute-2 sudo[321259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:09 compute-2 sudo[321259]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:09.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:10 compute-2 nova_compute[232428]: 2025-11-29 08:44:10.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:10.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:10 compute-2 nova_compute[232428]: 2025-11-29 08:44:10.942 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:11 compute-2 ceph-mon[77138]: pgmap v3250: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 582 KiB/s wr, 40 op/s
Nov 29 08:44:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:11.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:12 compute-2 nova_compute[232428]: 2025-11-29 08:44:12.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:12 compute-2 ceph-mon[77138]: pgmap v3251: 305 pgs: 305 active+clean; 354 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 968 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 29 08:44:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:12 compute-2 podman[321286]: 2025-11-29 08:44:12.70074052 +0000 UTC m=+0.097224278 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:44:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:14 compute-2 ceph-mon[77138]: pgmap v3252: 305 pgs: 305 active+clean; 354 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 968 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.629 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.629 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.629 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.629 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.629 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.630 232432 INFO nova.compute.manager [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Terminating instance
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.631 232432 DEBUG nova.compute.manager [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:44:14 compute-2 kernel: tap9dbbf051-93 (unregistering): left promiscuous mode
Nov 29 08:44:14 compute-2 NetworkManager[48993]: <info>  [1764405854.9026] device (tap9dbbf051-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:44:14 compute-2 ovn_controller[134375]: 2025-11-29T08:44:14Z|00887|binding|INFO|Releasing lport 9dbbf051-93f7-4b5a-8dac-a9943d954736 from this chassis (sb_readonly=0)
Nov 29 08:44:14 compute-2 ovn_controller[134375]: 2025-11-29T08:44:14Z|00888|binding|INFO|Setting lport 9dbbf051-93f7-4b5a-8dac-a9943d954736 down in Southbound
Nov 29 08:44:14 compute-2 ovn_controller[134375]: 2025-11-29T08:44:14Z|00889|binding|INFO|Removing iface tap9dbbf051-93 ovn-installed in OVS
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:14.929 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a5:77 10.100.0.6', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '31ec469a-56ea-4a93-8238-dd69ee5665db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cdd894-e2ae-4700-83cb-f8f82b6152b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=9dbbf051-93f7-4b5a-8dac-a9943d954736) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:44:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:14.931 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 9dbbf051-93f7-4b5a-8dac-a9943d954736 in datapath 826294d5-f5eb-469a-9ec9-f18a05fdaa3c unbound from our chassis
Nov 29 08:44:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:14.933 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:44:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:14.935 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b0078240-9935-49cb-9c07-05d30da2e451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:14.936 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c namespace which is not needed anymore
Nov 29 08:44:14 compute-2 nova_compute[232428]: 2025-11-29 08:44:14.942 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:14 compute-2 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000be.scope: Deactivated successfully.
Nov 29 08:44:14 compute-2 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000be.scope: Consumed 14.217s CPU time.
Nov 29 08:44:14 compute-2 systemd-machined[194747]: Machine qemu-92-instance-000000be terminated.
Nov 29 08:44:15 compute-2 kernel: tap9dbbf051-93: entered promiscuous mode
Nov 29 08:44:15 compute-2 kernel: tap9dbbf051-93 (unregistering): left promiscuous mode
Nov 29 08:44:15 compute-2 NetworkManager[48993]: <info>  [1764405855.0554] manager: (tap9dbbf051-93): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.073 232432 INFO nova.virt.libvirt.driver [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance destroyed successfully.
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.073 232432 DEBUG nova.objects.instance [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid 31ec469a-56ea-4a93-8238-dd69ee5665db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.102 232432 DEBUG nova.virt.libvirt.vif [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-664138028',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=190,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-3a32aqcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:53Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=31ec469a-56ea-4a93-8238-dd69ee5665db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.103 232432 DEBUG nova.network.os_vif_util [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "address": "fa:16:3e:fb:a5:77", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dbbf051-93", "ovs_interfaceid": "9dbbf051-93f7-4b5a-8dac-a9943d954736", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.103 232432 DEBUG nova.network.os_vif_util [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.104 232432 DEBUG os_vif [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.106 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.106 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dbbf051-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.108 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.114 232432 INFO os_vif [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=9dbbf051-93f7-4b5a-8dac-a9943d954736,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dbbf051-93')
Nov 29 08:44:15 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [NOTICE]   (321081) : haproxy version is 2.8.14-c23fe91
Nov 29 08:44:15 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [NOTICE]   (321081) : path to executable is /usr/sbin/haproxy
Nov 29 08:44:15 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [WARNING]  (321081) : Exiting Master process...
Nov 29 08:44:15 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [ALERT]    (321081) : Current worker (321088) exited with code 143 (Terminated)
Nov 29 08:44:15 compute-2 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[321065]: [WARNING]  (321081) : All workers exited. Exiting... (0)
Nov 29 08:44:15 compute-2 systemd[1]: libpod-b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac.scope: Deactivated successfully.
Nov 29 08:44:15 compute-2 podman[321336]: 2025-11-29 08:44:15.142307975 +0000 UTC m=+0.059055588 container died b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:44:15 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac-userdata-shm.mount: Deactivated successfully.
Nov 29 08:44:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-163f5519a202aae6903a6d87306a615d41e298de66c3001b7555a9ecd52ee529-merged.mount: Deactivated successfully.
Nov 29 08:44:15 compute-2 podman[321336]: 2025-11-29 08:44:15.186578233 +0000 UTC m=+0.103325856 container cleanup b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:44:15 compute-2 systemd[1]: libpod-conmon-b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac.scope: Deactivated successfully.
Nov 29 08:44:15 compute-2 podman[321392]: 2025-11-29 08:44:15.245669242 +0000 UTC m=+0.039608293 container remove b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.255 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[14ddfb7f-6802-494a-a96c-7547d7f08bcc]: (4, ('Sat Nov 29 08:44:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c (b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac)\nb556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac\nSat Nov 29 08:44:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c (b556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac)\nb556bc7c88d3addf2993fc827c88e6cd17c9baad1587a6069693ec3bcede00ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.258 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f33722dc-f633-4d25-a47a-392dde0ccc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.259 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap826294d5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.262 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 kernel: tap826294d5-f0: left promiscuous mode
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.275 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.278 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[95c4e0ab-467c-4655-9ec6-d76d3c7b437f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.290 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dac7bf0e-19a4-466a-a2be-ca5d1ad4d1a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.291 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[35d846e6-18f3-4a01-b6e9-1848a38096cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.313 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[36aaf1db-226c-4204-baa9-085949cc893e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879880, 'reachable_time': 17514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321407, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 systemd[1]: run-netns-ovnmeta\x2d826294d5\x2df5eb\x2d469a\x2d9ec9\x2df18a05fdaa3c.mount: Deactivated successfully.
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.318 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:44:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:15.319 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7b6309-f126-4379-b7ce-c592c1a9fc50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.458 232432 DEBUG nova.compute.manager [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-unplugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.459 232432 DEBUG oslo_concurrency.lockutils [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.460 232432 DEBUG oslo_concurrency.lockutils [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.460 232432 DEBUG oslo_concurrency.lockutils [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.460 232432 DEBUG nova.compute.manager [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] No waiting events found dispatching network-vif-unplugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:44:15 compute-2 nova_compute[232428]: 2025-11-29 08:44:15.461 232432 DEBUG nova.compute.manager [req-c477df07-cff9-4d9c-9117-f316000c73c1 req-49d66f6d-7412-4b57-b44c-c1b7369b9291 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-unplugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:44:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:15.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:16 compute-2 ceph-mon[77138]: pgmap v3253: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 978 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Nov 29 08:44:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.136 232432 INFO nova.virt.libvirt.driver [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Deleting instance files /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db_del
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.137 232432 INFO nova.virt.libvirt.driver [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Deletion of /var/lib/nova/instances/31ec469a-56ea-4a93-8238-dd69ee5665db_del complete
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.209 232432 INFO nova.compute.manager [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Took 2.58 seconds to destroy the instance on the hypervisor.
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.210 232432 DEBUG oslo.service.loopingcall [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.211 232432 DEBUG nova.compute.manager [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.211 232432 DEBUG nova.network.neutron [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.264 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.547 232432 DEBUG nova.compute.manager [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.548 232432 DEBUG oslo_concurrency.lockutils [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.549 232432 DEBUG oslo_concurrency.lockutils [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.549 232432 DEBUG oslo_concurrency.lockutils [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.550 232432 DEBUG nova.compute.manager [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] No waiting events found dispatching network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:44:17 compute-2 nova_compute[232428]: 2025-11-29 08:44:17.550 232432 WARNING nova.compute.manager [req-da87e9bb-4e59-4739-a114-f30e81aa7771 req-bcfb3293-88ae-4a3f-a623-e1e84f4b8b5d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received unexpected event network-vif-plugged-9dbbf051-93f7-4b5a-8dac-a9943d954736 for instance with vm_state active and task_state deleting.
Nov 29 08:44:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:17.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.513 232432 DEBUG nova.network.neutron [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.529 232432 INFO nova.compute.manager [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Took 1.32 seconds to deallocate network for instance.
Nov 29 08:44:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:18.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.567 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.568 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:18 compute-2 nova_compute[232428]: 2025-11-29 08:44:18.610 232432 DEBUG oslo_concurrency.processutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:44:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:44:19 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2270193005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.042 232432 DEBUG oslo_concurrency.processutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.054 232432 DEBUG nova.compute.provider_tree [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.083 232432 DEBUG nova.scheduler.client.report [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.120 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.161 232432 INFO nova.scheduler.client.report [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance 31ec469a-56ea-4a93-8238-dd69ee5665db
Nov 29 08:44:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.259 232432 DEBUG oslo_concurrency.lockutils [None req-67141f54-c5f3-49d7-a84a-16df76b50afd a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "31ec469a-56ea-4a93-8238-dd69ee5665db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:19 compute-2 ceph-mon[77138]: pgmap v3254: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 978 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.587 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.588 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.588 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.589 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31ec469a-56ea-4a93-8238-dd69ee5665db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.833 232432 DEBUG nova.compute.utils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Nov 29 08:44:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:19 compute-2 nova_compute[232428]: 2025-11-29 08:44:19.955 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.109 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2270193005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:20 compute-2 ceph-mon[77138]: pgmap v3255: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 872 KiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.366 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.418 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-31ec469a-56ea-4a93-8238-dd69ee5665db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.419 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.420 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:20.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:20 compute-2 nova_compute[232428]: 2025-11-29 08:44:20.660 232432 DEBUG nova.compute.manager [req-fd20556e-affe-4626-917e-d0813c6e7d10 req-38b4548f-0a9a-4d1a-8dae-a1e64d5acba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Received event network-vif-deleted-9dbbf051-93f7-4b5a-8dac-a9943d954736 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:44:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:22 compute-2 nova_compute[232428]: 2025-11-29 08:44:22.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:22 compute-2 nova_compute[232428]: 2025-11-29 08:44:22.266 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:22 compute-2 ceph-mon[77138]: pgmap v3256: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 775 KiB/s rd, 1.7 MiB/s wr, 143 op/s
Nov 29 08:44:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:22 compute-2 nova_compute[232428]: 2025-11-29 08:44:22.670 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:22 compute-2 nova_compute[232428]: 2025-11-29 08:44:22.832 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:23.094 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:44:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:23.094 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:44:23 compute-2 nova_compute[232428]: 2025-11-29 08:44:23.095 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:23 compute-2 nova_compute[232428]: 2025-11-29 08:44:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:23 compute-2 podman[321436]: 2025-11-29 08:44:23.799531763 +0000 UTC m=+0.192251454 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 08:44:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:24 compute-2 ceph-mon[77138]: pgmap v3257: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 38 KiB/s wr, 66 op/s
Nov 29 08:44:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/778055859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:25 compute-2 nova_compute[232428]: 2025-11-29 08:44:25.113 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:25 compute-2 nova_compute[232428]: 2025-11-29 08:44:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1017921705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.237 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:44:26 compute-2 ceph-mon[77138]: pgmap v3258: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 40 KiB/s wr, 94 op/s
Nov 29 08:44:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1528115922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:44:26 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4196655240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.717 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.969 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.970 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4217MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.971 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:44:26 compute-2 nova_compute[232428]: 2025-11-29 08:44:26.971 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.203 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.205 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.266 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.271 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4196655240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.356 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.356 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.377 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.405 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.431 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:44:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:44:27 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3303321146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.851 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.861 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:44:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.907 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.931 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:44:27 compute-2 nova_compute[232428]: 2025-11-29 08:44:27.932 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:44:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:44:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288198689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:44:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:44:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288198689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:44:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:44:28.096 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:44:28 compute-2 ceph-mon[77138]: pgmap v3259: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 13 KiB/s wr, 85 op/s
Nov 29 08:44:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3303321146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1288198689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:44:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1288198689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:44:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:29.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:29 compute-2 sudo[321511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:29 compute-2 sudo[321511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:29 compute-2 sudo[321511]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:30 compute-2 sudo[321536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:30 compute-2 sudo[321536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:30 compute-2 sudo[321536]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.072 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405855.070673, 31ec469a-56ea-4a93-8238-dd69ee5665db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.073 232432 INFO nova.compute.manager [-] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] VM Stopped (Lifecycle Event)
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.096 232432 DEBUG nova.compute.manager [None req-6bb20f01-0718-471f-8d79-75b6bfd58750 - - - - - -] [instance: 31ec469a-56ea-4a93-8238-dd69ee5665db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.116 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:30 compute-2 ceph-mon[77138]: pgmap v3260: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 13 KiB/s wr, 85 op/s
Nov 29 08:44:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.934 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:44:30 compute-2 nova_compute[232428]: 2025-11-29 08:44:30.934 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:44:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:31.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:32 compute-2 nova_compute[232428]: 2025-11-29 08:44:32.272 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:32 compute-2 ceph-mon[77138]: pgmap v3261: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 08:44:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/816626238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:34 compute-2 ceph-mon[77138]: pgmap v3262: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 08:44:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4282406034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:35 compute-2 nova_compute[232428]: 2025-11-29 08:44:35.120 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:36 compute-2 ceph-mon[77138]: pgmap v3263: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 08:44:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:36 compute-2 podman[321564]: 2025-11-29 08:44:36.681748841 +0000 UTC m=+0.075582454 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:44:37 compute-2 nova_compute[232428]: 2025-11-29 08:44:37.277 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:37.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:38 compute-2 ceph-mon[77138]: pgmap v3264: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:44:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:38.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:40 compute-2 nova_compute[232428]: 2025-11-29 08:44:40.124 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:40 compute-2 ceph-mon[77138]: pgmap v3265: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:44:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:42 compute-2 nova_compute[232428]: 2025-11-29 08:44:42.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:42 compute-2 ceph-mon[77138]: pgmap v3266: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:44:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:43 compute-2 podman[321587]: 2025-11-29 08:44:43.669792639 +0000 UTC m=+0.071299389 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:44:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:44 compute-2 ceph-mon[77138]: pgmap v3267: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:44:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2053551327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:45 compute-2 nova_compute[232428]: 2025-11-29 08:44:45.127 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:46.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:46 compute-2 ceph-mon[77138]: pgmap v3268: 305 pgs: 305 active+clean; 139 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 617 KiB/s wr, 3 op/s
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.957849) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886957912, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 997, "num_deletes": 258, "total_data_size": 2158373, "memory_usage": 2192752, "flush_reason": "Manual Compaction"}
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886975758, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 1414103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70940, "largest_seqno": 71932, "table_properties": {"data_size": 1409327, "index_size": 2363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10474, "raw_average_key_size": 19, "raw_value_size": 1399693, "raw_average_value_size": 2650, "num_data_blocks": 102, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405815, "oldest_key_time": 1764405815, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 18004 microseconds, and 8216 cpu microseconds.
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.975831) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 1414103 bytes OK
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.975876) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977489) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977514) EVENT_LOG_v1 {"time_micros": 1764405886977506, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2153332, prev total WAL file size 2153332, number of live WAL files 2.
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.979001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373639' seq:0, type:0; will stop at (end)
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(1380KB)], [141(10MB)]
Nov 29 08:44:46 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886979104, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 12134137, "oldest_snapshot_seqno": -1}
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9759 keys, 11969420 bytes, temperature: kUnknown
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887088521, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11969420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11907313, "index_size": 36586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 258440, "raw_average_key_size": 26, "raw_value_size": 11737065, "raw_average_value_size": 1202, "num_data_blocks": 1389, "num_entries": 9759, "num_filter_entries": 9759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.088862) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11969420 bytes
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.090433) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.8 rd, 109.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.2 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(17.0) write-amplify(8.5) OK, records in: 10296, records dropped: 537 output_compression: NoCompression
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.090450) EVENT_LOG_v1 {"time_micros": 1764405887090442, "job": 90, "event": "compaction_finished", "compaction_time_micros": 109518, "compaction_time_cpu_micros": 59500, "output_level": 6, "num_output_files": 1, "total_output_size": 11969420, "num_input_records": 10296, "num_output_records": 9759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887090797, "job": 90, "event": "table_file_deletion", "file_number": 143}
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887092874, "job": 90, "event": "table_file_deletion", "file_number": 141}
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:46.978824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.092978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.092984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.092990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.092992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:44:47.092994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:44:47 compute-2 nova_compute[232428]: 2025-11-29 08:44:47.284 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:47 compute-2 ceph-mon[77138]: pgmap v3269: 305 pgs: 305 active+clean; 139 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 617 KiB/s wr, 3 op/s
Nov 29 08:44:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/557507739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:44:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2494466256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:44:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3282842492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:44:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:49.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:50 compute-2 ceph-mon[77138]: pgmap v3270: 305 pgs: 305 active+clean; 148 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 970 KiB/s wr, 14 op/s
Nov 29 08:44:50 compute-2 nova_compute[232428]: 2025-11-29 08:44:50.129 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:50 compute-2 sudo[321612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:50 compute-2 sudo[321612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:50 compute-2 sudo[321612]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:50 compute-2 sudo[321637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:44:50 compute-2 sudo[321637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:44:50 compute-2 sudo[321637]: pam_unix(sudo:session): session closed for user root
Nov 29 08:44:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:52 compute-2 ceph-mon[77138]: pgmap v3271: 305 pgs: 305 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 29 08:44:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4233684586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:44:52 compute-2 nova_compute[232428]: 2025-11-29 08:44:52.287 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:44:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:53.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:44:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3655060916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:44:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:54 compute-2 podman[321664]: 2025-11-29 08:44:54.740975062 +0000 UTC m=+0.131841284 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:44:55 compute-2 nova_compute[232428]: 2025-11-29 08:44:55.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:55 compute-2 ceph-mon[77138]: pgmap v3272: 305 pgs: 305 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 29 08:44:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:55.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:56 compute-2 ceph-mon[77138]: pgmap v3273: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Nov 29 08:44:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:56.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:57 compute-2 nova_compute[232428]: 2025-11-29 08:44:57.290 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:44:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:44:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:57.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:44:58 compute-2 ceph-mon[77138]: pgmap v3274: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 130 op/s
Nov 29 08:44:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:44:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:44:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:44:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:44:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:59.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:00 compute-2 nova_compute[232428]: 2025-11-29 08:45:00.134 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:00 compute-2 ceph-mon[77138]: pgmap v3275: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 157 op/s
Nov 29 08:45:00 compute-2 sudo[321693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:00 compute-2 sudo[321693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:00 compute-2 sudo[321693]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:01 compute-2 sudo[321718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:45:01 compute-2 sudo[321718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:01 compute-2 sudo[321718]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:01 compute-2 sudo[321743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:01 compute-2 sudo[321743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:01 compute-2 sudo[321743]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:01 compute-2 sudo[321768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:45:01 compute-2 sudo[321768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:01 compute-2 podman[321866]: 2025-11-29 08:45:01.707574805 +0000 UTC m=+0.073193768 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:45:01 compute-2 podman[321866]: 2025-11-29 08:45:01.824782583 +0000 UTC m=+0.190401526 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:45:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:01.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:02 compute-2 nova_compute[232428]: 2025-11-29 08:45:02.292 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:02 compute-2 ceph-mon[77138]: pgmap v3276: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Nov 29 08:45:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:02 compute-2 podman[322023]: 2025-11-29 08:45:02.64021012 +0000 UTC m=+0.105498934 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:45:02 compute-2 podman[322023]: 2025-11-29 08:45:02.660678887 +0000 UTC m=+0.125967581 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:45:02 compute-2 podman[322088]: 2025-11-29 08:45:02.955975308 +0000 UTC m=+0.073976544 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, version=2.2.4, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Nov 29 08:45:02 compute-2 podman[322088]: 2025-11-29 08:45:02.97887846 +0000 UTC m=+0.096879656 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Nov 29 08:45:03 compute-2 sudo[321768]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:03 compute-2 sudo[322123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:03 compute-2 sudo[322123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:03 compute-2 sudo[322123]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:03 compute-2 sudo[322148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:45:03 compute-2 sudo[322148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:03 compute-2 sudo[322148]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:03 compute-2 sudo[322173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:03 compute-2 sudo[322173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:03 compute-2 sudo[322173]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:03.348 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:03.348 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:03.348 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:03 compute-2 sudo[322198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:45:03 compute-2 sudo[322198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:03 compute-2 sudo[322198]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:04 compute-2 ceph-mon[77138]: pgmap v3277: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:45:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:45:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:05 compute-2 nova_compute[232428]: 2025-11-29 08:45:05.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:05 compute-2 ovn_controller[134375]: 2025-11-29T08:45:05Z|00890|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 08:45:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:06 compute-2 ceph-mon[77138]: pgmap v3278: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 200 op/s
Nov 29 08:45:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:07 compute-2 nova_compute[232428]: 2025-11-29 08:45:07.295 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:07 compute-2 podman[322255]: 2025-11-29 08:45:07.683673852 +0000 UTC m=+0.076270685 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:45:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:07.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:08 compute-2 ceph-mon[77138]: pgmap v3279: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 109 op/s
Nov 29 08:45:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:10 compute-2 nova_compute[232428]: 2025-11-29 08:45:10.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:10 compute-2 ceph-mon[77138]: pgmap v3280: 305 pgs: 305 active+clean; 245 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Nov 29 08:45:10 compute-2 sudo[322276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:10 compute-2 sudo[322276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:10 compute-2 sudo[322276]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:10 compute-2 sudo[322301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:10 compute-2 sudo[322301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:10 compute-2 sudo[322301]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:10.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:10 compute-2 sudo[322326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:10 compute-2 sudo[322326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:10 compute-2 sudo[322326]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:10 compute-2 sudo[322351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:45:10 compute-2 sudo[322351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:10 compute-2 sudo[322351]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:45:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:11.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:12 compute-2 nova_compute[232428]: 2025-11-29 08:45:12.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:12 compute-2 nova_compute[232428]: 2025-11-29 08:45:12.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:12 compute-2 ceph-mon[77138]: pgmap v3281: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 144 op/s
Nov 29 08:45:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:12.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:14 compute-2 ceph-mon[77138]: pgmap v3282: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 437 KiB/s rd, 3.7 MiB/s wr, 103 op/s
Nov 29 08:45:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:14 compute-2 podman[322378]: 2025-11-29 08:45:14.694775759 +0000 UTC m=+0.094821492 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 08:45:15 compute-2 nova_compute[232428]: 2025-11-29 08:45:15.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:15.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:16 compute-2 ceph-mon[77138]: pgmap v3283: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 29 08:45:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:17 compute-2 nova_compute[232428]: 2025-11-29 08:45:17.298 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:17.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:18 compute-2 ceph-mon[77138]: pgmap v3284: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Nov 29 08:45:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.147 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.251 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.251 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:20 compute-2 ceph-mon[77138]: pgmap v3285: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 2.6 MiB/s wr, 89 op/s
Nov 29 08:45:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:20.775 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:45:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:20.777 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:45:20 compute-2 nova_compute[232428]: 2025-11-29 08:45:20.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:21.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:22 compute-2 nova_compute[232428]: 2025-11-29 08:45:22.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:22 compute-2 ceph-mon[77138]: pgmap v3286: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 08:45:23 compute-2 nova_compute[232428]: 2025-11-29 08:45:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:23.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:24 compute-2 ceph-mon[77138]: pgmap v3287: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 609 KiB/s wr, 27 op/s
Nov 29 08:45:24 compute-2 nova_compute[232428]: 2025-11-29 08:45:24.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:24.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:25 compute-2 nova_compute[232428]: 2025-11-29 08:45:25.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:25 compute-2 podman[322403]: 2025-11-29 08:45:25.737200378 +0000 UTC m=+0.129729648 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:45:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:25.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:26 compute-2 nova_compute[232428]: 2025-11-29 08:45:26.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:26 compute-2 ceph-mon[77138]: pgmap v3288: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 609 KiB/s wr, 27 op/s
Nov 29 08:45:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:27 compute-2 nova_compute[232428]: 2025-11-29 08:45:27.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:27 compute-2 nova_compute[232428]: 2025-11-29 08:45:27.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:27.778 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2972397803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3790370612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:27.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:28.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:28 compute-2 ceph-mon[77138]: pgmap v3289: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 853 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 08:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2858280769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:45:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2858280769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:45:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:29 compute-2 nova_compute[232428]: 2025-11-29 08:45:29.481 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:29 compute-2 ceph-mon[77138]: pgmap v3290: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 853 B/s rd, 24 KiB/s wr, 2 op/s
Nov 29 08:45:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:29.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.479 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.480 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.481 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.481 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.482 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:30 compute-2 sudo[322433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:30 compute-2 sudo[322433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:30 compute-2 sudo[322433]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:30 compute-2 sudo[322459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:30 compute-2 sudo[322459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:30 compute-2 sudo[322459]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:45:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/750066426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:30 compute-2 nova_compute[232428]: 2025-11-29 08:45:30.919 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/750066426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.189 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.191 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4214MB free_disk=20.897125244140625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.191 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.192 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.278 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.279 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.307 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:45:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3066954153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.759 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.764 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.782 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.784 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:45:31 compute-2 nova_compute[232428]: 2025-11-29 08:45:31.784 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:31.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:31 compute-2 ceph-mon[77138]: pgmap v3291: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Nov 29 08:45:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3217594026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3066954153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:32 compute-2 nova_compute[232428]: 2025-11-29 08:45:32.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2284728337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:33 compute-2 nova_compute[232428]: 2025-11-29 08:45:33.504 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:45:33 compute-2 nova_compute[232428]: 2025-11-29 08:45:33.504 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:45:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:33.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:34 compute-2 ceph-mon[77138]: pgmap v3292: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 2 op/s
Nov 29 08:45:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:34 compute-2 nova_compute[232428]: 2025-11-29 08:45:34.832 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:34 compute-2 nova_compute[232428]: 2025-11-29 08:45:34.832 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:34 compute-2 nova_compute[232428]: 2025-11-29 08:45:34.933 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:45:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1698163957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.180 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.181 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.218 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.219 232432 INFO nova.compute.claims [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.464 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:45:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2555192421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.965 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:35 compute-2 nova_compute[232428]: 2025-11-29 08:45:35.971 232432 DEBUG nova.compute.provider_tree [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:45:36 compute-2 ceph-mon[77138]: pgmap v3293: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 08:45:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2555192421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3519615254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:36.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:37 compute-2 nova_compute[232428]: 2025-11-29 08:45:37.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:38 compute-2 ceph-mon[77138]: pgmap v3294: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 08:45:38 compute-2 podman[322553]: 2025-11-29 08:45:38.688560397 +0000 UTC m=+0.083524210 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:45:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:39.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.135 232432 DEBUG nova.scheduler.client.report [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.161 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.162 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.209 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.210 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.231 232432 INFO nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.248 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:45:40 compute-2 sshd-session[322573]: Invalid user validator from 45.148.10.240 port 34896
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.383 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.385 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.386 232432 INFO nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Creating image(s)
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.422 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.454 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:40 compute-2 sshd-session[322573]: Connection closed by invalid user validator 45.148.10.240 port 34896 [preauth]
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.486 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.491 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.565 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.566 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.567 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.567 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.598 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.602 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68f07426-9745-4038-b422-7117e62fddf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:40 compute-2 ceph-mon[77138]: pgmap v3295: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.673 232432 DEBUG nova.policy [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:45:40 compute-2 nova_compute[232428]: 2025-11-29 08:45:40.905 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68f07426-9745-4038-b422-7117e62fddf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.012 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.153 232432 DEBUG nova.objects.instance [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid 68f07426-9745-4038-b422-7117e62fddf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.203 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.204 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Ensure instance console log exists: /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.204 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.205 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:41 compute-2 nova_compute[232428]: 2025-11-29 08:45:41.206 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:42 compute-2 nova_compute[232428]: 2025-11-29 08:45:42.311 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:42 compute-2 ceph-mon[77138]: pgmap v3296: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Nov 29 08:45:42 compute-2 nova_compute[232428]: 2025-11-29 08:45:42.743 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Successfully created port: c7be95ae-9b02-4547-85ba-eba9c4fa484c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.634 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Successfully updated port: c7be95ae-9b02-4547-85ba-eba9c4fa484c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.659 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.659 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.659 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:45:43 compute-2 ceph-mon[77138]: pgmap v3297: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.746 232432 DEBUG nova.compute.manager [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-changed-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.747 232432 DEBUG nova.compute.manager [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Refreshing instance network info cache due to event network-changed-c7be95ae-9b02-4547-85ba-eba9c4fa484c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.747 232432 DEBUG oslo_concurrency.lockutils [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:45:43 compute-2 nova_compute[232428]: 2025-11-29 08:45:43.848 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:45:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:43.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:44.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.677 232432 DEBUG nova.network.neutron [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updating instance_info_cache with network_info: [{"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.706 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.707 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Instance network_info: |[{"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.708 232432 DEBUG oslo_concurrency.lockutils [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.709 232432 DEBUG nova.network.neutron [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Refreshing network info cache for port c7be95ae-9b02-4547-85ba-eba9c4fa484c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.714 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Start _get_guest_xml network_info=[{"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.723 232432 WARNING nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.732 232432 DEBUG nova.virt.libvirt.host [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.733 232432 DEBUG nova.virt.libvirt.host [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.738 232432 DEBUG nova.virt.libvirt.host [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.738 232432 DEBUG nova.virt.libvirt.host [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.741 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.741 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.742 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.743 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.744 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.744 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.745 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.745 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.746 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.747 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.747 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.748 232432 DEBUG nova.virt.hardware [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:45:44 compute-2 nova_compute[232428]: 2025-11-29 08:45:44.754 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.170 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:45:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3219811555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.266 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.312 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.317 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:45 compute-2 podman[322803]: 2025-11-29 08:45:45.686095632 +0000 UTC m=+0.081870448 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:45:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:45:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3942794339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.812 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.815 232432 DEBUG nova.virt.libvirt.vif [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=193,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-lxsyjimg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:40Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=68f07426-9745-4038-b422-7117e62fddf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.816 232432 DEBUG nova.network.os_vif_util [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.817 232432 DEBUG nova.network.os_vif_util [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.818 232432 DEBUG nova.objects.instance [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68f07426-9745-4038-b422-7117e62fddf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.836 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <uuid>68f07426-9745-4038-b422-7117e62fddf3</uuid>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <name>instance-000000c1</name>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441</nova:name>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:45:44</nova:creationTime>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <nova:port uuid="c7be95ae-9b02-4547-85ba-eba9c4fa484c">
Nov 29 08:45:45 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <system>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="serial">68f07426-9745-4038-b422-7117e62fddf3</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="uuid">68f07426-9745-4038-b422-7117e62fddf3</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </system>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <os>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </os>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <features>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </features>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/68f07426-9745-4038-b422-7117e62fddf3_disk">
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/68f07426-9745-4038-b422-7117e62fddf3_disk.config">
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </source>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:45:45 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:ed:cc:44"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <target dev="tapc7be95ae-9b"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/console.log" append="off"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <video>
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </video>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:45:45 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:45:45 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:45:45 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:45:45 compute-2 nova_compute[232428]: </domain>
Nov 29 08:45:45 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.837 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Preparing to wait for external event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.837 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.837 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.837 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.838 232432 DEBUG nova.virt.libvirt.vif [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=193,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-lxsyjimg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:40Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=68f07426-9745-4038-b422-7117e62fddf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.838 232432 DEBUG nova.network.os_vif_util [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.839 232432 DEBUG nova.network.os_vif_util [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.839 232432 DEBUG os_vif [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.840 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.840 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.844 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7be95ae-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.844 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7be95ae-9b, col_values=(('external_ids', {'iface-id': 'c7be95ae-9b02-4547-85ba-eba9c4fa484c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:cc:44', 'vm-uuid': '68f07426-9745-4038-b422-7117e62fddf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.845 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:45 compute-2 NetworkManager[48993]: <info>  [1764405945.8467] manager: (tapc7be95ae-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.854 232432 INFO os_vif [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b')
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.933 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.934 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.934 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:ed:cc:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.935 232432 INFO nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Using config drive
Nov 29 08:45:45 compute-2 nova_compute[232428]: 2025-11-29 08:45:45.965 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:45.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.060 232432 DEBUG nova.network.neutron [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updated VIF entry in instance network info cache for port c7be95ae-9b02-4547-85ba-eba9c4fa484c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.062 232432 DEBUG nova.network.neutron [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updating instance_info_cache with network_info: [{"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.113 232432 DEBUG oslo_concurrency.lockutils [req-988c0e0d-f1d8-4078-80d8-08c663898f3d req-766330ff-8d2b-4710-b445-0794d7750a42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:45:46 compute-2 ceph-mon[77138]: pgmap v3298: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 08:45:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3219811555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3942794339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.378 232432 INFO nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Creating config drive at /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.389 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvci32b6w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.556 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvci32b6w" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.613 232432 DEBUG nova.storage.rbd_utils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 68f07426-9745-4038-b422-7117e62fddf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.619 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config 68f07426-9745-4038-b422-7117e62fddf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:45:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.878 232432 DEBUG oslo_concurrency.processutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config 68f07426-9745-4038-b422-7117e62fddf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.879 232432 INFO nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Deleting local config drive /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3/disk.config because it was imported into RBD.
Nov 29 08:45:46 compute-2 kernel: tapc7be95ae-9b: entered promiscuous mode
Nov 29 08:45:46 compute-2 NetworkManager[48993]: <info>  [1764405946.9689] manager: (tapc7be95ae-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 08:45:46 compute-2 ovn_controller[134375]: 2025-11-29T08:45:46Z|00891|binding|INFO|Claiming lport c7be95ae-9b02-4547-85ba-eba9c4fa484c for this chassis.
Nov 29 08:45:46 compute-2 ovn_controller[134375]: 2025-11-29T08:45:46Z|00892|binding|INFO|c7be95ae-9b02-4547-85ba-eba9c4fa484c: Claiming fa:16:3e:ed:cc:44 10.100.0.10
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.984 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:46 compute-2 nova_compute[232428]: 2025-11-29 08:45:46.996 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.0068] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.0080] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 08:45:47 compute-2 systemd-machined[194747]: New machine qemu-93-instance-000000c1.
Nov 29 08:45:47 compute-2 systemd-udevd[322899]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.027 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:cc:44 10.100.0.10'], port_security=['fa:16:3e:ed:cc:44 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '68f07426-9745-4038-b422-7117e62fddf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747d2b6c-68e4-4f8b-89b3-15bbb589ad69', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41d2fa50-52e6-41f0-8017-13145416317d, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c7be95ae-9b02-4547-85ba-eba9c4fa484c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.029 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c7be95ae-9b02-4547-85ba-eba9c4fa484c in datapath d7daafb1-8347-4bc3-b00b-9a558f101e51 bound to our chassis
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.030 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7daafb1-8347-4bc3-b00b-9a558f101e51
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.0371] device (tapc7be95ae-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.0379] device (tapc7be95ae-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:45:47 compute-2 systemd[1]: Started Virtual Machine qemu-93-instance-000000c1.
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.050 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dc984380-2ad1-46e4-bc74-763f0500171f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.051 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7daafb1-81 in ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.053 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7daafb1-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.053 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[26f0e8c7-004f-48a4-8d10-ee99c8114f64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.054 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0d887e75-70cd-4d88-9a02-bbf52dff4f8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.075 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3a329f-06f3-498d-add0-a081013cf34f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.103 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8aec9d-7db0-4615-939d-6b7f4c8794f9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.138 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[49b48265-f241-4960-ba84-3e1141e8abce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.1503] manager: (tapd7daafb1-80): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.148 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[383f36fd-db6b-4ba7-92a6-e0154da1125a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 systemd-udevd[322901]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.166 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.187 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.192 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b81d2f59-2bb0-4f54-8cbe-fbe9e66af0a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.196 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[26dfe2bc-b18f-4ae7-b1eb-e629eee78739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_controller[134375]: 2025-11-29T08:45:47Z|00893|binding|INFO|Setting lport c7be95ae-9b02-4547-85ba-eba9c4fa484c ovn-installed in OVS
Nov 29 08:45:47 compute-2 ovn_controller[134375]: 2025-11-29T08:45:47Z|00894|binding|INFO|Setting lport c7be95ae-9b02-4547-85ba-eba9c4fa484c up in Southbound
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.203 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.2265] device (tapd7daafb1-80): carrier: link connected
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.230 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ad053690-f2e8-4ad6-ac22-0cec0227e7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.251 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b77431b4-0cfe-4afa-8831-3307236cd25c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7daafb1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:cf:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891349, 'reachable_time': 26540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322932, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.270 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7247aab6-0bc0-44a8-a4f3-33f3dd453f7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:cfb8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 891349, 'tstamp': 891349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322933, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.293 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef3167e-ac87-4922-aeb6-cb591da3a3f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7daafb1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:cf:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891349, 'reachable_time': 26540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322934, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.337 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5b0adb-5a08-4f19-ada1-2f6ad9d49ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.442 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[390a6454-e802-4a93-a201-78c42d33b366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.444 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7daafb1-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.445 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.445 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7daafb1-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.448 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 NetworkManager[48993]: <info>  [1764405947.4487] manager: (tapd7daafb1-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 08:45:47 compute-2 kernel: tapd7daafb1-80: entered promiscuous mode
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.450 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.453 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7daafb1-80, col_values=(('external_ids', {'iface-id': '40b4f56a-1cc5-423e-999b-a20c0e423329'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.455 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 ovn_controller[134375]: 2025-11-29T08:45:47Z|00895|binding|INFO|Releasing lport 40b4f56a-1cc5-423e-999b-a20c0e423329 from this chassis (sb_readonly=0)
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.457 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.458 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.459 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfe59f1-bebd-486e-9c1b-eb969468a762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.460 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-d7daafb1-8347-4bc3-b00b-9a558f101e51
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID d7daafb1-8347-4bc3-b00b-9a558f101e51
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:45:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:45:47.463 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'env', 'PROCESS_TAG=haproxy-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7daafb1-8347-4bc3-b00b-9a558f101e51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.475 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.613 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405947.612956, 68f07426-9745-4038-b422-7117e62fddf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.614 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] VM Started (Lifecycle Event)
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.649 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.655 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405947.6140373, 68f07426-9745-4038-b422-7117e62fddf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.655 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] VM Paused (Lifecycle Event)
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.681 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.685 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.710 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.749 232432 DEBUG nova.compute.manager [req-dc9e0039-4ff5-4a89-b6b9-f44ede8b1685 req-8d960c3d-4809-4dda-8f25-0c85806c7773 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.750 232432 DEBUG oslo_concurrency.lockutils [req-dc9e0039-4ff5-4a89-b6b9-f44ede8b1685 req-8d960c3d-4809-4dda-8f25-0c85806c7773 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.751 232432 DEBUG oslo_concurrency.lockutils [req-dc9e0039-4ff5-4a89-b6b9-f44ede8b1685 req-8d960c3d-4809-4dda-8f25-0c85806c7773 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.751 232432 DEBUG oslo_concurrency.lockutils [req-dc9e0039-4ff5-4a89-b6b9-f44ede8b1685 req-8d960c3d-4809-4dda-8f25-0c85806c7773 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.752 232432 DEBUG nova.compute.manager [req-dc9e0039-4ff5-4a89-b6b9-f44ede8b1685 req-8d960c3d-4809-4dda-8f25-0c85806c7773 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Processing event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.753 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.758 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764405947.7580009, 68f07426-9745-4038-b422-7117e62fddf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.758 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] VM Resumed (Lifecycle Event)
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.760 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.764 232432 INFO nova.virt.libvirt.driver [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Instance spawned successfully.
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.764 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.789 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.795 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.797 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.798 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.798 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.798 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.799 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.799 232432 DEBUG nova.virt.libvirt.driver [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.826 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.874 232432 INFO nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Took 7.49 seconds to spawn the instance on the hypervisor.
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.875 232432 DEBUG nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:45:47 compute-2 podman[323009]: 2025-11-29 08:45:47.944407825 +0000 UTC m=+0.079430024 container create 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:45:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:47.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:47 compute-2 nova_compute[232428]: 2025-11-29 08:45:47.988 232432 INFO nova.compute.manager [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Took 12.84 seconds to build instance.
Nov 29 08:45:48 compute-2 podman[323009]: 2025-11-29 08:45:47.909616252 +0000 UTC m=+0.044638461 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:45:48 compute-2 nova_compute[232428]: 2025-11-29 08:45:48.008 232432 DEBUG oslo_concurrency.lockutils [None req-04417f4f-2600-472f-a6fb-088d0342c305 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:48 compute-2 systemd[1]: Started libpod-conmon-0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f.scope.
Nov 29 08:45:48 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:45:48 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a21568c9244fa45a2988c8c3919e9b42d9ce2af49234965165e752777d693/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:45:48 compute-2 podman[323009]: 2025-11-29 08:45:48.092890426 +0000 UTC m=+0.227912655 container init 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:45:48 compute-2 podman[323009]: 2025-11-29 08:45:48.107779969 +0000 UTC m=+0.242802168 container start 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:45:48 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [NOTICE]   (323028) : New worker (323030) forked
Nov 29 08:45:48 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [NOTICE]   (323028) : Loading success.
Nov 29 08:45:48 compute-2 ceph-mon[77138]: pgmap v3299: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 507 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 08:45:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.860 232432 DEBUG nova.compute.manager [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.861 232432 DEBUG oslo_concurrency.lockutils [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.862 232432 DEBUG oslo_concurrency.lockutils [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.862 232432 DEBUG oslo_concurrency.lockutils [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.863 232432 DEBUG nova.compute.manager [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] No waiting events found dispatching network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:45:49 compute-2 nova_compute[232428]: 2025-11-29 08:45:49.865 232432 WARNING nova.compute.manager [req-6071c244-b80f-4ee2-875d-25954c6dae38 req-64b8fc1c-7dd6-40bc-b9f8-24b86eceb9ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received unexpected event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c for instance with vm_state active and task_state None.
Nov 29 08:45:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:49.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:50 compute-2 ceph-mon[77138]: pgmap v3300: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 29 08:45:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:50 compute-2 sudo[323040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:50 compute-2 sudo[323040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:50 compute-2 sudo[323040]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:50 compute-2 sudo[323065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:45:50 compute-2 sudo[323065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:45:50 compute-2 sudo[323065]: pam_unix(sudo:session): session closed for user root
Nov 29 08:45:50 compute-2 nova_compute[232428]: 2025-11-29 08:45:50.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:51.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:52 compute-2 nova_compute[232428]: 2025-11-29 08:45:52.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:52 compute-2 ceph-mon[77138]: pgmap v3301: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Nov 29 08:45:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:53.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:55 compute-2 ceph-mon[77138]: pgmap v3302: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.225140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955225188, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 968, "num_deletes": 251, "total_data_size": 1977505, "memory_usage": 2007936, "flush_reason": "Manual Compaction"}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955294831, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1293643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71938, "largest_seqno": 72900, "table_properties": {"data_size": 1289175, "index_size": 2119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10019, "raw_average_key_size": 19, "raw_value_size": 1280227, "raw_average_value_size": 2550, "num_data_blocks": 92, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405887, "oldest_key_time": 1764405887, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 69737 microseconds, and 3628 cpu microseconds.
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.294875) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1293643 bytes OK
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.294896) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.342740) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.342799) EVENT_LOG_v1 {"time_micros": 1764405955342787, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.342830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1972686, prev total WAL file size 1972686, number of live WAL files 2.
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.343836) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1263KB)], [144(11MB)]
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955343903, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 13263063, "oldest_snapshot_seqno": -1}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9743 keys, 11306429 bytes, temperature: kUnknown
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955465096, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 11306429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11245061, "index_size": 35911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 258805, "raw_average_key_size": 26, "raw_value_size": 11075705, "raw_average_value_size": 1136, "num_data_blocks": 1355, "num_entries": 9743, "num_filter_entries": 9743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.465360) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 11306429 bytes
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.488749) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.4 rd, 93.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(19.0) write-amplify(8.7) OK, records in: 10261, records dropped: 518 output_compression: NoCompression
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.488767) EVENT_LOG_v1 {"time_micros": 1764405955488758, "job": 92, "event": "compaction_finished", "compaction_time_micros": 121269, "compaction_time_cpu_micros": 25681, "output_level": 6, "num_output_files": 1, "total_output_size": 11306429, "num_input_records": 10261, "num_output_records": 9743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955489060, "job": 92, "event": "table_file_deletion", "file_number": 146}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955490755, "job": 92, "event": "table_file_deletion", "file_number": 144}
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.343707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.490829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.490834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.490836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.490838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:45:55.490841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.925 232432 DEBUG nova.compute.manager [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-changed-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.926 232432 DEBUG nova.compute.manager [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Refreshing instance network info cache due to event network-changed-c7be95ae-9b02-4547-85ba-eba9c4fa484c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.926 232432 DEBUG oslo_concurrency.lockutils [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.926 232432 DEBUG oslo_concurrency.lockutils [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:45:55 compute-2 nova_compute[232428]: 2025-11-29 08:45:55.927 232432 DEBUG nova.network.neutron [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Refreshing network info cache for port c7be95ae-9b02-4547-85ba-eba9c4fa484c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:45:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:55.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:45:56 compute-2 ceph-mon[77138]: pgmap v3303: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Nov 29 08:45:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:45:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:45:56 compute-2 podman[323093]: 2025-11-29 08:45:56.750819855 +0000 UTC m=+0.131763152 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 08:45:57 compute-2 nova_compute[232428]: 2025-11-29 08:45:57.317 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:45:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:57.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:58 compute-2 ceph-mon[77138]: pgmap v3304: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 98 op/s
Nov 29 08:45:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2420364000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:45:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:45:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:45:58 compute-2 nova_compute[232428]: 2025-11-29 08:45:58.736 232432 DEBUG nova.network.neutron [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updated VIF entry in instance network info cache for port c7be95ae-9b02-4547-85ba-eba9c4fa484c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:45:58 compute-2 nova_compute[232428]: 2025-11-29 08:45:58.737 232432 DEBUG nova.network.neutron [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updating instance_info_cache with network_info: [{"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:45:58 compute-2 nova_compute[232428]: 2025-11-29 08:45:58.785 232432 DEBUG oslo_concurrency.lockutils [req-d5718b53-8032-4a43-8136-2d587d5c64f3 req-085e8a9b-9c32-440f-bcbd-f3893491f7bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-68f07426-9745-4038-b422-7117e62fddf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:45:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:45:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:45:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:45:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:00 compute-2 ceph-mon[77138]: pgmap v3305: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 106 op/s
Nov 29 08:46:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:00.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:00 compute-2 nova_compute[232428]: 2025-11-29 08:46:00.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:02 compute-2 nova_compute[232428]: 2025-11-29 08:46:02.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:02.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:02 compute-2 ovn_controller[134375]: 2025-11-29T08:46:02Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:cc:44 10.100.0.10
Nov 29 08:46:02 compute-2 ovn_controller[134375]: 2025-11-29T08:46:02Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:cc:44 10.100.0.10
Nov 29 08:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:03.349 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:03.350 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:03.351 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:04 compute-2 ceph-mon[77138]: pgmap v3306: 305 pgs: 305 active+clean; 253 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 889 KiB/s wr, 113 op/s
Nov 29 08:46:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:05 compute-2 ceph-mon[77138]: pgmap v3307: 305 pgs: 305 active+clean; 261 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 286 KiB/s rd, 1.5 MiB/s wr, 64 op/s
Nov 29 08:46:05 compute-2 nova_compute[232428]: 2025-11-29 08:46:05.855 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:06 compute-2 ceph-mon[77138]: pgmap v3308: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 29 08:46:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:06.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:07 compute-2 nova_compute[232428]: 2025-11-29 08:46:07.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:08 compute-2 ceph-mon[77138]: pgmap v3309: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:46:08 compute-2 ovn_controller[134375]: 2025-11-29T08:46:08Z|00896|binding|INFO|Releasing lport 40b4f56a-1cc5-423e-999b-a20c0e423329 from this chassis (sb_readonly=0)
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.378 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:08.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.847 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.848 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.849 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.849 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.850 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.852 232432 INFO nova.compute.manager [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Terminating instance
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.854 232432 DEBUG nova.compute.manager [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:46:08 compute-2 kernel: tapc7be95ae-9b (unregistering): left promiscuous mode
Nov 29 08:46:08 compute-2 NetworkManager[48993]: <info>  [1764405968.9160] device (tapc7be95ae-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:46:08 compute-2 ovn_controller[134375]: 2025-11-29T08:46:08Z|00897|binding|INFO|Releasing lport c7be95ae-9b02-4547-85ba-eba9c4fa484c from this chassis (sb_readonly=0)
Nov 29 08:46:08 compute-2 ovn_controller[134375]: 2025-11-29T08:46:08Z|00898|binding|INFO|Setting lport c7be95ae-9b02-4547-85ba-eba9c4fa484c down in Southbound
Nov 29 08:46:08 compute-2 ovn_controller[134375]: 2025-11-29T08:46:08Z|00899|binding|INFO|Removing iface tapc7be95ae-9b ovn-installed in OVS
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:08.928 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:cc:44 10.100.0.10'], port_security=['fa:16:3e:ed:cc:44 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '68f07426-9745-4038-b422-7117e62fddf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aa0419aa-564f-4458-8adf-ee20a8aa61d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41d2fa50-52e6-41f0-8017-13145416317d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c7be95ae-9b02-4547-85ba-eba9c4fa484c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:46:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:08.930 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c7be95ae-9b02-4547-85ba-eba9c4fa484c in datapath d7daafb1-8347-4bc3-b00b-9a558f101e51 unbound from our chassis
Nov 29 08:46:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:08.932 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7daafb1-8347-4bc3-b00b-9a558f101e51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:46:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:08.933 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2b42afc9-aa58-48be-a907-d4af71bf8f1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:08.934 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 namespace which is not needed anymore
Nov 29 08:46:08 compute-2 nova_compute[232428]: 2025-11-29 08:46:08.969 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:08 compute-2 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Nov 29 08:46:08 compute-2 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c1.scope: Consumed 14.569s CPU time.
Nov 29 08:46:08 compute-2 systemd-machined[194747]: Machine qemu-93-instance-000000c1 terminated.
Nov 29 08:46:09 compute-2 podman[323126]: 2025-11-29 08:46:09.005039629 +0000 UTC m=+0.066570862 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [NOTICE]   (323028) : haproxy version is 2.8.14-c23fe91
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [NOTICE]   (323028) : path to executable is /usr/sbin/haproxy
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [WARNING]  (323028) : Exiting Master process...
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [WARNING]  (323028) : Exiting Master process...
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [ALERT]    (323028) : Current worker (323030) exited with code 143 (Terminated)
Nov 29 08:46:09 compute-2 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[323024]: [WARNING]  (323028) : All workers exited. Exiting... (0)
Nov 29 08:46:09 compute-2 systemd[1]: libpod-0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f.scope: Deactivated successfully.
Nov 29 08:46:09 compute-2 podman[323169]: 2025-11-29 08:46:09.084280056 +0000 UTC m=+0.046896632 container died 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.097 232432 INFO nova.virt.libvirt.driver [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Instance destroyed successfully.
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.098 232432 DEBUG nova.objects.instance [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid 68f07426-9745-4038-b422-7117e62fddf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.112 232432 DEBUG nova.virt.libvirt.vif [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-1-423157441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=193,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:45:47Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-lxsyjimg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:47Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=68f07426-9745-4038-b422-7117e62fddf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.113 232432 DEBUG nova.network.os_vif_util [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "address": "fa:16:3e:ed:cc:44", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7be95ae-9b", "ovs_interfaceid": "c7be95ae-9b02-4547-85ba-eba9c4fa484c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.114 232432 DEBUG nova.network.os_vif_util [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.115 232432 DEBUG os_vif [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.118 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7be95ae-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.119 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f-userdata-shm.mount: Deactivated successfully.
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.121 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-360a21568c9244fa45a2988c8c3919e9b42d9ce2af49234965165e752777d693-merged.mount: Deactivated successfully.
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.126 232432 INFO os_vif [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:cc:44,bridge_name='br-int',has_traffic_filtering=True,id=c7be95ae-9b02-4547-85ba-eba9c4fa484c,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7be95ae-9b')
Nov 29 08:46:09 compute-2 podman[323169]: 2025-11-29 08:46:09.138441501 +0000 UTC m=+0.101058077 container cleanup 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:46:09 compute-2 systemd[1]: libpod-conmon-0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f.scope: Deactivated successfully.
Nov 29 08:46:09 compute-2 podman[323224]: 2025-11-29 08:46:09.207387507 +0000 UTC m=+0.045382004 container remove 0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:46:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.214 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b882fdd2-801a-4a98-a066-1592f1710bed]: (4, ('Sat Nov 29 08:46:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 (0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f)\n0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f\nSat Nov 29 08:46:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 (0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f)\n0261ac0e96704c34702aa0f02e9c11b7fc156611b8a877eee00cb204b0917d2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.216 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c33e4f61-7eb3-4ade-b37b-5981f1e7dd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.217 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7daafb1-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 kernel: tapd7daafb1-80: left promiscuous mode
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.222 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.225 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fd90b41f-778e-43c1-a384-ca6b097c8a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.239 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.245 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f62b95-8e16-46fd-aea4-c06c44950f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.246 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cf23618f-3ec5-4b60-bafc-539ad5796262]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.266 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b07fec03-50fb-4f56-a760-4730a73d95fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891340, 'reachable_time': 18783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323242, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.268 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:46:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:09.268 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[de029e52-ab14-40e4-964a-6b1707acf643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:09 compute-2 systemd[1]: run-netns-ovnmeta\x2dd7daafb1\x2d8347\x2d4bc3\x2db00b\x2d9a558f101e51.mount: Deactivated successfully.
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.623 232432 INFO nova.virt.libvirt.driver [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Deleting instance files /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3_del
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.624 232432 INFO nova.virt.libvirt.driver [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Deletion of /var/lib/nova/instances/68f07426-9745-4038-b422-7117e62fddf3_del complete
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.889 232432 DEBUG nova.compute.manager [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-unplugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.890 232432 DEBUG oslo_concurrency.lockutils [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.891 232432 DEBUG oslo_concurrency.lockutils [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.891 232432 DEBUG oslo_concurrency.lockutils [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.892 232432 DEBUG nova.compute.manager [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] No waiting events found dispatching network-vif-unplugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.893 232432 DEBUG nova.compute.manager [req-c7e49f6d-5e12-472e-8c83-c0cc03aa1db6 req-e645e132-cf86-496a-a39c-a874b789f24e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-unplugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.932 232432 INFO nova.compute.manager [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Took 1.08 seconds to destroy the instance on the hypervisor.
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.933 232432 DEBUG oslo.service.loopingcall [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.934 232432 DEBUG nova.compute.manager [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:46:09 compute-2 nova_compute[232428]: 2025-11-29 08:46:09.934 232432 DEBUG nova.network.neutron [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:46:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:10.218 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:46:10 compute-2 nova_compute[232428]: 2025-11-29 08:46:10.219 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:10.220 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:46:10 compute-2 ceph-mon[77138]: pgmap v3310: 305 pgs: 305 active+clean; 278 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 29 08:46:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:10 compute-2 sudo[323245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:10 compute-2 sudo[323245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:10 compute-2 sudo[323245]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:10 compute-2 sudo[323253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:10 compute-2 sudo[323253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:10 compute-2 sudo[323253]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:10 compute-2 sudo[323295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:10 compute-2 sudo[323295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:11 compute-2 sudo[323295]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:11 compute-2 sudo[323298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:46:11 compute-2 sudo[323298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:11 compute-2 sudo[323298]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:11 compute-2 sudo[323345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:11 compute-2 sudo[323345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:11 compute-2 sudo[323345]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:11 compute-2 sudo[323370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:46:11 compute-2 sudo[323370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:11 compute-2 sudo[323370]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:12.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.325 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:12 compute-2 ceph-mon[77138]: pgmap v3311: 305 pgs: 305 active+clean; 222 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:46:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:46:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:12.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.774 232432 DEBUG nova.compute.manager [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.774 232432 DEBUG oslo_concurrency.lockutils [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68f07426-9745-4038-b422-7117e62fddf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.775 232432 DEBUG oslo_concurrency.lockutils [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.775 232432 DEBUG oslo_concurrency.lockutils [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.775 232432 DEBUG nova.compute.manager [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] No waiting events found dispatching network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:46:12 compute-2 nova_compute[232428]: 2025-11-29 08:46:12.776 232432 WARNING nova.compute.manager [req-6cd7d3f8-9f40-45db-9dce-840851aecfce req-028a02d0-18bb-4d27-a537-d7c157afac33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received unexpected event network-vif-plugged-c7be95ae-9b02-4547-85ba-eba9c4fa484c for instance with vm_state active and task_state deleting.
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.068 232432 DEBUG nova.network.neutron [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.116 232432 INFO nova.compute.manager [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Took 3.18 seconds to deallocate network for instance.
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.207 232432 DEBUG nova.compute.manager [req-14b4544b-e012-4161-8b76-d1ed1cb03930 req-d61e3c6c-1131-400c-9509-e07e74ddfbeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Received event network-vif-deleted-c7be95ae-9b02-4547-85ba-eba9c4fa484c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.215 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.216 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.289 232432 DEBUG oslo_concurrency.processutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:46:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1594226553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.775 232432 DEBUG oslo_concurrency.processutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.781 232432 DEBUG nova.compute.provider_tree [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.808 232432 DEBUG nova.scheduler.client.report [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.855 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:13 compute-2 nova_compute[232428]: 2025-11-29 08:46:13.909 232432 INFO nova.scheduler.client.report [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance 68f07426-9745-4038-b422-7117e62fddf3
Nov 29 08:46:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:14.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:14 compute-2 nova_compute[232428]: 2025-11-29 08:46:14.090 232432 DEBUG oslo_concurrency.lockutils [None req-4f977f18-e2cf-4914-8c77-915ff68678f4 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "68f07426-9745-4038-b422-7117e62fddf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:14 compute-2 nova_compute[232428]: 2025-11-29 08:46:14.120 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:14 compute-2 ceph-mon[77138]: pgmap v3312: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 337 KiB/s rd, 1.3 MiB/s wr, 80 op/s
Nov 29 08:46:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1594226553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:14.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:16.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:16 compute-2 ceph-mon[77138]: pgmap v3313: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 700 KiB/s wr, 61 op/s
Nov 29 08:46:16 compute-2 podman[323452]: 2025-11-29 08:46:16.717499584 +0000 UTC m=+0.115647371 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:46:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:17 compute-2 nova_compute[232428]: 2025-11-29 08:46:17.326 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:18.224 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:18 compute-2 ceph-mon[77138]: pgmap v3314: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 61 KiB/s wr, 37 op/s
Nov 29 08:46:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:18 compute-2 sudo[323473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:18 compute-2 sudo[323473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:18 compute-2 sudo[323473]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:18 compute-2 sudo[323498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:46:18 compute-2 sudo[323498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:18 compute-2 sudo[323498]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:19 compute-2 nova_compute[232428]: 2025-11-29 08:46:19.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:46:19 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:46:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:20.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:20 compute-2 nova_compute[232428]: 2025-11-29 08:46:20.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:20 compute-2 ceph-mon[77138]: pgmap v3315: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 62 KiB/s wr, 37 op/s
Nov 29 08:46:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:21 compute-2 nova_compute[232428]: 2025-11-29 08:46:21.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:22.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:22 compute-2 nova_compute[232428]: 2025-11-29 08:46:22.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:22 compute-2 nova_compute[232428]: 2025-11-29 08:46:22.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:46:22 compute-2 nova_compute[232428]: 2025-11-29 08:46:22.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:46:22 compute-2 nova_compute[232428]: 2025-11-29 08:46:22.328 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:22 compute-2 ceph-mon[77138]: pgmap v3316: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 37 KiB/s wr, 33 op/s
Nov 29 08:46:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:22.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:22 compute-2 nova_compute[232428]: 2025-11-29 08:46:22.876 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:46:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:24.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:24 compute-2 nova_compute[232428]: 2025-11-29 08:46:24.095 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405969.0944264, 68f07426-9745-4038-b422-7117e62fddf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:46:24 compute-2 nova_compute[232428]: 2025-11-29 08:46:24.096 232432 INFO nova.compute.manager [-] [instance: 68f07426-9745-4038-b422-7117e62fddf3] VM Stopped (Lifecycle Event)
Nov 29 08:46:24 compute-2 nova_compute[232428]: 2025-11-29 08:46:24.124 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:24 compute-2 nova_compute[232428]: 2025-11-29 08:46:24.176 232432 DEBUG nova.compute.manager [None req-ef29ddd0-aa50-4c5d-9d04-b22adbe35037 - - - - - -] [instance: 68f07426-9745-4038-b422-7117e62fddf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:46:24 compute-2 nova_compute[232428]: 2025-11-29 08:46:24.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:24 compute-2 ceph-mon[77138]: pgmap v3317: 305 pgs: 305 active+clean; 166 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 10 KiB/s wr, 6 op/s
Nov 29 08:46:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:25 compute-2 nova_compute[232428]: 2025-11-29 08:46:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3802839166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:26.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:26 compute-2 ceph-mon[77138]: pgmap v3318: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 29 op/s
Nov 29 08:46:26 compute-2 nova_compute[232428]: 2025-11-29 08:46:26.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:27 compute-2 nova_compute[232428]: 2025-11-29 08:46:27.329 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:27 compute-2 podman[323527]: 2025-11-29 08:46:27.729489156 +0000 UTC m=+0.118746647 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 08:46:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:28.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:28 compute-2 nova_compute[232428]: 2025-11-29 08:46:28.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:46:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276330531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:46:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:46:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276330531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:46:28 compute-2 ceph-mon[77138]: pgmap v3319: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 08:46:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3712535892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1276330531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:46:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1276330531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:46:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.127 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.269 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.270 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.270 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.270 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.271 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2041589643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:29 compute-2 ceph-mon[77138]: pgmap v3320: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 08:46:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:46:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4236838875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.758 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.983 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.985 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4202MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.986 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:29 compute-2 nova_compute[232428]: 2025-11-29 08:46:29.986 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:30.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.365 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.366 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.435 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4236838875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:46:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2930446240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.929 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.938 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:46:30 compute-2 nova_compute[232428]: 2025-11-29 08:46:30.969 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:46:31 compute-2 nova_compute[232428]: 2025-11-29 08:46:31.110 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:46:31 compute-2 nova_compute[232428]: 2025-11-29 08:46:31.111 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:31 compute-2 sudo[323600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:31 compute-2 sudo[323600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:31 compute-2 sudo[323600]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:31 compute-2 sudo[323625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:31 compute-2 sudo[323625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:31 compute-2 sudo[323625]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2930446240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:31 compute-2 ceph-mon[77138]: pgmap v3321: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:46:31 compute-2 nova_compute[232428]: 2025-11-29 08:46:31.860 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:31 compute-2 nova_compute[232428]: 2025-11-29 08:46:31.986 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:32 compute-2 nova_compute[232428]: 2025-11-29 08:46:32.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:32.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4231914523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:33 compute-2 ceph-mon[77138]: pgmap v3322: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 29 08:46:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/238002989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:34.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:34 compute-2 nova_compute[232428]: 2025-11-29 08:46:34.112 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:46:34 compute-2 nova_compute[232428]: 2025-11-29 08:46:34.113 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:46:34 compute-2 nova_compute[232428]: 2025-11-29 08:46:34.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:34.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:36.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:36 compute-2 ceph-mon[77138]: pgmap v3323: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 29 08:46:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:37 compute-2 nova_compute[232428]: 2025-11-29 08:46:37.335 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:38 compute-2 ceph-mon[77138]: pgmap v3324: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:46:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:39 compute-2 nova_compute[232428]: 2025-11-29 08:46:39.130 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:39 compute-2 podman[323655]: 2025-11-29 08:46:39.70167661 +0000 UTC m=+0.094994907 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 08:46:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:40 compute-2 ceph-mon[77138]: pgmap v3325: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:46:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:40.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:40 compute-2 nova_compute[232428]: 2025-11-29 08:46:40.945 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:40 compute-2 nova_compute[232428]: 2025-11-29 08:46:40.945 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:40 compute-2 nova_compute[232428]: 2025-11-29 08:46:40.987 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.179 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.180 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.194 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.195 232432 INFO nova.compute.claims [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.336 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:46:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3123141209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.799 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.808 232432 DEBUG nova.compute.provider_tree [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.838 232432 DEBUG nova.scheduler.client.report [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.905 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.907 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.996 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:46:41 compute-2 nova_compute[232428]: 2025-11-29 08:46:41.997 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:46:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:42.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.042 232432 INFO nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.109 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:46:42 compute-2 ceph-mon[77138]: pgmap v3326: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:46:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3123141209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.246 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.248 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.249 232432 INFO nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Creating image(s)
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.299 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.351 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.388 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.393 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.423 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.482 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.483 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.483 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.484 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.511 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.516 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.557 232432 DEBUG nova.policy [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:46:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.818 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:42 compute-2 nova_compute[232428]: 2025-11-29 08:46:42.935 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.080 232432 DEBUG nova.objects.instance [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.145 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.146 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Ensure instance console log exists: /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.146 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.147 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:43 compute-2 nova_compute[232428]: 2025-11-29 08:46:43.147 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:44.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:44 compute-2 nova_compute[232428]: 2025-11-29 08:46:44.133 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:44 compute-2 ceph-mon[77138]: pgmap v3327: 305 pgs: 305 active+clean; 136 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 441 KiB/s wr, 22 op/s
Nov 29 08:46:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:45 compute-2 nova_compute[232428]: 2025-11-29 08:46:45.960 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Successfully created port: 5526816d-0012-44bf-b42e-02c86af3be28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:46:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:46.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:46 compute-2 ceph-mon[77138]: pgmap v3328: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:46:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.340 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:47 compute-2 podman[323866]: 2025-11-29 08:46:47.647414315 +0000 UTC m=+0.057382888 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.747 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Successfully updated port: 5526816d-0012-44bf-b42e-02c86af3be28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.771 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.771 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.771 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.969 232432 DEBUG nova.compute.manager [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-changed-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.970 232432 DEBUG nova.compute.manager [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing instance network info cache due to event network-changed-5526816d-0012-44bf-b42e-02c86af3be28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:46:47 compute-2 nova_compute[232428]: 2025-11-29 08:46:47.971 232432 DEBUG oslo_concurrency.lockutils [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:46:48 compute-2 nova_compute[232428]: 2025-11-29 08:46:48.012 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:46:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:48.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:48 compute-2 ceph-mon[77138]: pgmap v3329: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:46:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:49 compute-2 nova_compute[232428]: 2025-11-29 08:46:49.136 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:50 compute-2 ceph-mon[77138]: pgmap v3330: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:46:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:50 compute-2 nova_compute[232428]: 2025-11-29 08:46:50.949 232432 DEBUG nova.network.neutron [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.013 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.013 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance network_info: |[{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.014 232432 DEBUG oslo_concurrency.lockutils [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.014 232432 DEBUG nova.network.neutron [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.020 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Start _get_guest_xml network_info=[{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.027 232432 WARNING nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.031 232432 DEBUG nova.virt.libvirt.host [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.032 232432 DEBUG nova.virt.libvirt.host [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.034 232432 DEBUG nova.virt.libvirt.host [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.035 232432 DEBUG nova.virt.libvirt.host [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.036 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.036 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.037 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.037 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.037 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.038 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.038 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.038 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.038 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.039 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.039 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.039 232432 DEBUG nova.virt.hardware [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.043 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:51 compute-2 sudo[323908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:51 compute-2 sudo[323908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:51 compute-2 sudo[323908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:51 compute-2 sudo[323933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:46:51 compute-2 sudo[323933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:46:51 compute-2 sudo[323933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:46:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:46:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/19907677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.487 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.529 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.534 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:46:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1146843468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.983 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.987 232432 DEBUG nova.virt.libvirt.vif [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1075941855',display_name='tempest-TestNetworkAdvancedServerOps-server-1075941855',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1075941855',id=194,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGE5pn+I70oZmHuiim1TUtpKIGR7Kqkz86qivG/cMWSpTnsyRPWZiJEZtpdA3p62tfZxizy0V57ahkV6w9stXKLiRpJWXZOS5iNHoB3QHRD9m7s70TW3AQX73YlXMmf2hg==',key_name='tempest-TestNetworkAdvancedServerOps-553049570',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-pr4xow81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:42Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.987 232432 DEBUG nova.network.os_vif_util [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.989 232432 DEBUG nova.network.os_vif_util [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:46:51 compute-2 nova_compute[232428]: 2025-11-29 08:46:51.992 232432 DEBUG nova.objects.instance [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.025 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <uuid>16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4</uuid>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <name>instance-000000c2</name>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1075941855</nova:name>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:46:51</nova:creationTime>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <nova:port uuid="5526816d-0012-44bf-b42e-02c86af3be28">
Nov 29 08:46:52 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <system>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="serial">16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="uuid">16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </system>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <os>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </os>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <features>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </features>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk">
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </source>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config">
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </source>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:46:52 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:d6:89:f6"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <target dev="tap5526816d-00"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/console.log" append="off"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <video>
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </video>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:46:52 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:46:52 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:46:52 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:46:52 compute-2 nova_compute[232428]: </domain>
Nov 29 08:46:52 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.026 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Preparing to wait for external event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.027 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.028 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.028 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.029 232432 DEBUG nova.virt.libvirt.vif [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1075941855',display_name='tempest-TestNetworkAdvancedServerOps-server-1075941855',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1075941855',id=194,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGE5pn+I70oZmHuiim1TUtpKIGR7Kqkz86qivG/cMWSpTnsyRPWZiJEZtpdA3p62tfZxizy0V57ahkV6w9stXKLiRpJWXZOS5iNHoB3QHRD9m7s70TW3AQX73YlXMmf2hg==',key_name='tempest-TestNetworkAdvancedServerOps-553049570',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-pr4xow81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:42Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.030 232432 DEBUG nova.network.os_vif_util [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.031 232432 DEBUG nova.network.os_vif_util [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.032 232432 DEBUG os_vif [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.033 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.034 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.035 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.044 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.045 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5526816d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.046 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5526816d-00, col_values=(('external_ids', {'iface-id': '5526816d-0012-44bf-b42e-02c86af3be28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:89:f6', 'vm-uuid': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:52 compute-2 NetworkManager[48993]: <info>  [1764406012.0490] manager: (tap5526816d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 29 08:46:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:46:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:46:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:52.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.060 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.061 232432 INFO os_vif [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00')
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.147 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.148 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.149 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:d6:89:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.150 232432 INFO nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Using config drive
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.195 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:52 compute-2 ceph-mon[77138]: pgmap v3331: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:46:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/19907677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:46:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1146843468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.343 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.757 232432 DEBUG nova.network.neutron [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updated VIF entry in instance network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.757 232432 DEBUG nova.network.neutron [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:46:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:52.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.795 232432 DEBUG oslo_concurrency.lockutils [req-668d1c3b-286d-4001-b0b0-b21c3c5c63d6 req-e51f09fc-9ff5-4865-9c5c-65a8127a8265 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.864 232432 INFO nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Creating config drive at /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config
Nov 29 08:46:52 compute-2 nova_compute[232428]: 2025-11-29 08:46:52.875 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis5g8d_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.039 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis5g8d_r" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.090 232432 DEBUG nova.storage.rbd_utils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.096 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.327 232432 DEBUG oslo_concurrency.processutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.328 232432 INFO nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Deleting local config drive /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4/disk.config because it was imported into RBD.
Nov 29 08:46:53 compute-2 kernel: tap5526816d-00: entered promiscuous mode
Nov 29 08:46:53 compute-2 ovn_controller[134375]: 2025-11-29T08:46:53Z|00900|binding|INFO|Claiming lport 5526816d-0012-44bf-b42e-02c86af3be28 for this chassis.
Nov 29 08:46:53 compute-2 ovn_controller[134375]: 2025-11-29T08:46:53Z|00901|binding|INFO|5526816d-0012-44bf-b42e-02c86af3be28: Claiming fa:16:3e:d6:89:f6 10.100.0.9
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.4072] manager: (tap5526816d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.413 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.429 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:89:f6 10.100.0.9'], port_security=['fa:16:3e:d6:89:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd43b1520-d847-4774-a28f-9922d649e636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f5a9c50-8742-4c67-ad6b-d9e1980cf3cc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=5526816d-0012-44bf-b42e-02c86af3be28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.431 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 5526816d-0012-44bf-b42e-02c86af3be28 in datapath aab720ee-c62f-473f-a1c1-c350c6375dc0 bound to our chassis
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.433 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:46:53 compute-2 systemd-udevd[324071]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.453 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8f3377-04f0-4789-840b-7745bedece2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.455 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaab720ee-c1 in ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.4597] device (tap5526816d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.4617] device (tap5526816d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.457 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaab720ee-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.457 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[00782115-b7cd-4613-8799-44ce00ecdc56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.462 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[872ea496-e872-457a-ac21-4891b3c876ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 systemd-machined[194747]: New machine qemu-94-instance-000000c2.
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.482 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[30ce0ffd-0000-4d63-b167-ad23f6668308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.487 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 ovn_controller[134375]: 2025-11-29T08:46:53Z|00902|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 ovn-installed in OVS
Nov 29 08:46:53 compute-2 ovn_controller[134375]: 2025-11-29T08:46:53Z|00903|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 up in Southbound
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 systemd[1]: Started Virtual Machine qemu-94-instance-000000c2.
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.501 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[98fa0ef1-79c8-49a3-be9b-6e8fb5a61732]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.541 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6dcda5-6baa-46bf-90f0-0e28780f95d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.549 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9eaa1a81-d5af-42c6-bef9-b28d9ce9909a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.5509] manager: (tapaab720ee-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.597 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[da61a873-d5e7-4af9-a310-6b6705ec69f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.601 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecdfd75-732e-42ed-b4fb-2cb62b1c8474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.6356] device (tapaab720ee-c0): carrier: link connected
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.645 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[04e32324-2a27-451c-bf27-10de0a7cd26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.672 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8bce5de5-da90-474c-9396-34ee81b52d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaab720ee-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:7d:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897990, 'reachable_time': 16298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324107, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.695 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[45cf1aad-52ad-4b9c-a386-2c287edfbafe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:7d04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897990, 'tstamp': 897990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324108, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.722 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c386a7d3-c668-4001-a2d6-763f096a80a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaab720ee-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:7d:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897990, 'reachable_time': 16298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324109, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.767 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f4825118-5682-4f90-a553-d30cd5b0acb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.865 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f990e6f-d933-47e0-88de-13929c7d04e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaab720ee-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.868 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaab720ee-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 NetworkManager[48993]: <info>  [1764406013.8716] manager: (tapaab720ee-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 29 08:46:53 compute-2 kernel: tapaab720ee-c0: entered promiscuous mode
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.875 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaab720ee-c0, col_values=(('external_ids', {'iface-id': '586a9bd1-be8b-4e03-9f94-3539bdb70afe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 ovn_controller[134375]: 2025-11-29T08:46:53Z|00904|binding|INFO|Releasing lport 586a9bd1-be8b-4e03-9f94-3539bdb70afe from this chassis (sb_readonly=0)
Nov 29 08:46:53 compute-2 nova_compute[232428]: 2025-11-29 08:46:53.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.909 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.910 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bfa261-cf03-4f24-8ee0-c1af81a79959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.912 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:46:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:53.913 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'env', 'PROCESS_TAG=haproxy-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aab720ee-c62f-473f-a1c1-c350c6375dc0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:46:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.145 232432 DEBUG nova.compute.manager [req-e5b4dd11-3684-4bf4-a207-e2f1e32bb914 req-b978f0f6-0217-4dba-8e57-ee5d727213ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.145 232432 DEBUG oslo_concurrency.lockutils [req-e5b4dd11-3684-4bf4-a207-e2f1e32bb914 req-b978f0f6-0217-4dba-8e57-ee5d727213ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.146 232432 DEBUG oslo_concurrency.lockutils [req-e5b4dd11-3684-4bf4-a207-e2f1e32bb914 req-b978f0f6-0217-4dba-8e57-ee5d727213ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.146 232432 DEBUG oslo_concurrency.lockutils [req-e5b4dd11-3684-4bf4-a207-e2f1e32bb914 req-b978f0f6-0217-4dba-8e57-ee5d727213ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.147 232432 DEBUG nova.compute.manager [req-e5b4dd11-3684-4bf4-a207-e2f1e32bb914 req-b978f0f6-0217-4dba-8e57-ee5d727213ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Processing event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:46:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.231 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406014.2304902, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.231 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Started (Lifecycle Event)
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.235 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.240 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.244 232432 INFO nova.virt.libvirt.driver [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance spawned successfully.
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.244 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.265 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.269 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:46:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:54.269 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.271 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.295 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.295 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.296 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.296 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.297 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.297 232432 DEBUG nova.virt.libvirt.driver [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.301 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.302 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406014.230619, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.302 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Paused (Lifecycle Event)
Nov 29 08:46:54 compute-2 ceph-mon[77138]: pgmap v3332: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.355 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.359 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406014.2385693, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.359 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Resumed (Lifecycle Event)
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.386 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.390 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.395 232432 INFO nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Took 12.15 seconds to spawn the instance on the hypervisor.
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.396 232432 DEBUG nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.412 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:46:54 compute-2 podman[324182]: 2025-11-29 08:46:54.43385211 +0000 UTC m=+0.090085105 container create e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:46:54 compute-2 podman[324182]: 2025-11-29 08:46:54.378886218 +0000 UTC m=+0.035119303 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.479 232432 INFO nova.compute.manager [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Took 13.37 seconds to build instance.
Nov 29 08:46:54 compute-2 systemd[1]: Started libpod-conmon-e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77.scope.
Nov 29 08:46:54 compute-2 nova_compute[232428]: 2025-11-29 08:46:54.494 232432 DEBUG oslo_concurrency.lockutils [None req-9513b956-6ab4-439c-b3e1-08b287586a78 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:54 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:46:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb13e461e393b3e945d2b8fbd570b17451ccde1fd6abcc6083f287705183bff1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:46:54 compute-2 podman[324182]: 2025-11-29 08:46:54.539736325 +0000 UTC m=+0.195969360 container init e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:46:54 compute-2 podman[324182]: 2025-11-29 08:46:54.545473313 +0000 UTC m=+0.201706308 container start e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:46:54 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [NOTICE]   (324201) : New worker (324203) forked
Nov 29 08:46:54 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [NOTICE]   (324201) : Loading success.
Nov 29 08:46:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:54.605 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:46:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:46:55.608 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:46:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:46:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:56.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.272 232432 DEBUG nova.compute.manager [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.272 232432 DEBUG oslo_concurrency.lockutils [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.273 232432 DEBUG oslo_concurrency.lockutils [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.273 232432 DEBUG oslo_concurrency.lockutils [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.273 232432 DEBUG nova.compute.manager [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:46:56 compute-2 nova_compute[232428]: 2025-11-29 08:46:56.273 232432 WARNING nova.compute.manager [req-9e1db0f6-9db6-40a7-83fa-f76d47026ef8 req-fe80c593-cdbb-4cb9-9f8b-00811e04ef88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state active and task_state None.
Nov 29 08:46:56 compute-2 ceph-mon[77138]: pgmap v3333: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Nov 29 08:46:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:57 compute-2 nova_compute[232428]: 2025-11-29 08:46:57.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:57 compute-2 nova_compute[232428]: 2025-11-29 08:46:57.345 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:58.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:58 compute-2 podman[324215]: 2025-11-29 08:46:58.765016903 +0000 UTC m=+0.155790590 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Nov 29 08:46:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:46:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:46:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:46:58 compute-2 ceph-mon[77138]: pgmap v3334: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 511 B/s wr, 13 op/s
Nov 29 08:46:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:46:59 compute-2 NetworkManager[48993]: <info>  [1764406019.4332] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.432 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:59 compute-2 NetworkManager[48993]: <info>  [1764406019.4343] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.543 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:59 compute-2 ovn_controller[134375]: 2025-11-29T08:46:59Z|00905|binding|INFO|Releasing lport 586a9bd1-be8b-4e03-9f94-3539bdb70afe from this chassis (sb_readonly=0)
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.555 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.818 232432 DEBUG nova.compute.manager [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-changed-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.819 232432 DEBUG nova.compute.manager [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing instance network info cache due to event network-changed-5526816d-0012-44bf-b42e-02c86af3be28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.819 232432 DEBUG oslo_concurrency.lockutils [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.820 232432 DEBUG oslo_concurrency.lockutils [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:46:59 compute-2 nova_compute[232428]: 2025-11-29 08:46:59.820 232432 DEBUG nova.network.neutron [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:47:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:00.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:00 compute-2 ceph-mon[77138]: pgmap v3335: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 605 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 29 08:47:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:02 compute-2 nova_compute[232428]: 2025-11-29 08:47:02.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:02 compute-2 ceph-mon[77138]: pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:47:02 compute-2 nova_compute[232428]: 2025-11-29 08:47:02.349 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:02.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:03.350 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:03.351 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:03.352 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:03 compute-2 nova_compute[232428]: 2025-11-29 08:47:03.909 232432 DEBUG nova.network.neutron [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updated VIF entry in instance network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:47:03 compute-2 nova_compute[232428]: 2025-11-29 08:47:03.909 232432 DEBUG nova.network.neutron [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:47:03 compute-2 nova_compute[232428]: 2025-11-29 08:47:03.945 232432 DEBUG oslo_concurrency.lockutils [req-3d27c749-dbda-492a-a5cd-9e9571648f61 req-dde31af9-b1b0-40a5-b481-12e4868375eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:47:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:04.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:04 compute-2 ceph-mon[77138]: pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:47:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:04.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:06.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:06 compute-2 ceph-mon[77138]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:47:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:06.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:07 compute-2 nova_compute[232428]: 2025-11-29 08:47:07.054 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:07 compute-2 nova_compute[232428]: 2025-11-29 08:47:07.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:07 compute-2 nova_compute[232428]: 2025-11-29 08:47:07.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:07 compute-2 ovn_controller[134375]: 2025-11-29T08:47:07Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:89:f6 10.100.0.9
Nov 29 08:47:07 compute-2 ovn_controller[134375]: 2025-11-29T08:47:07Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:89:f6 10.100.0.9
Nov 29 08:47:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:08 compute-2 nova_compute[232428]: 2025-11-29 08:47:08.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:08 compute-2 ceph-mon[77138]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 29 08:47:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:08.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:10 compute-2 ceph-mon[77138]: pgmap v3340: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 582 KiB/s wr, 84 op/s
Nov 29 08:47:10 compute-2 podman[324247]: 2025-11-29 08:47:10.710103144 +0000 UTC m=+0.099148407 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:47:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:11 compute-2 sudo[324266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:11 compute-2 sudo[324266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:11 compute-2 sudo[324266]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:11 compute-2 sudo[324291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:11 compute-2 sudo[324291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:11 compute-2 sudo[324291]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:12 compute-2 nova_compute[232428]: 2025-11-29 08:47:12.056 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:12.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:12 compute-2 ceph-mon[77138]: pgmap v3341: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 08:47:12 compute-2 nova_compute[232428]: 2025-11-29 08:47:12.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:12.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.228 232432 INFO nova.compute.manager [None req-7a2bd8a3-0de4-42a4-93b3-5e2794cdc81e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Get console output
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.236 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.587 232432 DEBUG nova.objects.instance [None req-38856925-1a10-4f89-8f0f-35bf66d3afa1 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.611 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406033.6108413, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.611 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Paused (Lifecycle Event)
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.631 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.635 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:47:13 compute-2 nova_compute[232428]: 2025-11-29 08:47:13.660 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 08:47:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:14 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:14 compute-2 ceph-mon[77138]: pgmap v3342: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:47:14 compute-2 kernel: tap5526816d-00 (unregistering): left promiscuous mode
Nov 29 08:47:14 compute-2 NetworkManager[48993]: <info>  [1764406034.9290] device (tap5526816d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:47:14 compute-2 nova_compute[232428]: 2025-11-29 08:47:14.943 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:14 compute-2 ovn_controller[134375]: 2025-11-29T08:47:14Z|00906|binding|INFO|Releasing lport 5526816d-0012-44bf-b42e-02c86af3be28 from this chassis (sb_readonly=0)
Nov 29 08:47:14 compute-2 ovn_controller[134375]: 2025-11-29T08:47:14Z|00907|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 down in Southbound
Nov 29 08:47:14 compute-2 ovn_controller[134375]: 2025-11-29T08:47:14Z|00908|binding|INFO|Removing iface tap5526816d-00 ovn-installed in OVS
Nov 29 08:47:14 compute-2 nova_compute[232428]: 2025-11-29 08:47:14.948 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:14.957 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:89:f6 10.100.0.9'], port_security=['fa:16:3e:d6:89:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd43b1520-d847-4774-a28f-9922d649e636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f5a9c50-8742-4c67-ad6b-d9e1980cf3cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=5526816d-0012-44bf-b42e-02c86af3be28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:47:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:14.959 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 5526816d-0012-44bf-b42e-02c86af3be28 in datapath aab720ee-c62f-473f-a1c1-c350c6375dc0 unbound from our chassis
Nov 29 08:47:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:14.962 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aab720ee-c62f-473f-a1c1-c350c6375dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:47:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:14.964 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[655e7e3c-45e4-43ab-ad69-d2d6111c3849]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:14.965 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 namespace which is not needed anymore
Nov 29 08:47:14 compute-2 nova_compute[232428]: 2025-11-29 08:47:14.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:15 compute-2 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Nov 29 08:47:15 compute-2 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c2.scope: Consumed 14.760s CPU time.
Nov 29 08:47:15 compute-2 systemd-machined[194747]: Machine qemu-94-instance-000000c2 terminated.
Nov 29 08:47:15 compute-2 nova_compute[232428]: 2025-11-29 08:47:15.101 232432 DEBUG nova.compute.manager [None req-38856925-1a10-4f89-8f0f-35bf66d3afa1 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:15 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [NOTICE]   (324201) : haproxy version is 2.8.14-c23fe91
Nov 29 08:47:15 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [NOTICE]   (324201) : path to executable is /usr/sbin/haproxy
Nov 29 08:47:15 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [WARNING]  (324201) : Exiting Master process...
Nov 29 08:47:15 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [ALERT]    (324201) : Current worker (324203) exited with code 143 (Terminated)
Nov 29 08:47:15 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324197]: [WARNING]  (324201) : All workers exited. Exiting... (0)
Nov 29 08:47:15 compute-2 systemd[1]: libpod-e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77.scope: Deactivated successfully.
Nov 29 08:47:15 compute-2 podman[324348]: 2025-11-29 08:47:15.139684941 +0000 UTC m=+0.052291509 container died e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:47:15 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77-userdata-shm.mount: Deactivated successfully.
Nov 29 08:47:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-fb13e461e393b3e945d2b8fbd570b17451ccde1fd6abcc6083f287705183bff1-merged.mount: Deactivated successfully.
Nov 29 08:47:15 compute-2 podman[324348]: 2025-11-29 08:47:15.20618223 +0000 UTC m=+0.118788778 container cleanup e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 08:47:15 compute-2 systemd[1]: libpod-conmon-e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77.scope: Deactivated successfully.
Nov 29 08:47:15 compute-2 nova_compute[232428]: 2025-11-29 08:47:15.231 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:15 compute-2 podman[324386]: 2025-11-29 08:47:15.307895076 +0000 UTC m=+0.065712447 container remove e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.314 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[08a73996-c230-4c68-9207-8a6c721cfd6a]: (4, ('Sat Nov 29 08:47:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 (e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77)\ne60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77\nSat Nov 29 08:47:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 (e60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77)\ne60e5541a8dcbdf8cb8ce3914633c1b30addf695b823024458c2b8327a935d77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.316 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3590e510-cf38-4ea8-8afc-2c848c39f8a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.318 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaab720ee-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:15 compute-2 nova_compute[232428]: 2025-11-29 08:47:15.321 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:15 compute-2 kernel: tapaab720ee-c0: left promiscuous mode
Nov 29 08:47:15 compute-2 nova_compute[232428]: 2025-11-29 08:47:15.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.357 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5046249a-fb33-4783-beaf-5ca2bcaffb25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.378 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f219d233-38be-4866-9bac-0b380ed6f21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.378 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[84ab4f51-8c52-47ea-b029-c2841c19fc48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.396 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f83f696f-a897-4a1e-ac3d-60e5eedb3f30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897980, 'reachable_time': 34591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324404, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.399 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:47:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:15.399 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[646f0673-96da-483f-bd80-cef5e245a631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:15 compute-2 systemd[1]: run-netns-ovnmeta\x2daab720ee\x2dc62f\x2d473f\x2da1c1\x2dc350c6375dc0.mount: Deactivated successfully.
Nov 29 08:47:15 compute-2 ceph-mon[77138]: pgmap v3343: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:47:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:16.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:16.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.823 232432 DEBUG nova.compute.manager [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.824 232432 DEBUG oslo_concurrency.lockutils [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.824 232432 DEBUG oslo_concurrency.lockutils [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.824 232432 DEBUG oslo_concurrency.lockutils [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.825 232432 DEBUG nova.compute.manager [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:17 compute-2 nova_compute[232428]: 2025-11-29 08:47:17.825 232432 WARNING nova.compute.manager [req-0b8274f5-51dc-4abc-a51b-75f1ad57c3f2 req-3d29dfd1-5a64-4fda-ac59-7ca5f18128ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state suspended and task_state None.
Nov 29 08:47:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:18.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:18 compute-2 ceph-mon[77138]: pgmap v3344: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:47:18 compute-2 podman[324407]: 2025-11-29 08:47:18.715273219 +0000 UTC m=+0.106768984 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:47:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:18.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:18 compute-2 nova_compute[232428]: 2025-11-29 08:47:18.923 232432 INFO nova.compute.manager [None req-1f8109b6-f6be-4422-a399-58e700af5708 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Get console output
Nov 29 08:47:19 compute-2 sudo[324429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:19 compute-2 sudo[324429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324429]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:19 compute-2 sudo[324454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:47:19 compute-2 sudo[324454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324454]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:19 compute-2 nova_compute[232428]: 2025-11-29 08:47:19.217 232432 INFO nova.compute.manager [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Resuming
Nov 29 08:47:19 compute-2 nova_compute[232428]: 2025-11-29 08:47:19.219 232432 DEBUG nova.objects.instance [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'flavor' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:19 compute-2 nova_compute[232428]: 2025-11-29 08:47:19.290 232432 DEBUG oslo_concurrency.lockutils [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:47:19 compute-2 nova_compute[232428]: 2025-11-29 08:47:19.291 232432 DEBUG oslo_concurrency.lockutils [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:47:19 compute-2 nova_compute[232428]: 2025-11-29 08:47:19.291 232432 DEBUG nova.network.neutron [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:47:19 compute-2 sudo[324479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:19 compute-2 sudo[324479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324479]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:19 compute-2 sudo[324504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:47:19 compute-2 sudo[324504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324504]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:19 compute-2 sudo[324549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:19 compute-2 sudo[324549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324549]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:19 compute-2 sudo[324575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:47:19 compute-2 sudo[324575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:19 compute-2 sudo[324575]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.023 232432 DEBUG nova.compute.manager [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.024 232432 DEBUG oslo_concurrency.lockutils [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.025 232432 DEBUG oslo_concurrency.lockutils [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.026 232432 DEBUG oslo_concurrency.lockutils [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.026 232432 DEBUG nova.compute.manager [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:20 compute-2 nova_compute[232428]: 2025-11-29 08:47:20.027 232432 WARNING nova.compute.manager [req-c9a7b4fd-7926-4ac5-b21a-6496f71f3fcb req-3766b9aa-2d7d-4425-afbd-e58136cff027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state suspended and task_state resuming.
Nov 29 08:47:20 compute-2 sudo[324600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:20 compute-2 sudo[324600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:20 compute-2 sudo[324600]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:20.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:20 compute-2 sudo[324625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:47:20 compute-2 sudo[324625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:20 compute-2 ceph-mon[77138]: pgmap v3345: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:47:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:47:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:47:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:47:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:47:20 compute-2 sudo[324625]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:21 compute-2 nova_compute[232428]: 2025-11-29 08:47:21.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:21 compute-2 nova_compute[232428]: 2025-11-29 08:47:21.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:47:21 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.062 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:22.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.225 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:47:22 compute-2 ceph-mon[77138]: pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Nov 29 08:47:22 compute-2 nova_compute[232428]: 2025-11-29 08:47:22.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:22.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.270 232432 DEBUG nova.network.neutron [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.293 232432 DEBUG oslo_concurrency.lockutils [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.294 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.294 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.295 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.302 232432 DEBUG nova.virt.libvirt.vif [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1075941855',display_name='tempest-TestNetworkAdvancedServerOps-server-1075941855',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1075941855',id=194,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGE5pn+I70oZmHuiim1TUtpKIGR7Kqkz86qivG/cMWSpTnsyRPWZiJEZtpdA3p62tfZxizy0V57ahkV6w9stXKLiRpJWXZOS5iNHoB3QHRD9m7s70TW3AQX73YlXMmf2hg==',key_name='tempest-TestNetworkAdvancedServerOps-553049570',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-pr4xow81',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:47:15Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.302 232432 DEBUG nova.network.os_vif_util [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.304 232432 DEBUG nova.network.os_vif_util [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.304 232432 DEBUG os_vif [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.307 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.308 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.313 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5526816d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.313 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5526816d-00, col_values=(('external_ids', {'iface-id': '5526816d-0012-44bf-b42e-02c86af3be28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:89:f6', 'vm-uuid': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.314 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.315 232432 INFO os_vif [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00')
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.340 232432 DEBUG nova.objects.instance [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:23 compute-2 kernel: tap5526816d-00: entered promiscuous mode
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.4560] manager: (tap5526816d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Nov 29 08:47:23 compute-2 ovn_controller[134375]: 2025-11-29T08:47:23Z|00909|binding|INFO|Claiming lport 5526816d-0012-44bf-b42e-02c86af3be28 for this chassis.
Nov 29 08:47:23 compute-2 ovn_controller[134375]: 2025-11-29T08:47:23Z|00910|binding|INFO|5526816d-0012-44bf-b42e-02c86af3be28: Claiming fa:16:3e:d6:89:f6 10.100.0.9
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.468 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:89:f6 10.100.0.9'], port_security=['fa:16:3e:d6:89:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd43b1520-d847-4774-a28f-9922d649e636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f5a9c50-8742-4c67-ad6b-d9e1980cf3cc, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=5526816d-0012-44bf-b42e-02c86af3be28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.471 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 5526816d-0012-44bf-b42e-02c86af3be28 in datapath aab720ee-c62f-473f-a1c1-c350c6375dc0 bound to our chassis
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.475 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:47:23 compute-2 ovn_controller[134375]: 2025-11-29T08:47:23Z|00911|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 ovn-installed in OVS
Nov 29 08:47:23 compute-2 ovn_controller[134375]: 2025-11-29T08:47:23Z|00912|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 up in Southbound
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.482 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.486 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.495 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ed1970-09ac-47f7-9aec-3e88679c21e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.496 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaab720ee-c1 in ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.499 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaab720ee-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.499 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4426f7d9-6f18-4227-ad51-0b00ad1fe3da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.500 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80aca198-67c8-4a87-918c-e1817885bc7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 systemd-machined[194747]: New machine qemu-95-instance-000000c2.
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.516 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c1750-2cc9-4025-b236-418db6d924fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 systemd[1]: Started Virtual Machine qemu-95-instance-000000c2.
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.532 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c58ec6cd-90d9-466b-8c69-894e57f684bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 systemd-udevd[324699]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.5606] device (tap5526816d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.5623] device (tap5526816d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.573 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[331992d1-b34b-4862-8712-e6273844166e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.5835] manager: (tapaab720ee-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.582 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b2db221f-7100-49eb-bcdb-09dd410d1bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 systemd-udevd[324703]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.620 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7b50d1-1b67-491e-b438-abce728cd66a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.625 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd7c387-f0a8-41b6-841f-08e8fca20ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.6543] device (tapaab720ee-c0): carrier: link connected
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.660 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe8ce18-1751-40e3-8725-ed6f814573b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.681 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1b44ca-0c43-48e5-9ab7-83ba3d231fab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaab720ee-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:7d:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900992, 'reachable_time': 16518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324731, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.697 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4708d222-d469-4a5a-8378-3dcc3d4ab1d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:7d04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 900992, 'tstamp': 900992}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324732, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.720 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[896dc803-0606-4e55-af7c-5828f1254746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaab720ee-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:7d:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900992, 'reachable_time': 16518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324733, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.767 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e034a1-8cc6-4349-921f-f2454427a7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.859 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[13c3cd50-4f58-4f52-b3cd-b66fd443e6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.861 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaab720ee-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.861 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.862 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaab720ee-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 kernel: tapaab720ee-c0: entered promiscuous mode
Nov 29 08:47:23 compute-2 NetworkManager[48993]: <info>  [1764406043.8656] manager: (tapaab720ee-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaab720ee-c0, col_values=(('external_ids', {'iface-id': '586a9bd1-be8b-4e03-9f94-3539bdb70afe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:23 compute-2 ovn_controller[134375]: 2025-11-29T08:47:23Z|00913|binding|INFO|Releasing lport 586a9bd1-be8b-4e03-9f94-3539bdb70afe from this chassis (sb_readonly=0)
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.885 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 nova_compute[232428]: 2025-11-29 08:47:23.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.907 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.908 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[07e45447-581f-4692-9cb9-1ef4b7d18144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.909 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/aab720ee-c62f-473f-a1c1-c350c6375dc0.pid.haproxy
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID aab720ee-c62f-473f-a1c1-c350c6375dc0
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:47:23 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:23.910 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'env', 'PROCESS_TAG=haproxy-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aab720ee-c62f-473f-a1c1-c350c6375dc0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:47:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.149 232432 DEBUG nova.compute.manager [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.158 232432 DEBUG oslo_concurrency.lockutils [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.158 232432 DEBUG oslo_concurrency.lockutils [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.158 232432 DEBUG oslo_concurrency.lockutils [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.159 232432 DEBUG nova.compute.manager [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.159 232432 WARNING nova.compute.manager [req-861ea239-7250-44a0-8ed4-8e02b0bd6e51 req-031efac3-c473-4443-af12-4923deae7c2f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state suspended and task_state resuming.
Nov 29 08:47:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:24 compute-2 ceph-mon[77138]: pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Nov 29 08:47:24 compute-2 podman[324807]: 2025-11-29 08:47:24.410560715 +0000 UTC m=+0.103103309 container create 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:47:24 compute-2 systemd[1]: Started libpod-conmon-8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a.scope.
Nov 29 08:47:24 compute-2 podman[324807]: 2025-11-29 08:47:24.382595695 +0000 UTC m=+0.075138319 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.474 232432 DEBUG nova.virt.libvirt.host [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Removed pending event for 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.474 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406044.4735115, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.474 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Started (Lifecycle Event)
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.497 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:24 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.512 232432 DEBUG nova.compute.manager [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.513 232432 DEBUG nova.objects.instance [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.517 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:47:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70a4902446335d17200f1c2f61e6c9f3f2f8bf225b96bc159f22da50fd9bf6e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.535 232432 INFO nova.virt.libvirt.driver [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance running successfully.
Nov 29 08:47:24 compute-2 virtqemud[231977]: argument unsupported: QEMU guest agent is not configured
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.538 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.538 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406044.4807405, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.538 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Resumed (Lifecycle Event)
Nov 29 08:47:24 compute-2 podman[324807]: 2025-11-29 08:47:24.539854529 +0000 UTC m=+0.232397153 container init 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.542 232432 DEBUG nova.virt.libvirt.guest [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.542 232432 DEBUG nova.compute.manager [None req-4dfdfb8b-7e8b-4906-8076-9300e6217372 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:24 compute-2 podman[324807]: 2025-11-29 08:47:24.54821297 +0000 UTC m=+0.240755574 container start 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.561 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.564 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:47:24 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [NOTICE]   (324826) : New worker (324828) forked
Nov 29 08:47:24 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [NOTICE]   (324826) : Loading success.
Nov 29 08:47:24 compute-2 nova_compute[232428]: 2025-11-29 08:47:24.609 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 08:47:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:24.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.000 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.018 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.018 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.018 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.018 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.262 232432 DEBUG nova.compute.manager [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.262 232432 DEBUG oslo_concurrency.lockutils [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:26 compute-2 ceph-mon[77138]: pgmap v3348: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.263 232432 DEBUG oslo_concurrency.lockutils [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.263 232432 DEBUG oslo_concurrency.lockutils [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.264 232432 DEBUG nova.compute.manager [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:26 compute-2 nova_compute[232428]: 2025-11-29 08:47:26.264 232432 WARNING nova.compute.manager [req-d8a58306-e9dc-4cec-bd8d-13bb0e3c8214 req-03241768-8cfb-441b-be81-afd77e97f465 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state active and task_state None.
Nov 29 08:47:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:27 compute-2 nova_compute[232428]: 2025-11-29 08:47:27.064 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:27 compute-2 nova_compute[232428]: 2025-11-29 08:47:27.275 232432 INFO nova.compute.manager [None req-449fe508-7a34-4b6c-b534-300ff5ad8fde 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Get console output
Nov 29 08:47:27 compute-2 nova_compute[232428]: 2025-11-29 08:47:27.282 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:47:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3330788449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:27 compute-2 nova_compute[232428]: 2025-11-29 08:47:27.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:27 compute-2 sudo[324838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:27 compute-2 sudo[324838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:27 compute-2 sudo[324838]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:27 compute-2 sudo[324863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:47:27 compute-2 sudo[324863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:27 compute-2 sudo[324863]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:47:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1502716317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:47:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:47:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1502716317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:47:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:28 compute-2 ceph-mon[77138]: pgmap v3349: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 08:47:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:47:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1502716317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1502716317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:47:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2508676362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.657 232432 DEBUG nova.compute.manager [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-changed-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.657 232432 DEBUG nova.compute.manager [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing instance network info cache due to event network-changed-5526816d-0012-44bf-b42e-02c86af3be28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.657 232432 DEBUG oslo_concurrency.lockutils [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.658 232432 DEBUG oslo_concurrency.lockutils [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.658 232432 DEBUG nova.network.neutron [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Refreshing network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.728 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.729 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.730 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.730 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.731 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.733 232432 INFO nova.compute.manager [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Terminating instance
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.734 232432 DEBUG nova.compute.manager [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:47:28 compute-2 kernel: tap5526816d-00 (unregistering): left promiscuous mode
Nov 29 08:47:28 compute-2 NetworkManager[48993]: <info>  [1764406048.7815] device (tap5526816d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:47:28 compute-2 ovn_controller[134375]: 2025-11-29T08:47:28Z|00914|binding|INFO|Releasing lport 5526816d-0012-44bf-b42e-02c86af3be28 from this chassis (sb_readonly=0)
Nov 29 08:47:28 compute-2 ovn_controller[134375]: 2025-11-29T08:47:28Z|00915|binding|INFO|Setting lport 5526816d-0012-44bf-b42e-02c86af3be28 down in Southbound
Nov 29 08:47:28 compute-2 ovn_controller[134375]: 2025-11-29T08:47:28Z|00916|binding|INFO|Removing iface tap5526816d-00 ovn-installed in OVS
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.792 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:28.797 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:89:f6 10.100.0.9'], port_security=['fa:16:3e:d6:89:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd43b1520-d847-4774-a28f-9922d649e636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f5a9c50-8742-4c67-ad6b-d9e1980cf3cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=5526816d-0012-44bf-b42e-02c86af3be28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:28.799 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 5526816d-0012-44bf-b42e-02c86af3be28 in datapath aab720ee-c62f-473f-a1c1-c350c6375dc0 unbound from our chassis
Nov 29 08:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:28.800 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aab720ee-c62f-473f-a1c1-c350c6375dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:28.801 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8972ec-b099-4d84-82dd-0fba20d9339d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:28 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:28.802 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 namespace which is not needed anymore
Nov 29 08:47:28 compute-2 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Nov 29 08:47:28 compute-2 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000c2.scope: Consumed 1.060s CPU time.
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.827 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:28 compute-2 systemd-machined[194747]: Machine qemu-95-instance-000000c2 terminated.
Nov 29 08:47:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:28 compute-2 podman[324889]: 2025-11-29 08:47:28.915919552 +0000 UTC m=+0.104300437 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:47:28 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [NOTICE]   (324826) : haproxy version is 2.8.14-c23fe91
Nov 29 08:47:28 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [NOTICE]   (324826) : path to executable is /usr/sbin/haproxy
Nov 29 08:47:28 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [WARNING]  (324826) : Exiting Master process...
Nov 29 08:47:28 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [ALERT]    (324826) : Current worker (324828) exited with code 143 (Terminated)
Nov 29 08:47:28 compute-2 neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0[324822]: [WARNING]  (324826) : All workers exited. Exiting... (0)
Nov 29 08:47:28 compute-2 systemd[1]: libpod-8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a.scope: Deactivated successfully.
Nov 29 08:47:28 compute-2 podman[324930]: 2025-11-29 08:47:28.931265229 +0000 UTC m=+0.045922100 container died 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:47:28 compute-2 systemd[1]: var-lib-containers-storage-overlay-c70a4902446335d17200f1c2f61e6c9f3f2f8bf225b96bc159f22da50fd9bf6e-merged.mount: Deactivated successfully.
Nov 29 08:47:28 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.965 232432 DEBUG nova.compute.manager [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.965 232432 DEBUG oslo_concurrency.lockutils [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:28 compute-2 podman[324930]: 2025-11-29 08:47:28.966202267 +0000 UTC m=+0.080859138 container cleanup 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.966 232432 DEBUG oslo_concurrency.lockutils [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.966 232432 DEBUG oslo_concurrency.lockutils [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.966 232432 DEBUG nova.compute.manager [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.966 232432 DEBUG nova.compute.manager [req-a147de4b-6c4a-431f-b75c-222ebea3746c req-e32507b6-b6d4-4bf6-93a5-02fa0f3afc78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-unplugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:47:28 compute-2 systemd[1]: libpod-conmon-8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a.scope: Deactivated successfully.
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.980 232432 INFO nova.virt.libvirt.driver [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Instance destroyed successfully.
Nov 29 08:47:28 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.980 232432 DEBUG nova.objects.instance [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:28.999 232432 DEBUG nova.virt.libvirt.vif [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1075941855',display_name='tempest-TestNetworkAdvancedServerOps-server-1075941855',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1075941855',id=194,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGE5pn+I70oZmHuiim1TUtpKIGR7Kqkz86qivG/cMWSpTnsyRPWZiJEZtpdA3p62tfZxizy0V57ahkV6w9stXKLiRpJWXZOS5iNHoB3QHRD9m7s70TW3AQX73YlXMmf2hg==',key_name='tempest-TestNetworkAdvancedServerOps-553049570',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-pr4xow81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:47:24Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.000 232432 DEBUG nova.network.os_vif_util [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.000 232432 DEBUG nova.network.os_vif_util [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.001 232432 DEBUG os_vif [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.003 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.003 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5526816d-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.005 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.008 232432 INFO os_vif [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:89:f6,bridge_name='br-int',has_traffic_filtering=True,id=5526816d-0012-44bf-b42e-02c86af3be28,network=Network(aab720ee-c62f-473f-a1c1-c350c6375dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5526816d-00')
Nov 29 08:47:29 compute-2 podman[324976]: 2025-11-29 08:47:29.068983415 +0000 UTC m=+0.064934942 container remove 8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.074 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7d46448f-0e15-4e80-bd04-e6dc66d0016c]: (4, ('Sat Nov 29 08:47:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 (8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a)\n8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a\nSat Nov 29 08:47:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 (8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a)\n8240fea2da0b874232444e98ec1aceb183ad737e4f4d6c369160d0f03b74431a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.076 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0a563c-36f2-40fa-acbc-b898b169ae60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.078 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaab720ee-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:47:29 compute-2 kernel: tapaab720ee-c0: left promiscuous mode
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.081 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.093 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.096 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f65a4-066b-4ac6-810d-0845b6ede541]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.108 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a70a4642-2961-49f9-a710-88d8a30e5ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.108 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5507be-8eda-4f14-b497-528abae248fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.123 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[359392ed-1e48-4e84-a81c-e89e3f764d40]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900983, 'reachable_time': 23908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325010, 'error': None, 'target': 'ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 systemd[1]: run-netns-ovnmeta\x2daab720ee\x2dc62f\x2d473f\x2da1c1\x2dc350c6375dc0.mount: Deactivated successfully.
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.129 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aab720ee-c62f-473f-a1c1-c350c6375dc0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:47:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:47:29.130 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[15494c14-6109-4ffc-a04c-af42f1f7a16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2449081529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.498 232432 INFO nova.virt.libvirt.driver [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Deleting instance files /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_del
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.499 232432 INFO nova.virt.libvirt.driver [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Deletion of /var/lib/nova/instances/16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4_del complete
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.612 232432 INFO nova.compute.manager [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.613 232432 DEBUG oslo.service.loopingcall [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.613 232432 DEBUG nova.compute.manager [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:47:29 compute-2 nova_compute[232428]: 2025-11-29 08:47:29.614 232432 DEBUG nova.network.neutron [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:47:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.226 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.227 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.227 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:47:30 compute-2 ceph-mon[77138]: pgmap v3350: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 455 KiB/s wr, 16 op/s
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.412 232432 DEBUG nova.network.neutron [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updated VIF entry in instance network info cache for port 5526816d-0012-44bf-b42e-02c86af3be28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.414 232432 DEBUG nova.network.neutron [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [{"id": "5526816d-0012-44bf-b42e-02c86af3be28", "address": "fa:16:3e:d6:89:f6", "network": {"id": "aab720ee-c62f-473f-a1c1-c350c6375dc0", "bridge": "br-int", "label": "tempest-network-smoke--1695428245", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5526816d-00", "ovs_interfaceid": "5526816d-0012-44bf-b42e-02c86af3be28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.469 232432 DEBUG oslo_concurrency.lockutils [req-c092156f-0b66-4899-a5e9-02e115bd3ff6 req-4a88761d-145c-47ef-863d-1b70d24bdff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.589 232432 DEBUG nova.network.neutron [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.612 232432 INFO nova.compute.manager [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Took 1.00 seconds to deallocate network for instance.
Nov 29 08:47:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:47:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2669232615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.645 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.668 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.669 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.735 232432 DEBUG oslo_concurrency.processutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:47:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:30.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.925 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.928 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4164MB free_disk=20.93756866455078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:47:30 compute-2 nova_compute[232428]: 2025-11-29 08:47:30.928 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.099 232432 DEBUG nova.compute.manager [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.100 232432 DEBUG oslo_concurrency.lockutils [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.101 232432 DEBUG oslo_concurrency.lockutils [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.101 232432 DEBUG oslo_concurrency.lockutils [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.101 232432 DEBUG nova.compute.manager [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] No waiting events found dispatching network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.102 232432 WARNING nova.compute.manager [req-4ac06aa5-d2c4-4700-8dbe-6a52b5acc142 req-db61a7fa-5799-452c-98ba-9851a188e054 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received unexpected event network-vif-plugged-5526816d-0012-44bf-b42e-02c86af3be28 for instance with vm_state deleted and task_state None.
Nov 29 08:47:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:47:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1081212943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.184 232432 DEBUG oslo_concurrency.processutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.194 232432 DEBUG nova.compute.provider_tree [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.215 232432 DEBUG nova.scheduler.client.report [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.244 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.248 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.274 232432 INFO nova.scheduler.client.report [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.360 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.361 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.406 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:47:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2669232615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1081212943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.454 232432 DEBUG oslo_concurrency.lockutils [None req-ff8bcc3a-45e9-4b82-906c-7643547af624 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:31 compute-2 sudo[325078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:31 compute-2 sudo[325078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:31 compute-2 sudo[325078]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:47:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3266942042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.859 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.865 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:47:31 compute-2 sudo[325103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:31 compute-2 sudo[325103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.881 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:47:31 compute-2 sudo[325103]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.917 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:47:31 compute-2 nova_compute[232428]: 2025-11-29 08:47:31.917 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:47:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:32 compute-2 nova_compute[232428]: 2025-11-29 08:47:32.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:32 compute-2 ceph-mon[77138]: pgmap v3351: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 29 08:47:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3266942042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1619977189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:47:32 compute-2 nova_compute[232428]: 2025-11-29 08:47:32.441 232432 DEBUG nova.compute.manager [req-dfa9d5be-f58a-460a-ba3c-6e7cabf443e0 req-62d92cf7-0772-44e2-9bab-2d8b355ada69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Received event network-vif-deleted-5526816d-0012-44bf-b42e-02c86af3be28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:47:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:32 compute-2 nova_compute[232428]: 2025-11-29 08:47:32.907 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/654561874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:47:34 compute-2 nova_compute[232428]: 2025-11-29 08:47:34.007 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:34 compute-2 nova_compute[232428]: 2025-11-29 08:47:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:47:34 compute-2 nova_compute[232428]: 2025-11-29 08:47:34.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:47:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:34 compute-2 ceph-mon[77138]: pgmap v3352: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 08:47:34 compute-2 nova_compute[232428]: 2025-11-29 08:47:34.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:34 compute-2 nova_compute[232428]: 2025-11-29 08:47:34.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/954613710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:36.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:36 compute-2 ceph-mon[77138]: pgmap v3353: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Nov 29 08:47:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3787107875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:47:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:37 compute-2 nova_compute[232428]: 2025-11-29 08:47:37.366 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:38 compute-2 ceph-mon[77138]: pgmap v3354: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 29 08:47:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:38.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:39 compute-2 nova_compute[232428]: 2025-11-29 08:47:39.011 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:40 compute-2 ceph-mon[77138]: pgmap v3355: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 29 08:47:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:41 compute-2 podman[325137]: 2025-11-29 08:47:41.692530512 +0000 UTC m=+0.088310069 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 08:47:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:42 compute-2 nova_compute[232428]: 2025-11-29 08:47:42.369 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:42 compute-2 ceph-mon[77138]: pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 116 op/s
Nov 29 08:47:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:43 compute-2 nova_compute[232428]: 2025-11-29 08:47:43.978 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406048.9771702, 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:47:43 compute-2 nova_compute[232428]: 2025-11-29 08:47:43.978 232432 INFO nova.compute.manager [-] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] VM Stopped (Lifecycle Event)
Nov 29 08:47:44 compute-2 nova_compute[232428]: 2025-11-29 08:47:44.013 232432 DEBUG nova.compute.manager [None req-b8458398-ba86-429e-8472-8c6bd2be3ab4 - - - - - -] [instance: 16f8bd5e-c7d7-4f0e-a6c0-1f92c36d92e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:47:44 compute-2 nova_compute[232428]: 2025-11-29 08:47:44.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:44 compute-2 ceph-mon[77138]: pgmap v3357: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 08:47:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:46.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:46 compute-2 ceph-mon[77138]: pgmap v3358: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Nov 29 08:47:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:47 compute-2 nova_compute[232428]: 2025-11-29 08:47:47.371 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:48.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:48 compute-2 ceph-mon[77138]: pgmap v3359: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 08:47:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:48.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:49 compute-2 nova_compute[232428]: 2025-11-29 08:47:49.018 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:49 compute-2 podman[325160]: 2025-11-29 08:47:49.704671153 +0000 UTC m=+0.099951681 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 08:47:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:50 compute-2 ceph-mon[77138]: pgmap v3360: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 623 KiB/s wr, 84 op/s
Nov 29 08:47:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:47:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:47:51 compute-2 sudo[325184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:51 compute-2 sudo[325184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:52 compute-2 sudo[325184]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:52 compute-2 sudo[325209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:47:52 compute-2 sudo[325209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:47:52 compute-2 sudo[325209]: pam_unix(sudo:session): session closed for user root
Nov 29 08:47:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:52 compute-2 sshd-session[325182]: Invalid user solana from 45.148.10.240 port 47660
Nov 29 08:47:52 compute-2 sshd-session[325182]: Connection closed by invalid user solana 45.148.10.240 port 47660 [preauth]
Nov 29 08:47:52 compute-2 nova_compute[232428]: 2025-11-29 08:47:52.374 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:53 compute-2 ceph-mon[77138]: pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 08:47:54 compute-2 nova_compute[232428]: 2025-11-29 08:47:54.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:47:55 compute-2 ceph-mon[77138]: pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:47:56 compute-2 ceph-mon[77138]: pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:47:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:57 compute-2 nova_compute[232428]: 2025-11-29 08:47:57.376 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:47:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:47:58 compute-2 ceph-mon[77138]: pgmap v3364: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:47:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:47:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:47:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:47:59 compute-2 nova_compute[232428]: 2025-11-29 08:47:59.026 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:47:59 compute-2 podman[325239]: 2025-11-29 08:47:59.736821431 +0000 UTC m=+0.129828122 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:48:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:00 compute-2 ceph-mon[77138]: pgmap v3365: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 29 08:48:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:00.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.424 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:01.423 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:48:01 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:01.426 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.823 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.824 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.850 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.947 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.947 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.956 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:48:01 compute-2 nova_compute[232428]: 2025-11-29 08:48:01.957 232432 INFO nova.compute.claims [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.089 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:48:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 08:48:02 compute-2 ceph-mon[77138]: pgmap v3366: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.380 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:48:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2359041878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.617 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.627 232432 DEBUG nova.compute.provider_tree [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.652 232432 DEBUG nova.scheduler.client.report [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.681 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.682 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.725 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.726 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.768 232432 INFO nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.784 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.870 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.871 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.872 232432 INFO nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Creating image(s)
Nov 29 08:48:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.906 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.955 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.993 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:02 compute-2 nova_compute[232428]: 2025-11-29 08:48:02.998 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.037 232432 DEBUG nova.policy [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.072 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.073 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.073 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.074 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.117 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.122 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 92d81698-33ac-48a9-81bb-01d007be477e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:03 compute-2 ceph-mon[77138]: osdmap e411: 3 total, 3 up, 3 in
Nov 29 08:48:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2359041878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 08:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:03.352 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:03.352 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:03.352 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.470 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 92d81698-33ac-48a9-81bb-01d007be477e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.596 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.758 232432 DEBUG nova.objects.instance [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 92d81698-33ac-48a9-81bb-01d007be477e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.790 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.790 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Ensure instance console log exists: /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.791 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.792 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:03 compute-2 nova_compute[232428]: 2025-11-29 08:48:03.792 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:04 compute-2 nova_compute[232428]: 2025-11-29 08:48:04.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:04 compute-2 nova_compute[232428]: 2025-11-29 08:48:04.123 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Successfully created port: 66bfa039-be6e-4b2e-aeb6-64d238c6f483 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:48:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:04 compute-2 ceph-mon[77138]: pgmap v3368: 305 pgs: 305 active+clean; 222 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 20 op/s
Nov 29 08:48:04 compute-2 ceph-mon[77138]: osdmap e412: 3 total, 3 up, 3 in
Nov 29 08:48:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 08:48:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:05 compute-2 ceph-mon[77138]: osdmap e413: 3 total, 3 up, 3 in
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.612 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Successfully updated port: 66bfa039-be6e-4b2e-aeb6-64d238c6f483 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.643 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.644 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.644 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.742 232432 DEBUG nova.compute.manager [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.742 232432 DEBUG nova.compute.manager [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing instance network info cache due to event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.743 232432 DEBUG oslo_concurrency.lockutils [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:48:05 compute-2 nova_compute[232428]: 2025-11-29 08:48:05.846 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:48:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:06.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:06 compute-2 ceph-mon[77138]: pgmap v3371: 305 pgs: 305 active+clean; 319 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 180 op/s
Nov 29 08:48:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.059 232432 DEBUG nova.network.neutron [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.107 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.108 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance network_info: |[{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.108 232432 DEBUG oslo_concurrency.lockutils [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.108 232432 DEBUG nova.network.neutron [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.112 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Start _get_guest_xml network_info=[{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.119 232432 WARNING nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.128 232432 DEBUG nova.virt.libvirt.host [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.129 232432 DEBUG nova.virt.libvirt.host [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.135 232432 DEBUG nova.virt.libvirt.host [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.136 232432 DEBUG nova.virt.libvirt.host [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.138 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.139 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.140 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.140 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.141 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.142 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.142 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.143 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.143 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.143 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.144 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.144 232432 DEBUG nova.virt.hardware [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.150 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.225 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.226 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.251 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.387 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:48:07 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1431972653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.636 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.665 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:07 compute-2 nova_compute[232428]: 2025-11-29 08:48:07.669 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:48:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1432911398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.087 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.091 232432 DEBUG nova.virt.libvirt.vif [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-522943302',display_name='tempest-TestNetworkBasicOps-server-522943302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-522943302',id=196,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGO25ExiI3lU7BDW/20DGqhiuv3n/5rkI+iBJvIFfiPlzkaiZ4VFWYX0IExiZpcQrUKyio66sZKRUeH3ZYQ/NiuBbck/cDknlwhc2+FPTK1S1ITiynoMKv44G5IHMaKE1w==',key_name='tempest-TestNetworkBasicOps-824741121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-gt7jkxoz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:02Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=92d81698-33ac-48a9-81bb-01d007be477e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.092 232432 DEBUG nova.network.os_vif_util [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.093 232432 DEBUG nova.network.os_vif_util [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.096 232432 DEBUG nova.objects.instance [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92d81698-33ac-48a9-81bb-01d007be477e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.121 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <uuid>92d81698-33ac-48a9-81bb-01d007be477e</uuid>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <name>instance-000000c4</name>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-522943302</nova:name>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:48:07</nova:creationTime>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <nova:port uuid="66bfa039-be6e-4b2e-aeb6-64d238c6f483">
Nov 29 08:48:08 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <system>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="serial">92d81698-33ac-48a9-81bb-01d007be477e</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="uuid">92d81698-33ac-48a9-81bb-01d007be477e</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </system>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <os>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </os>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <features>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </features>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/92d81698-33ac-48a9-81bb-01d007be477e_disk">
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/92d81698-33ac-48a9-81bb-01d007be477e_disk.config">
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </source>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:48:08 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:2e:8a:99"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <target dev="tap66bfa039-be"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/console.log" append="off"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <video>
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </video>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:48:08 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:48:08 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:48:08 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:48:08 compute-2 nova_compute[232428]: </domain>
Nov 29 08:48:08 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.125 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Preparing to wait for external event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.126 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.126 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.127 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.128 232432 DEBUG nova.virt.libvirt.vif [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-522943302',display_name='tempest-TestNetworkBasicOps-server-522943302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-522943302',id=196,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGO25ExiI3lU7BDW/20DGqhiuv3n/5rkI+iBJvIFfiPlzkaiZ4VFWYX0IExiZpcQrUKyio66sZKRUeH3ZYQ/NiuBbck/cDknlwhc2+FPTK1S1ITiynoMKv44G5IHMaKE1w==',key_name='tempest-TestNetworkBasicOps-824741121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-gt7jkxoz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:02Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=92d81698-33ac-48a9-81bb-01d007be477e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.129 232432 DEBUG nova.network.os_vif_util [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.130 232432 DEBUG nova.network.os_vif_util [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.131 232432 DEBUG os_vif [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.134 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.134 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:48:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.143 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.144 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66bfa039-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.145 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66bfa039-be, col_values=(('external_ids', {'iface-id': '66bfa039-be6e-4b2e-aeb6-64d238c6f483', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:8a:99', 'vm-uuid': '92d81698-33ac-48a9-81bb-01d007be477e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:08 compute-2 NetworkManager[48993]: <info>  [1764406088.1499] manager: (tap66bfa039-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.159 232432 INFO os_vif [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be')
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.242 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.242 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.243 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:2e:8a:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.243 232432 INFO nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Using config drive
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.288 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:08.428 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:08 compute-2 ceph-mon[77138]: pgmap v3372: 305 pgs: 305 active+clean; 319 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 173 op/s
Nov 29 08:48:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1431972653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1432911398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.696 232432 INFO nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Creating config drive at /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.705 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsmeacoy3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.796 232432 DEBUG nova.network.neutron [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updated VIF entry in instance network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.798 232432 DEBUG nova.network.neutron [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.831 232432 DEBUG oslo_concurrency.lockutils [req-a256363c-921a-4776-823c-8472e91452e0 req-7835b269-e8fe-4825-9fff-3374f94227d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.867 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsmeacoy3" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.914 232432 DEBUG nova.storage.rbd_utils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 92d81698-33ac-48a9-81bb-01d007be477e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:48:08 compute-2 nova_compute[232428]: 2025-11-29 08:48:08.919 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config 92d81698-33ac-48a9-81bb-01d007be477e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.242 232432 DEBUG oslo_concurrency.processutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config 92d81698-33ac-48a9-81bb-01d007be477e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.243 232432 INFO nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Deleting local config drive /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e/disk.config because it was imported into RBD.
Nov 29 08:48:09 compute-2 kernel: tap66bfa039-be: entered promiscuous mode
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_controller[134375]: 2025-11-29T08:48:09Z|00917|binding|INFO|Claiming lport 66bfa039-be6e-4b2e-aeb6-64d238c6f483 for this chassis.
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.3052] manager: (tap66bfa039-be): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Nov 29 08:48:09 compute-2 ovn_controller[134375]: 2025-11-29T08:48:09Z|00918|binding|INFO|66bfa039-be6e-4b2e-aeb6-64d238c6f483: Claiming fa:16:3e:2e:8a:99 10.100.0.14
Nov 29 08:48:09 compute-2 systemd-udevd[325592]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.328 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:8a:99 10.100.0.14'], port_security=['fa:16:3e:2e:8a:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '92d81698-33ac-48a9-81bb-01d007be477e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df4af3dd-c97e-4e77-a104-030ea689b036', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e7d1c11-8bfa-4faa-9599-e1a280f61b9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55afe2d8-6a14-4ba5-9e8a-7420cfaa2dfd, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=66bfa039-be6e-4b2e-aeb6-64d238c6f483) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.331 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 66bfa039-be6e-4b2e-aeb6-64d238c6f483 in datapath df4af3dd-c97e-4e77-a104-030ea689b036 bound to our chassis
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.333 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df4af3dd-c97e-4e77-a104-030ea689b036
Nov 29 08:48:09 compute-2 systemd-machined[194747]: New machine qemu-96-instance-000000c4.
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.346 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[98f11a7c-8120-413d-952f-d6d4a81507a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.347 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf4af3dd-c1 in ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.3497] device (tap66bfa039-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.349 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf4af3dd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.349 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7c429297-5a31-4efb-9412-fe4b746f38e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.351 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0817292e-91b5-4ef0-8dd8-e03eb9dd5f92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.3522] device (tap66bfa039-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.373 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[97fafaae-97b2-4419-a362-fa8db670c406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 systemd[1]: Started Virtual Machine qemu-96-instance-000000c4.
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.395 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.398 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa59d10-c4fd-4371-9299-b30adfd310c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.401 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_controller[134375]: 2025-11-29T08:48:09Z|00919|binding|INFO|Setting lport 66bfa039-be6e-4b2e-aeb6-64d238c6f483 ovn-installed in OVS
Nov 29 08:48:09 compute-2 ovn_controller[134375]: 2025-11-29T08:48:09Z|00920|binding|INFO|Setting lport 66bfa039-be6e-4b2e-aeb6-64d238c6f483 up in Southbound
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.406 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.435 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c2eed9-e2e8-470b-b802-4c5b455496b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.441 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9adfaa22-36a2-46f0-b0b1-042634a247e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.4420] manager: (tapdf4af3dd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Nov 29 08:48:09 compute-2 systemd-udevd[325597]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.480 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd5bdd4-81a8-4bf6-9d40-432d8a0bcde3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.484 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8ca0b9-356a-4147-8a3c-28532fd422b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.5233] device (tapdf4af3dd-c0): carrier: link connected
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.527 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[71f9a9b1-0041-4539-84b9-8d59ed0d9872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.551 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[daf2a557-972b-410d-849e-6f5cf24ed72c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf4af3dd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:fa:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905579, 'reachable_time': 43789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325627, 'error': None, 'target': 'ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.574 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7afda065-ce27-48b3-a57e-17c48077b7f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:faa4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 905579, 'tstamp': 905579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325628, 'error': None, 'target': 'ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.602 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3412609a-23f4-44dc-8dd7-34cba887d923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf4af3dd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:fa:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905579, 'reachable_time': 43789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325629, 'error': None, 'target': 'ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.655 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[49ce12d1-4d6c-47d0-95db-e85878d30f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.731 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[971a9d35-456b-4e97-a07e-5ddf68117f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.732 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf4af3dd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.732 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.733 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf4af3dd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.735 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 NetworkManager[48993]: <info>  [1764406089.7360] manager: (tapdf4af3dd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Nov 29 08:48:09 compute-2 kernel: tapdf4af3dd-c0: entered promiscuous mode
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.739 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.739 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf4af3dd-c0, col_values=(('external_ids', {'iface-id': '3b344a9a-e72c-41f0-8603-26e403886af8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.741 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_controller[134375]: 2025-11-29T08:48:09Z|00921|binding|INFO|Releasing lport 3b344a9a-e72c-41f0-8603-26e403886af8 from this chassis (sb_readonly=0)
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.757 232432 DEBUG nova.compute.manager [req-331694da-bfa9-4a55-85d8-e7b035f2fc99 req-cf6b2173-15e3-4ea2-bb81-0a2e58f26aca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.758 232432 DEBUG oslo_concurrency.lockutils [req-331694da-bfa9-4a55-85d8-e7b035f2fc99 req-cf6b2173-15e3-4ea2-bb81-0a2e58f26aca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.758 232432 DEBUG oslo_concurrency.lockutils [req-331694da-bfa9-4a55-85d8-e7b035f2fc99 req-cf6b2173-15e3-4ea2-bb81-0a2e58f26aca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.758 232432 DEBUG oslo_concurrency.lockutils [req-331694da-bfa9-4a55-85d8-e7b035f2fc99 req-cf6b2173-15e3-4ea2-bb81-0a2e58f26aca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.758 232432 DEBUG nova.compute.manager [req-331694da-bfa9-4a55-85d8-e7b035f2fc99 req-cf6b2173-15e3-4ea2-bb81-0a2e58f26aca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Processing event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:48:09 compute-2 nova_compute[232428]: 2025-11-29 08:48:09.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.766 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df4af3dd-c97e-4e77-a104-030ea689b036.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df4af3dd-c97e-4e77-a104-030ea689b036.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.767 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[529948e7-57a4-4f30-8472-4bb4e156e654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.768 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-df4af3dd-c97e-4e77-a104-030ea689b036
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/df4af3dd-c97e-4e77-a104-030ea689b036.pid.haproxy
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID df4af3dd-c97e-4e77-a104-030ea689b036
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:48:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:09.768 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036', 'env', 'PROCESS_TAG=haproxy-df4af3dd-c97e-4e77-a104-030ea689b036', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df4af3dd-c97e-4e77-a104-030ea689b036.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:48:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.177 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.177 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406090.176526, 92d81698-33ac-48a9-81bb-01d007be477e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.178 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] VM Started (Lifecycle Event)
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.181 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.185 232432 INFO nova.virt.libvirt.driver [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance spawned successfully.
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.186 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:48:10 compute-2 podman[325701]: 2025-11-29 08:48:10.113822711 +0000 UTC m=+0.026880348 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.209 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.213 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.228 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.229 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.229 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.229 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.230 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.230 232432 DEBUG nova.virt.libvirt.driver [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.272 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.272 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406090.1775582, 92d81698-33ac-48a9-81bb-01d007be477e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.273 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] VM Paused (Lifecycle Event)
Nov 29 08:48:10 compute-2 podman[325701]: 2025-11-29 08:48:10.283083929 +0000 UTC m=+0.196141556 container create 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.316 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.322 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406090.1801622, 92d81698-33ac-48a9-81bb-01d007be477e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.322 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] VM Resumed (Lifecycle Event)
Nov 29 08:48:10 compute-2 systemd[1]: Started libpod-conmon-226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4.scope.
Nov 29 08:48:10 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.402 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:48:10 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd06de9f9c0adac7f293213f763f5834cb57120f080208d37f9797906c3aab33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.416 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:48:10 compute-2 podman[325701]: 2025-11-29 08:48:10.472001338 +0000 UTC m=+0.385059015 container init 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:48:10 compute-2 podman[325701]: 2025-11-29 08:48:10.479931604 +0000 UTC m=+0.392989241 container start 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.488 232432 INFO nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Took 7.62 seconds to spawn the instance on the hypervisor.
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.489 232432 DEBUG nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.504 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:48:10 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [NOTICE]   (325724) : New worker (325726) forked
Nov 29 08:48:10 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [NOTICE]   (325724) : Loading success.
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.565 232432 INFO nova.compute.manager [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Took 8.65 seconds to build instance.
Nov 29 08:48:10 compute-2 ceph-mon[77138]: pgmap v3373: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 9.9 MiB/s wr, 193 op/s
Nov 29 08:48:10 compute-2 nova_compute[232428]: 2025-11-29 08:48:10.602 232432 DEBUG oslo_concurrency.lockutils [None req-d2ef5f2e-2188-4f42-871d-a8805ad3261e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2815288591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.893 232432 DEBUG nova.compute.manager [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.895 232432 DEBUG oslo_concurrency.lockutils [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.896 232432 DEBUG oslo_concurrency.lockutils [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.896 232432 DEBUG oslo_concurrency.lockutils [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.897 232432 DEBUG nova.compute.manager [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] No waiting events found dispatching network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:48:11 compute-2 nova_compute[232428]: 2025-11-29 08:48:11.898 232432 WARNING nova.compute.manager [req-4ba3dce0-3e28-4c67-abb9-c1d07bfbad86 req-210a9728-0856-4d0e-ab15-7cfb407c9d32 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received unexpected event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 for instance with vm_state active and task_state None.
Nov 29 08:48:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 08:48:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:12.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:12 compute-2 sudo[325736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:12 compute-2 sudo[325736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:12 compute-2 sudo[325736]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:12 compute-2 sudo[325767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:12 compute-2 sudo[325767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:12 compute-2 sudo[325767]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:12 compute-2 podman[325760]: 2025-11-29 08:48:12.355375731 +0000 UTC m=+0.093102198 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:48:12 compute-2 nova_compute[232428]: 2025-11-29 08:48:12.388 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:12 compute-2 ceph-mon[77138]: pgmap v3374: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.1 MiB/s wr, 165 op/s
Nov 29 08:48:12 compute-2 ceph-mon[77138]: osdmap e414: 3 total, 3 up, 3 in
Nov 29 08:48:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:13 compute-2 nova_compute[232428]: 2025-11-29 08:48:13.148 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:14.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:14 compute-2 ceph-mon[77138]: pgmap v3376: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.5 MiB/s wr, 162 op/s
Nov 29 08:48:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:15 compute-2 NetworkManager[48993]: <info>  [1764406095.5345] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Nov 29 08:48:15 compute-2 NetworkManager[48993]: <info>  [1764406095.5352] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Nov 29 08:48:15 compute-2 nova_compute[232428]: 2025-11-29 08:48:15.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:15 compute-2 nova_compute[232428]: 2025-11-29 08:48:15.650 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:15 compute-2 ovn_controller[134375]: 2025-11-29T08:48:15Z|00922|binding|INFO|Releasing lport 3b344a9a-e72c-41f0-8603-26e403886af8 from this chassis (sb_readonly=0)
Nov 29 08:48:15 compute-2 nova_compute[232428]: 2025-11-29 08:48:15.667 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:15 compute-2 ceph-mon[77138]: pgmap v3377: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 181 KiB/s wr, 153 op/s
Nov 29 08:48:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.229 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.381 232432 DEBUG nova.compute.manager [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.381 232432 DEBUG nova.compute.manager [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing instance network info cache due to event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.382 232432 DEBUG oslo_concurrency.lockutils [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.382 232432 DEBUG oslo_concurrency.lockutils [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:48:16 compute-2 nova_compute[232428]: 2025-11-29 08:48:16.382 232432 DEBUG nova.network.neutron [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:48:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:16.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:17 compute-2 nova_compute[232428]: 2025-11-29 08:48:17.394 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:18 compute-2 nova_compute[232428]: 2025-11-29 08:48:18.045 232432 DEBUG nova.network.neutron [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updated VIF entry in instance network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:48:18 compute-2 nova_compute[232428]: 2025-11-29 08:48:18.046 232432 DEBUG nova.network.neutron [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:48:18 compute-2 nova_compute[232428]: 2025-11-29 08:48:18.077 232432 DEBUG oslo_concurrency.lockutils [req-7d45b204-4b57-4921-9201-96484016335a req-8e00960d-c546-433d-a4eb-ef0a06095087 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:48:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:18 compute-2 nova_compute[232428]: 2025-11-29 08:48:18.153 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:18 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4206582262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:19 compute-2 ceph-mon[77138]: pgmap v3378: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 181 KiB/s wr, 153 op/s
Nov 29 08:48:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:20.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:20 compute-2 ceph-mon[77138]: pgmap v3379: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 123 op/s
Nov 29 08:48:20 compute-2 podman[325808]: 2025-11-29 08:48:20.75041362 +0000 UTC m=+0.102967426 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:48:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:20.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:21 compute-2 nova_compute[232428]: 2025-11-29 08:48:21.579 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:21 compute-2 nova_compute[232428]: 2025-11-29 08:48:21.596 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Triggering sync for uuid 92d81698-33ac-48a9-81bb-01d007be477e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:48:21 compute-2 nova_compute[232428]: 2025-11-29 08:48:21.596 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:21 compute-2 nova_compute[232428]: 2025-11-29 08:48:21.596 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "92d81698-33ac-48a9-81bb-01d007be477e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:21 compute-2 nova_compute[232428]: 2025-11-29 08:48:21.615 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "92d81698-33ac-48a9-81bb-01d007be477e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:22.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.219 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.219 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.397 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:22 compute-2 ceph-mon[77138]: pgmap v3380: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 110 op/s
Nov 29 08:48:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/69639738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2507809565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.861 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.861 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.862 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:48:22 compute-2 nova_compute[232428]: 2025-11-29 08:48:22.862 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 92d81698-33ac-48a9-81bb-01d007be477e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:48:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:23 compute-2 nova_compute[232428]: 2025-11-29 08:48:23.155 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:24.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:24 compute-2 ovn_controller[134375]: 2025-11-29T08:48:24Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:8a:99 10.100.0.14
Nov 29 08:48:24 compute-2 ovn_controller[134375]: 2025-11-29T08:48:24Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:8a:99 10.100.0.14
Nov 29 08:48:24 compute-2 ceph-mon[77138]: pgmap v3381: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Nov 29 08:48:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.204 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.219 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.219 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:25 compute-2 nova_compute[232428]: 2025-11-29 08:48:25.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 08:48:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:26.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:26 compute-2 nova_compute[232428]: 2025-11-29 08:48:26.193 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:26 compute-2 ceph-mon[77138]: pgmap v3382: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 223 op/s
Nov 29 08:48:26 compute-2 ceph-mon[77138]: osdmap e415: 3 total, 3 up, 3 in
Nov 29 08:48:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:27 compute-2 nova_compute[232428]: 2025-11-29 08:48:27.399 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:27 compute-2 ceph-mon[77138]: pgmap v3384: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 169 op/s
Nov 29 08:48:28 compute-2 sudo[325834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:28 compute-2 sudo[325834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:28 compute-2 sudo[325834]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:28 compute-2 sudo[325859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:48:28 compute-2 sudo[325859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:28 compute-2 sudo[325859]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:28 compute-2 nova_compute[232428]: 2025-11-29 08:48:28.158 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:48:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982172606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:48:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:48:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982172606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:48:28 compute-2 sudo[325884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:28 compute-2 sudo[325884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:28 compute-2 sudo[325884]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:28 compute-2 sudo[325909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:48:28 compute-2 sudo[325909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1982172606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:48:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1982172606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:48:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:28.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:28 compute-2 sudo[325909]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1121584574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:29 compute-2 ceph-mon[77138]: pgmap v3385: 305 pgs: 305 active+clean; 336 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.2 MiB/s wr, 212 op/s
Nov 29 08:48:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:30 compute-2 nova_compute[232428]: 2025-11-29 08:48:30.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:30 compute-2 podman[325966]: 2025-11-29 08:48:30.751538721 +0000 UTC m=+0.149542705 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 08:48:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1577900064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:31 compute-2 nova_compute[232428]: 2025-11-29 08:48:31.146 232432 INFO nova.compute.manager [None req-8da9029a-b611-499c-bb7f-436d0f33d711 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Get console output
Nov 29 08:48:31 compute-2 nova_compute[232428]: 2025-11-29 08:48:31.153 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:48:32 compute-2 ceph-mon[77138]: pgmap v3386: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 280 op/s
Nov 29 08:48:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:32.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.239 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.240 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.241 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:32 compute-2 sudo[325995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:32 compute-2 sudo[325995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:32 compute-2 sudo[325995]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:32 compute-2 sudo[326039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:32 compute-2 sudo[326039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:32 compute-2 sudo[326039]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:48:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3636395337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.759 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.850 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:48:32 compute-2 nova_compute[232428]: 2025-11-29 08:48:32.850 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:48:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3636395337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.130 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.132 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3991MB free_disk=20.897113800048828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.132 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.133 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.160 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.232 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 92d81698-33ac-48a9-81bb-01d007be477e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.233 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.278 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:48:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:48:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2767646178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.743 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.750 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.771 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.799 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:48:33 compute-2 nova_compute[232428]: 2025-11-29 08:48:33.800 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:48:34 compute-2 ceph-mon[77138]: pgmap v3387: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 236 op/s
Nov 29 08:48:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2767646178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:34.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:35 compute-2 sudo[326090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:35 compute-2 sudo[326090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:35 compute-2 sudo[326090]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:35 compute-2 sudo[326115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:48:35 compute-2 sudo[326115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:35 compute-2 sudo[326115]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:35 compute-2 nova_compute[232428]: 2025-11-29 08:48:35.801 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:48:35 compute-2 nova_compute[232428]: 2025-11-29 08:48:35.802 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:48:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:48:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:48:36 compute-2 ceph-mon[77138]: pgmap v3388: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 112 op/s
Nov 29 08:48:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1461582722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:36.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 08:48:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1038626851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:37 compute-2 ceph-mon[77138]: osdmap e416: 3 total, 3 up, 3 in
Nov 29 08:48:37 compute-2 nova_compute[232428]: 2025-11-29 08:48:37.406 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:38 compute-2 ceph-mon[77138]: pgmap v3390: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 112 op/s
Nov 29 08:48:38 compute-2 nova_compute[232428]: 2025-11-29 08:48:38.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:40 compute-2 ceph-mon[77138]: pgmap v3391: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 20 KiB/s wr, 78 op/s
Nov 29 08:48:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:42 compute-2 ceph-mon[77138]: pgmap v3392: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 23 KiB/s wr, 52 op/s
Nov 29 08:48:42 compute-2 nova_compute[232428]: 2025-11-29 08:48:42.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:42 compute-2 podman[326144]: 2025-11-29 08:48:42.717801652 +0000 UTC m=+0.104168073 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:48:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:43 compute-2 nova_compute[232428]: 2025-11-29 08:48:43.164 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:44 compute-2 ceph-mon[77138]: pgmap v3393: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 22 KiB/s wr, 52 op/s
Nov 29 08:48:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/512683767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:46 compute-2 ceph-mon[77138]: pgmap v3394: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 822 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Nov 29 08:48:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:46.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:47 compute-2 nova_compute[232428]: 2025-11-29 08:48:47.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:48 compute-2 nova_compute[232428]: 2025-11-29 08:48:48.167 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:48 compute-2 ceph-mon[77138]: pgmap v3395: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 812 KiB/s rd, 1.2 MiB/s wr, 72 op/s
Nov 29 08:48:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1255916700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:48:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:50 compute-2 ceph-mon[77138]: pgmap v3396: 305 pgs: 305 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 752 KiB/s rd, 1.6 MiB/s wr, 65 op/s
Nov 29 08:48:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2972507368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1410571772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:48:51 compute-2 podman[326169]: 2025-11-29 08:48:51.456129914 +0000 UTC m=+0.098866588 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:48:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:52.184 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:48:52 compute-2 nova_compute[232428]: 2025-11-29 08:48:52.185 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:48:52.187 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:48:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:52 compute-2 nova_compute[232428]: 2025-11-29 08:48:52.413 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:52 compute-2 ceph-mon[77138]: pgmap v3397: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 722 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 29 08:48:52 compute-2 sudo[326190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:52 compute-2 sudo[326190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:52 compute-2 sudo[326190]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:52 compute-2 sudo[326215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:48:52 compute-2 sudo[326215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:48:52 compute-2 sudo[326215]: pam_unix(sudo:session): session closed for user root
Nov 29 08:48:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:52.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:53 compute-2 nova_compute[232428]: 2025-11-29 08:48:53.169 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:54 compute-2 ceph-mon[77138]: pgmap v3398: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 08:48:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:48:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:56 compute-2 ceph-mon[77138]: pgmap v3399: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 08:48:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:48:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:56.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:48:57 compute-2 nova_compute[232428]: 2025-11-29 08:48:57.416 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:58 compute-2 nova_compute[232428]: 2025-11-29 08:48:58.172 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:48:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:48:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:48:58 compute-2 ceph-mon[77138]: pgmap v3400: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 823 KiB/s wr, 89 op/s
Nov 29 08:48:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:48:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:48:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:48:59 compute-2 nova_compute[232428]: 2025-11-29 08:48:59.523 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:00 compute-2 ceph-mon[77138]: pgmap v3401: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 824 KiB/s wr, 118 op/s
Nov 29 08:49:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:01 compute-2 podman[326245]: 2025-11-29 08:49:01.762477236 +0000 UTC m=+0.151687431 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Nov 29 08:49:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:02.189 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:02.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:02 compute-2 nova_compute[232428]: 2025-11-29 08:49:02.420 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:02 compute-2 ceph-mon[77138]: pgmap v3402: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 231 KiB/s wr, 121 op/s
Nov 29 08:49:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:03 compute-2 nova_compute[232428]: 2025-11-29 08:49:03.174 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:03.353 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:03.354 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:03.355 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:04.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:04 compute-2 ceph-mon[77138]: pgmap v3403: 305 pgs: 305 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 398 KiB/s wr, 92 op/s
Nov 29 08:49:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:05 compute-2 ceph-mon[77138]: pgmap v3404: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 29 08:49:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:06.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:07 compute-2 nova_compute[232428]: 2025-11-29 08:49:07.422 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:08 compute-2 nova_compute[232428]: 2025-11-29 08:49:08.177 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:08 compute-2 ceph-mon[77138]: pgmap v3405: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 08:49:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:08.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3026947189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:10.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:10 compute-2 ceph-mon[77138]: pgmap v3406: 305 pgs: 305 active+clean; 309 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 96 op/s
Nov 29 08:49:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1668438124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:12 compute-2 ceph-mon[77138]: pgmap v3407: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 98 op/s
Nov 29 08:49:12 compute-2 nova_compute[232428]: 2025-11-29 08:49:12.425 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:12 compute-2 sudo[326277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:12 compute-2 sudo[326277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:12 compute-2 sudo[326277]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:12 compute-2 sudo[326308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:12 compute-2 sudo[326308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:12 compute-2 sudo[326308]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:13 compute-2 podman[326301]: 2025-11-29 08:49:13.015799519 +0000 UTC m=+0.083956183 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:49:13 compute-2 nova_compute[232428]: 2025-11-29 08:49:13.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:14.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:14 compute-2 ceph-mon[77138]: pgmap v3408: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 29 08:49:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4202656210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:14.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:16 compute-2 nova_compute[232428]: 2025-11-29 08:49:16.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:16.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:16 compute-2 ceph-mon[77138]: pgmap v3409: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Nov 29 08:49:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3549323233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:17 compute-2 nova_compute[232428]: 2025-11-29 08:49:17.427 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:18 compute-2 nova_compute[232428]: 2025-11-29 08:49:18.180 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:18 compute-2 ceph-mon[77138]: pgmap v3410: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 210 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Nov 29 08:49:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:19.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:19 compute-2 ovn_controller[134375]: 2025-11-29T08:49:19Z|00923|binding|INFO|Releasing lport 3b344a9a-e72c-41f0-8603-26e403886af8 from this chassis (sb_readonly=0)
Nov 29 08:49:19 compute-2 nova_compute[232428]: 2025-11-29 08:49:19.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.266 232432 DEBUG nova.compute.manager [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.267 232432 DEBUG nova.compute.manager [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing instance network info cache due to event network-changed-66bfa039-be6e-4b2e-aeb6-64d238c6f483. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.267 232432 DEBUG oslo_concurrency.lockutils [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.268 232432 DEBUG oslo_concurrency.lockutils [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.268 232432 DEBUG nova.network.neutron [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Refreshing network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:49:20 compute-2 ceph-mon[77138]: pgmap v3411: 305 pgs: 305 active+clean; 261 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 713 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.405 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.405 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.406 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.406 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.406 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.408 232432 INFO nova.compute.manager [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Terminating instance
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.410 232432 DEBUG nova.compute.manager [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:49:20 compute-2 kernel: tap66bfa039-be (unregistering): left promiscuous mode
Nov 29 08:49:20 compute-2 NetworkManager[48993]: <info>  [1764406160.4688] device (tap66bfa039-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.481 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 ovn_controller[134375]: 2025-11-29T08:49:20Z|00924|binding|INFO|Releasing lport 66bfa039-be6e-4b2e-aeb6-64d238c6f483 from this chassis (sb_readonly=0)
Nov 29 08:49:20 compute-2 ovn_controller[134375]: 2025-11-29T08:49:20Z|00925|binding|INFO|Setting lport 66bfa039-be6e-4b2e-aeb6-64d238c6f483 down in Southbound
Nov 29 08:49:20 compute-2 ovn_controller[134375]: 2025-11-29T08:49:20Z|00926|binding|INFO|Removing iface tap66bfa039-be ovn-installed in OVS
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.484 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.491 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:8a:99 10.100.0.14'], port_security=['fa:16:3e:2e:8a:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '92d81698-33ac-48a9-81bb-01d007be477e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df4af3dd-c97e-4e77-a104-030ea689b036', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e7d1c11-8bfa-4faa-9599-e1a280f61b9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55afe2d8-6a14-4ba5-9e8a-7420cfaa2dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=66bfa039-be6e-4b2e-aeb6-64d238c6f483) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.493 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 66bfa039-be6e-4b2e-aeb6-64d238c6f483 in datapath df4af3dd-c97e-4e77-a104-030ea689b036 unbound from our chassis
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.494 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df4af3dd-c97e-4e77-a104-030ea689b036, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.496 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[891a8b7e-a488-4191-a529-92f904a897f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.497 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036 namespace which is not needed anymore
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Nov 29 08:49:20 compute-2 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c4.scope: Consumed 17.307s CPU time.
Nov 29 08:49:20 compute-2 systemd-machined[194747]: Machine qemu-96-instance-000000c4 terminated.
Nov 29 08:49:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.656 232432 INFO nova.virt.libvirt.driver [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance destroyed successfully.
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.656 232432 DEBUG nova.objects.instance [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 92d81698-33ac-48a9-81bb-01d007be477e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.669 232432 DEBUG nova.virt.libvirt.vif [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-522943302',display_name='tempest-TestNetworkBasicOps-server-522943302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-522943302',id=196,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGO25ExiI3lU7BDW/20DGqhiuv3n/5rkI+iBJvIFfiPlzkaiZ4VFWYX0IExiZpcQrUKyio66sZKRUeH3ZYQ/NiuBbck/cDknlwhc2+FPTK1S1ITiynoMKv44G5IHMaKE1w==',key_name='tempest-TestNetworkBasicOps-824741121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-gt7jkxoz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:10Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=92d81698-33ac-48a9-81bb-01d007be477e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.670 232432 DEBUG nova.network.os_vif_util [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.670 232432 DEBUG nova.network.os_vif_util [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.671 232432 DEBUG os_vif [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.673 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.673 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66bfa039-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.677 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.682 232432 INFO os_vif [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:8a:99,bridge_name='br-int',has_traffic_filtering=True,id=66bfa039-be6e-4b2e-aeb6-64d238c6f483,network=Network(df4af3dd-c97e-4e77-a104-030ea689b036),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66bfa039-be')
Nov 29 08:49:20 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [NOTICE]   (325724) : haproxy version is 2.8.14-c23fe91
Nov 29 08:49:20 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [NOTICE]   (325724) : path to executable is /usr/sbin/haproxy
Nov 29 08:49:20 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [WARNING]  (325724) : Exiting Master process...
Nov 29 08:49:20 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [ALERT]    (325724) : Current worker (325726) exited with code 143 (Terminated)
Nov 29 08:49:20 compute-2 neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036[325720]: [WARNING]  (325724) : All workers exited. Exiting... (0)
Nov 29 08:49:20 compute-2 systemd[1]: libpod-226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4.scope: Deactivated successfully.
Nov 29 08:49:20 compute-2 podman[326374]: 2025-11-29 08:49:20.720939767 +0000 UTC m=+0.081287421 container died 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:49:20 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4-userdata-shm.mount: Deactivated successfully.
Nov 29 08:49:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-bd06de9f9c0adac7f293213f763f5834cb57120f080208d37f9797906c3aab33-merged.mount: Deactivated successfully.
Nov 29 08:49:20 compute-2 podman[326374]: 2025-11-29 08:49:20.774178694 +0000 UTC m=+0.134526318 container cleanup 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:49:20 compute-2 systemd[1]: libpod-conmon-226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4.scope: Deactivated successfully.
Nov 29 08:49:20 compute-2 podman[326428]: 2025-11-29 08:49:20.861790541 +0000 UTC m=+0.058972036 container remove 226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.869 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5a89d21d-1d7c-44e7-a166-c4faf4019ea4]: (4, ('Sat Nov 29 08:49:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036 (226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4)\n226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4\nSat Nov 29 08:49:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036 (226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4)\n226b39823b38b46f4f13d79525e3c71f68f32173bba824ddc31a3fd6e121caf4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.871 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a6805dec-5552-4652-bcc8-41850fcb98ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.872 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf4af3dd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 kernel: tapdf4af3dd-c0: left promiscuous mode
Nov 29 08:49:20 compute-2 nova_compute[232428]: 2025-11-29 08:49:20.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.914 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c6a79b-5dcf-4816-8dae-eddd517f8fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.934 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d39749c9-a6b5-4c0a-bb45-400748c32409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.936 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa8b5f5-7440-49c1-8efb-2389361b5886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.960 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0d80ad-82be-417f-b175-b4728eb0dc12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905569, 'reachable_time': 22952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326446, 'error': None, 'target': 'ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.964 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df4af3dd-c97e-4e77-a104-030ea689b036 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:49:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:20.965 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4a715255-22ac-452e-bcb3-1943fffcb12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:20 compute-2 systemd[1]: run-netns-ovnmeta\x2ddf4af3dd\x2dc97e\x2d4e77\x2da104\x2d030ea689b036.mount: Deactivated successfully.
Nov 29 08:49:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:21.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.300 232432 INFO nova.virt.libvirt.driver [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Deleting instance files /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e_del
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.301 232432 INFO nova.virt.libvirt.driver [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Deletion of /var/lib/nova/instances/92d81698-33ac-48a9-81bb-01d007be477e_del complete
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.384 232432 INFO nova.compute.manager [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Took 0.97 seconds to destroy the instance on the hypervisor.
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.385 232432 DEBUG oslo.service.loopingcall [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.385 232432 DEBUG nova.compute.manager [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.386 232432 DEBUG nova.network.neutron [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:49:21 compute-2 podman[326448]: 2025-11-29 08:49:21.711756354 +0000 UTC m=+0.097344911 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true)
Nov 29 08:49:21 compute-2 nova_compute[232428]: 2025-11-29 08:49:21.986 232432 DEBUG nova.network.neutron [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.003 232432 DEBUG nova.network.neutron [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updated VIF entry in instance network info cache for port 66bfa039-be6e-4b2e-aeb6-64d238c6f483. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.004 232432 DEBUG nova.network.neutron [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [{"id": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "address": "fa:16:3e:2e:8a:99", "network": {"id": "df4af3dd-c97e-4e77-a104-030ea689b036", "bridge": "br-int", "label": "tempest-network-smoke--224100746", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66bfa039-be", "ovs_interfaceid": "66bfa039-be6e-4b2e-aeb6-64d238c6f483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.010 232432 INFO nova.compute.manager [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Took 0.62 seconds to deallocate network for instance.
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.033 232432 DEBUG oslo_concurrency.lockutils [req-87ef6657-2dd8-4558-9c29-914911c5c82c req-f02d54cc-e751-47e5-aefa-fa94c568180d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.055 232432 DEBUG nova.compute.manager [req-6db473ff-5054-443b-ae91-104d5ab5a399 req-b32e4828-d285-4da8-b518-20ae65e7f819 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-vif-deleted-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.055 232432 INFO nova.compute.manager [req-6db473ff-5054-443b-ae91-104d5ab5a399 req-b32e4828-d285-4da8-b518-20ae65e7f819 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Neutron deleted interface 66bfa039-be6e-4b2e-aeb6-64d238c6f483; detaching it from the instance and deleting it from the info cache
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.056 232432 DEBUG nova.network.neutron [req-6db473ff-5054-443b-ae91-104d5ab5a399 req-b32e4828-d285-4da8-b518-20ae65e7f819 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.060 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.060 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:49:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.259 232432 DEBUG nova.compute.manager [req-6db473ff-5054-443b-ae91-104d5ab5a399 req-b32e4828-d285-4da8-b518-20ae65e7f819 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Detach interface failed, port_id=66bfa039-be6e-4b2e-aeb6-64d238c6f483, reason: Instance 92d81698-33ac-48a9-81bb-01d007be477e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.280 232432 DEBUG oslo_concurrency.processutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.361 232432 DEBUG nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-vif-unplugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.362 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.363 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.363 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.364 232432 DEBUG nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] No waiting events found dispatching network-vif-unplugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.364 232432 WARNING nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received unexpected event network-vif-unplugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 for instance with vm_state deleted and task_state None.
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.365 232432 DEBUG nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.365 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "92d81698-33ac-48a9-81bb-01d007be477e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.366 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.366 232432 DEBUG oslo_concurrency.lockutils [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.367 232432 DEBUG nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] No waiting events found dispatching network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.367 232432 WARNING nova.compute.manager [req-9162db0d-13c3-4ee7-ab29-8c1caef81a29 req-14578472-0079-4dc6-9fae-62d52891fea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Received unexpected event network-vif-plugged-66bfa039-be6e-4b2e-aeb6-64d238c6f483 for instance with vm_state deleted and task_state None.
Nov 29 08:49:22 compute-2 ceph-mon[77138]: pgmap v3412: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 932 KiB/s wr, 133 op/s
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.411 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.411 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.412 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.412 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 92d81698-33ac-48a9-81bb-01d007be477e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.430 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.555 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:49:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:49:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1048579177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.725 232432 DEBUG oslo_concurrency.processutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.733 232432 DEBUG nova.compute.provider_tree [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.765 232432 DEBUG nova.scheduler.client.report [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.809 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.849 232432 INFO nova.scheduler.client.report [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 92d81698-33ac-48a9-81bb-01d007be477e
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.858 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.879 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-92d81698-33ac-48a9-81bb-01d007be477e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.880 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:49:22 compute-2 nova_compute[232428]: 2025-11-29 08:49:22.952 232432 DEBUG oslo_concurrency.lockutils [None req-31d5591e-f8ed-4b1e-9872-c12ff8017cd0 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "92d81698-33ac-48a9-81bb-01d007be477e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:23 compute-2 nova_compute[232428]: 2025-11-29 08:49:23.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1048579177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:24.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:24 compute-2 ceph-mon[77138]: pgmap v3413: 305 pgs: 305 active+clean; 234 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 105 op/s
Nov 29 08:49:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:25.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:25 compute-2 nova_compute[232428]: 2025-11-29 08:49:25.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:25 compute-2 nova_compute[232428]: 2025-11-29 08:49:25.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.445107) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165445156, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2395, "num_deletes": 253, "total_data_size": 5708713, "memory_usage": 5779968, "flush_reason": "Manual Compaction"}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165473924, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3742524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72905, "largest_seqno": 75295, "table_properties": {"data_size": 3732857, "index_size": 6096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20029, "raw_average_key_size": 20, "raw_value_size": 3713546, "raw_average_value_size": 3804, "num_data_blocks": 265, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405955, "oldest_key_time": 1764405955, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 28876 microseconds, and 10116 cpu microseconds.
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.473972) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3742524 bytes OK
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.474001) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.475820) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.475835) EVENT_LOG_v1 {"time_micros": 1764406165475830, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.475854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5698295, prev total WAL file size 5698295, number of live WAL files 2.
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.477358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3654KB)], [147(10MB)]
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165477436, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 15048953, "oldest_snapshot_seqno": -1}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10194 keys, 13101914 bytes, temperature: kUnknown
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165583972, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 13101914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13035884, "index_size": 39421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 268968, "raw_average_key_size": 26, "raw_value_size": 12856886, "raw_average_value_size": 1261, "num_data_blocks": 1499, "num_entries": 10194, "num_filter_entries": 10194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.585032) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 13101914 bytes
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.586879) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.2 rd, 122.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10719, records dropped: 525 output_compression: NoCompression
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.586900) EVENT_LOG_v1 {"time_micros": 1764406165586890, "job": 94, "event": "compaction_finished", "compaction_time_micros": 106587, "compaction_time_cpu_micros": 54258, "output_level": 6, "num_output_files": 1, "total_output_size": 13101914, "num_input_records": 10719, "num_output_records": 10194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165587803, "job": 94, "event": "table_file_deletion", "file_number": 149}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165590652, "job": 94, "event": "table_file_deletion", "file_number": 147}
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.477255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.590740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.590746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.590748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.590749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:49:25.590751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:49:25 compute-2 nova_compute[232428]: 2025-11-29 08:49:25.677 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:26.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:26 compute-2 nova_compute[232428]: 2025-11-29 08:49:26.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:26 compute-2 nova_compute[232428]: 2025-11-29 08:49:26.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:26 compute-2 ceph-mon[77138]: pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 129 op/s
Nov 29 08:49:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:27.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:27 compute-2 nova_compute[232428]: 2025-11-29 08:49:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:27 compute-2 nova_compute[232428]: 2025-11-29 08:49:27.432 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:49:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920586515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:49:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:49:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920586515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:49:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:28 compute-2 ceph-mon[77138]: pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 117 op/s
Nov 29 08:49:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3920586515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:49:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3920586515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:49:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:29.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:30 compute-2 nova_compute[232428]: 2025-11-29 08:49:30.681 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:30 compute-2 ceph-mon[77138]: pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 KiB/s wr, 120 op/s
Nov 29 08:49:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2190195639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:31.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1625207753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:32.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.241 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.242 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.243 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.434 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:49:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1312665422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:32 compute-2 nova_compute[232428]: 2025-11-29 08:49:32.704 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:32 compute-2 podman[326518]: 2025-11-29 08:49:32.751019414 +0000 UTC m=+0.141224136 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 08:49:32 compute-2 ceph-mon[77138]: pgmap v3417: 305 pgs: 305 active+clean; 193 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Nov 29 08:49:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1312665422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.003 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.005 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4167MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.006 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.006 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:33 compute-2 sudo[326547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:33 compute-2 sudo[326547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:33 compute-2 sudo[326547]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.140 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.141 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:49:33 compute-2 sudo[326572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:33 compute-2 sudo[326572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.221 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:49:33 compute-2 sudo[326572]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.287 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.288 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.314 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.365 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.385 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:49:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2865240163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.858 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.868 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.893 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.922 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:49:33 compute-2 nova_compute[232428]: 2025-11-29 08:49:33.922 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:34 compute-2 ceph-mon[77138]: pgmap v3418: 305 pgs: 305 active+clean; 196 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Nov 29 08:49:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:35.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2865240163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:35 compute-2 nova_compute[232428]: 2025-11-29 08:49:35.654 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406160.652623, 92d81698-33ac-48a9-81bb-01d007be477e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:49:35 compute-2 nova_compute[232428]: 2025-11-29 08:49:35.655 232432 INFO nova.compute.manager [-] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] VM Stopped (Lifecycle Event)
Nov 29 08:49:35 compute-2 nova_compute[232428]: 2025-11-29 08:49:35.673 232432 DEBUG nova.compute.manager [None req-b6cfe8e9-8c28-4d0e-a824-c266e90a44df - - - - - -] [instance: 92d81698-33ac-48a9-81bb-01d007be477e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:49:35 compute-2 nova_compute[232428]: 2025-11-29 08:49:35.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:35 compute-2 sudo[326620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:35 compute-2 sudo[326620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:35 compute-2 sudo[326620]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:35 compute-2 sudo[326645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:49:35 compute-2 sudo[326645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:35 compute-2 sudo[326645]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:35 compute-2 sudo[326671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:35 compute-2 sudo[326671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:35 compute-2 sudo[326671]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:36 compute-2 sudo[326696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:49:36 compute-2 sudo[326696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:36 compute-2 ceph-mon[77138]: pgmap v3419: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:49:36 compute-2 sudo[326696]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:36 compute-2 nova_compute[232428]: 2025-11-29 08:49:36.913 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:36 compute-2 nova_compute[232428]: 2025-11-29 08:49:36.929 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:49:36 compute-2 nova_compute[232428]: 2025-11-29 08:49:36.929 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:49:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:37 compute-2 nova_compute[232428]: 2025-11-29 08:49:37.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:49:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1499617626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:38 compute-2 ceph-mon[77138]: pgmap v3420: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:49:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3467271190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:40 compute-2 ceph-mon[77138]: pgmap v3421: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 08:49:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:40 compute-2 nova_compute[232428]: 2025-11-29 08:49:40.687 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:41.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:42 compute-2 nova_compute[232428]: 2025-11-29 08:49:42.438 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:42 compute-2 nova_compute[232428]: 2025-11-29 08:49:42.573 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:42.575 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:49:42 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:42.576 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:49:42 compute-2 ceph-mon[77138]: pgmap v3422: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 29 08:49:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:43 compute-2 sudo[326756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:43 compute-2 sudo[326756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:43 compute-2 sudo[326756]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:43 compute-2 sudo[326782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:49:43 compute-2 podman[326780]: 2025-11-29 08:49:43.311863168 +0000 UTC m=+0.088817503 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 08:49:43 compute-2 sudo[326782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:43 compute-2 sudo[326782]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:49:43 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:49:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:44.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:44 compute-2 ceph-mon[77138]: pgmap v3423: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 183 KiB/s rd, 636 KiB/s wr, 28 op/s
Nov 29 08:49:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:45.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:45 compute-2 nova_compute[232428]: 2025-11-29 08:49:45.689 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3366238603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:45 compute-2 ceph-mon[77138]: pgmap v3424: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 87 KiB/s wr, 24 op/s
Nov 29 08:49:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:46.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.697 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.698 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.717 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.802 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.803 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.811 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.812 232432 INFO nova.compute.claims [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:49:46 compute-2 nova_compute[232428]: 2025-11-29 08:49:46.959 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:47.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:49:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/253586256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.409 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.419 232432 DEBUG nova.compute.provider_tree [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.441 232432 DEBUG nova.scheduler.client.report [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.465 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.465 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.660 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.661 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.678 232432 INFO nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.697 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.815 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.816 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.817 232432 INFO nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Creating image(s)
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.861 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.903 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.949 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:47 compute-2 nova_compute[232428]: 2025-11-29 08:49:47.955 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.007 232432 DEBUG nova.policy [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.062 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.065 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.066 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.067 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.114 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.121 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8520b12e-fae5-487c-afe3-0cf9b569d761_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:48 compute-2 ceph-mon[77138]: pgmap v3425: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 40 KiB/s wr, 17 op/s
Nov 29 08:49:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/253586256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:49:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912420050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.561 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8520b12e-fae5-487c-afe3-0cf9b569d761_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.693 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.843 232432 DEBUG nova.objects.instance [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.859 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Successfully created port: fc56688b-60dd-4706-ac5a-9f0570b47503 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.866 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.867 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Ensure instance console log exists: /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.868 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.869 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:48 compute-2 nova_compute[232428]: 2025-11-29 08:49:48.869 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:49.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2912420050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:49:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:49:50 compute-2 ceph-mon[77138]: pgmap v3426: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 40 KiB/s wr, 17 op/s
Nov 29 08:49:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1161134043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:49:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:50 compute-2 nova_compute[232428]: 2025-11-29 08:49:50.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:51.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:52 compute-2 ceph-mon[77138]: pgmap v3427: 305 pgs: 305 active+clean; 233 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 993 KiB/s wr, 28 op/s
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.442 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:52.578 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.704 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Successfully updated port: fc56688b-60dd-4706-ac5a-9f0570b47503 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:49:52 compute-2 podman[327018]: 2025-11-29 08:49:52.710877465 +0000 UTC m=+0.096955637 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.725 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.725 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.726 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.821 232432 DEBUG nova.compute.manager [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.822 232432 DEBUG nova.compute.manager [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing instance network info cache due to event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.822 232432 DEBUG oslo_concurrency.lockutils [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:49:52 compute-2 nova_compute[232428]: 2025-11-29 08:49:52.914 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:49:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:53.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:53 compute-2 sudo[327038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:53 compute-2 sudo[327038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:53 compute-2 sudo[327038]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:53 compute-2 sudo[327063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:49:53 compute-2 sudo[327063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:49:53 compute-2 sudo[327063]: pam_unix(sudo:session): session closed for user root
Nov 29 08:49:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:54 compute-2 ceph-mon[77138]: pgmap v3428: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 29 08:49:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:55.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.144 232432 DEBUG nova.network.neutron [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.160 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.161 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance network_info: |[{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.162 232432 DEBUG oslo_concurrency.lockutils [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.162 232432 DEBUG nova.network.neutron [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing network info cache for port fc56688b-60dd-4706-ac5a-9f0570b47503 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.167 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Start _get_guest_xml network_info=[{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.175 232432 WARNING nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.188 232432 DEBUG nova.virt.libvirt.host [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.189 232432 DEBUG nova.virt.libvirt.host [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.193 232432 DEBUG nova.virt.libvirt.host [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.194 232432 DEBUG nova.virt.libvirt.host [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.195 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.196 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.197 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.197 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.198 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.198 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.199 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.199 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.200 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.200 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.200 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.201 232432 DEBUG nova.virt.hardware [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.206 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:49:55 compute-2 nova_compute[232428]: 2025-11-29 08:49:55.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:49:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/68118888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.017 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.041 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.045 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:49:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/512221427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.472 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.475 232432 DEBUG nova.virt.libvirt.vif [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:47Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.476 232432 DEBUG nova.network.os_vif_util [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.477 232432 DEBUG nova.network.os_vif_util [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.479 232432 DEBUG nova.objects.instance [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.495 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <uuid>8520b12e-fae5-487c-afe3-0cf9b569d761</uuid>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <name>instance-000000c7</name>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-1429235908</nova:name>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:49:55</nova:creationTime>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <nova:port uuid="fc56688b-60dd-4706-ac5a-9f0570b47503">
Nov 29 08:49:56 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <system>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="serial">8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="uuid">8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </system>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <os>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </os>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <features>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </features>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk">
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </source>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config">
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </source>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:49:56 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:fc:9b:a2"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <target dev="tapfc56688b-60"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log" append="off"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <video>
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </video>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:49:56 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:49:56 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:49:56 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:49:56 compute-2 nova_compute[232428]: </domain>
Nov 29 08:49:56 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.496 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Preparing to wait for external event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.498 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.498 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.499 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.500 232432 DEBUG nova.virt.libvirt.vif [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:47Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.501 232432 DEBUG nova.network.os_vif_util [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.502 232432 DEBUG nova.network.os_vif_util [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.503 232432 DEBUG os_vif [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.505 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.506 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.515 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc56688b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.516 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc56688b-60, col_values=(('external_ids', {'iface-id': 'fc56688b-60dd-4706-ac5a-9f0570b47503', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:9b:a2', 'vm-uuid': '8520b12e-fae5-487c-afe3-0cf9b569d761'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:56 compute-2 NetworkManager[48993]: <info>  [1764406196.5190] manager: (tapfc56688b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.519 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.525 232432 INFO os_vif [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60')
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.572 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.573 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.573 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:fc:9b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.574 232432 INFO nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Using config drive
Nov 29 08:49:56 compute-2 nova_compute[232428]: 2025-11-29 08:49:56.609 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:56 compute-2 ceph-mon[77138]: pgmap v3429: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 29 08:49:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/68118888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/512221427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.555 232432 INFO nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Creating config drive at /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.566 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqt1k0hs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.733 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqt1k0hs" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.775 232432 DEBUG nova.storage.rbd_utils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.780 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config 8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.829 232432 DEBUG nova.network.neutron [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updated VIF entry in instance network info cache for port fc56688b-60dd-4706-ac5a-9f0570b47503. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.831 232432 DEBUG nova.network.neutron [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:49:57 compute-2 nova_compute[232428]: 2025-11-29 08:49:57.850 232432 DEBUG oslo_concurrency.lockutils [req-88f0b4af-8aa3-4854-8888-30bd3cd4f626 req-9ddc0d3e-77ab-428a-8c87-639e7be215b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.039 232432 DEBUG oslo_concurrency.processutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config 8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.041 232432 INFO nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Deleting local config drive /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/disk.config because it was imported into RBD.
Nov 29 08:49:58 compute-2 kernel: tapfc56688b-60: entered promiscuous mode
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.1312] manager: (tapfc56688b-60): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Nov 29 08:49:58 compute-2 ovn_controller[134375]: 2025-11-29T08:49:58Z|00927|binding|INFO|Claiming lport fc56688b-60dd-4706-ac5a-9f0570b47503 for this chassis.
Nov 29 08:49:58 compute-2 ovn_controller[134375]: 2025-11-29T08:49:58Z|00928|binding|INFO|fc56688b-60dd-4706-ac5a-9f0570b47503: Claiming fa:16:3e:fc:9b:a2 10.100.0.3
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.133 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.177 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:9b:a2 10.100.0.3'], port_security=['fa:16:3e:fc:9b:a2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8520b12e-fae5-487c-afe3-0cf9b569d761', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d7dedf4-5b50-4817-8e83-2bc5e963fef6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9e8934-e2c9-4c71-9edb-881a58049801, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fc56688b-60dd-4706-ac5a-9f0570b47503) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.180 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fc56688b-60dd-4706-ac5a-9f0570b47503 in datapath a3d2cdb4-1226-4823-8b1e-558c7decebb4 bound to our chassis
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.182 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3d2cdb4-1226-4823-8b1e-558c7decebb4
Nov 29 08:49:58 compute-2 systemd-udevd[327225]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:49:58 compute-2 systemd-machined[194747]: New machine qemu-97-instance-000000c7.
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.2052] device (tapfc56688b-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.2066] device (tapfc56688b-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.216 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf1b229-0733-4640-b8c5-5de74d96fddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.217 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3d2cdb4-11 in ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.220 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3d2cdb4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.221 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7177a062-660b-4f82-9b9d-00ccab098b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.222 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[676f9cd0-c9cf-48d3-87b1-c4619a356a94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 systemd[1]: Started Virtual Machine qemu-97-instance-000000c7.
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.252 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[f178d367-be9d-4c14-a474-183795247088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.255 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 ovn_controller[134375]: 2025-11-29T08:49:58Z|00929|binding|INFO|Setting lport fc56688b-60dd-4706-ac5a-9f0570b47503 ovn-installed in OVS
Nov 29 08:49:58 compute-2 ovn_controller[134375]: 2025-11-29T08:49:58Z|00930|binding|INFO|Setting lport fc56688b-60dd-4706-ac5a-9f0570b47503 up in Southbound
Nov 29 08:49:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:49:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.264 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.272 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1aaced59-8829-42a6-a413-fc3e0d570b85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.324 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d2101a18-a2a6-4c00-808f-b83056c9911d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.335 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fd351fec-d57c-4d31-b557-997f49744b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.3371] manager: (tapa3d2cdb4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.383 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[efe2dac0-c2fa-478d-87e9-8ad099e64a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.389 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[953b4bea-63e9-4a87-aa69-9ba2b7f9f280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.4297] device (tapa3d2cdb4-10): carrier: link connected
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.440 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[36b0adde-082a-4606-ba13-ad615b1fc446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ffe9a7-8ef6-43b8-a59f-61dbb7824eec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3d2cdb4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:95:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916469, 'reachable_time': 40195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327258, 'error': None, 'target': 'ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.491 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac48e2d-d28a-4c21-ac31-0e7c74df3c79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:95ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 916469, 'tstamp': 916469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327259, 'error': None, 'target': 'ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.516 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cefa416b-9331-4c40-a084-f1b84b6ff696]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3d2cdb4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:95:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916469, 'reachable_time': 40195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327260, 'error': None, 'target': 'ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.553 232432 DEBUG nova.compute.manager [req-04d200e3-e1cf-4525-b8a4-85f3e0c17f26 req-9a0851a9-c0d1-4d4c-8c69-cd00c516d14c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.554 232432 DEBUG oslo_concurrency.lockutils [req-04d200e3-e1cf-4525-b8a4-85f3e0c17f26 req-9a0851a9-c0d1-4d4c-8c69-cd00c516d14c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.555 232432 DEBUG oslo_concurrency.lockutils [req-04d200e3-e1cf-4525-b8a4-85f3e0c17f26 req-9a0851a9-c0d1-4d4c-8c69-cd00c516d14c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.555 232432 DEBUG oslo_concurrency.lockutils [req-04d200e3-e1cf-4525-b8a4-85f3e0c17f26 req-9a0851a9-c0d1-4d4c-8c69-cd00c516d14c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.555 232432 DEBUG nova.compute.manager [req-04d200e3-e1cf-4525-b8a4-85f3e0c17f26 req-9a0851a9-c0d1-4d4c-8c69-cd00c516d14c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Processing event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.575 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b580119-fa8a-41ce-ad7d-99b201c49953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.679 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[534b41ee-a9d7-4cad-81fa-6305378f920a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.681 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3d2cdb4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.682 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.683 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3d2cdb4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.686 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 NetworkManager[48993]: <info>  [1764406198.6875] manager: (tapa3d2cdb4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Nov 29 08:49:58 compute-2 kernel: tapa3d2cdb4-10: entered promiscuous mode
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.701 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.702 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3d2cdb4-10, col_values=(('external_ids', {'iface-id': '5aeb81c5-8d05-4127-b095-49ab41849fe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:49:58 compute-2 ovn_controller[134375]: 2025-11-29T08:49:58Z|00931|binding|INFO|Releasing lport 5aeb81c5-8d05-4127-b095-49ab41849fe5 from this chassis (sb_readonly=0)
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.704 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.727 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 nova_compute[232428]: 2025-11-29 08:49:58.728 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.729 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3d2cdb4-1226-4823-8b1e-558c7decebb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3d2cdb4-1226-4823-8b1e-558c7decebb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.731 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9851f918-304c-4ce2-8178-4f3e9c7c4b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.732 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-a3d2cdb4-1226-4823-8b1e-558c7decebb4
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/a3d2cdb4-1226-4823-8b1e-558c7decebb4.pid.haproxy
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID a3d2cdb4-1226-4823-8b1e-558c7decebb4
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:49:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:49:58.734 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'env', 'PROCESS_TAG=haproxy-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3d2cdb4-1226-4823-8b1e-558c7decebb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:49:59 compute-2 ceph-mon[77138]: pgmap v3430: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:49:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/200399157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:49:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:49:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:49:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:59.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:49:59 compute-2 podman[327292]: 2025-11-29 08:49:59.27255252 +0000 UTC m=+0.104280184 container create 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 08:49:59 compute-2 podman[327292]: 2025-11-29 08:49:59.218120267 +0000 UTC m=+0.049848101 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:49:59 compute-2 systemd[1]: Started libpod-conmon-4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a.scope.
Nov 29 08:49:59 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:49:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c16ea50cfb68b1a4988684b596b5a93b2f315ddea4ce25fdf3adee12db7654/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:49:59 compute-2 podman[327292]: 2025-11-29 08:49:59.388556927 +0000 UTC m=+0.220284591 container init 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:49:59 compute-2 podman[327292]: 2025-11-29 08:49:59.400366515 +0000 UTC m=+0.232094149 container start 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:49:59 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [NOTICE]   (327353) : New worker (327355) forked
Nov 29 08:49:59 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [NOTICE]   (327353) : Loading success.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.488 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406199.488067, 8520b12e-fae5-487c-afe3-0cf9b569d761 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.489 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] VM Started (Lifecycle Event)
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.492 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.499 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.503 232432 INFO nova.virt.libvirt.driver [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance spawned successfully.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.503 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.515 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.519 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.530 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.531 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.531 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.532 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.532 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.533 232432 DEBUG nova.virt.libvirt.driver [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.543 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.544 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406199.489194, 8520b12e-fae5-487c-afe3-0cf9b569d761 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.544 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] VM Paused (Lifecycle Event)
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.561 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.564 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406199.495119, 8520b12e-fae5-487c-afe3-0cf9b569d761 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.565 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] VM Resumed (Lifecycle Event)
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.581 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.584 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.588 232432 INFO nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Took 11.77 seconds to spawn the instance on the hypervisor.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.589 232432 DEBUG nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.611 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.658 232432 INFO nova.compute.manager [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Took 12.89 seconds to build instance.
Nov 29 08:49:59 compute-2 nova_compute[232428]: 2025-11-29 08:49:59.678 232432 DEBUG oslo_concurrency.lockutils [None req-412e2ba4-e237-4a7d-8735-a510b0105d3d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:00 compute-2 ceph-mon[77138]: pgmap v3431: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 08:50:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 08:50:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.763 232432 DEBUG nova.compute.manager [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.763 232432 DEBUG oslo_concurrency.lockutils [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.763 232432 DEBUG oslo_concurrency.lockutils [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.763 232432 DEBUG oslo_concurrency.lockutils [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.764 232432 DEBUG nova.compute.manager [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:00 compute-2 nova_compute[232428]: 2025-11-29 08:50:00.764 232432 WARNING nova.compute.manager [req-45d5b34c-28f6-4e90-bb8a-f9a914748e96 req-96809b09-fa97-44a6-930f-041388270c42 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 for instance with vm_state active and task_state None.
Nov 29 08:50:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:01.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:01 compute-2 nova_compute[232428]: 2025-11-29 08:50:01.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.447 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:02 compute-2 NetworkManager[48993]: <info>  [1764406202.4725] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:02 compute-2 NetworkManager[48993]: <info>  [1764406202.4742] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.609 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:02 compute-2 ovn_controller[134375]: 2025-11-29T08:50:02Z|00932|binding|INFO|Releasing lport 5aeb81c5-8d05-4127-b095-49ab41849fe5 from this chassis (sb_readonly=0)
Nov 29 08:50:02 compute-2 ceph-mon[77138]: pgmap v3432: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.621 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.754 232432 DEBUG nova.compute.manager [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.755 232432 DEBUG nova.compute.manager [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing instance network info cache due to event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.755 232432 DEBUG oslo_concurrency.lockutils [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.755 232432 DEBUG oslo_concurrency.lockutils [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:02 compute-2 nova_compute[232428]: 2025-11-29 08:50:02.755 232432 DEBUG nova.network.neutron [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing network info cache for port fc56688b-60dd-4706-ac5a-9f0570b47503 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:50:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:03.354 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:03.355 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:03.355 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:03 compute-2 podman[327368]: 2025-11-29 08:50:03.767048433 +0000 UTC m=+0.156774317 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:50:03 compute-2 ceph-mon[77138]: pgmap v3433: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 872 KiB/s wr, 72 op/s
Nov 29 08:50:03 compute-2 nova_compute[232428]: 2025-11-29 08:50:03.937 232432 DEBUG nova.network.neutron [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updated VIF entry in instance network info cache for port fc56688b-60dd-4706-ac5a-9f0570b47503. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:50:03 compute-2 nova_compute[232428]: 2025-11-29 08:50:03.938 232432 DEBUG nova.network.neutron [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:03 compute-2 nova_compute[232428]: 2025-11-29 08:50:03.963 232432 DEBUG oslo_concurrency.lockutils [req-fcb2a508-764d-4838-8419-77dfd099731e req-c8dbf527-9d1d-4092-becd-d5cd138819be 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:06 compute-2 ceph-mon[77138]: pgmap v3434: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 08:50:06 compute-2 nova_compute[232428]: 2025-11-29 08:50:06.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:07 compute-2 sshd-session[327395]: Invalid user solana from 45.148.10.240 port 33722
Nov 29 08:50:07 compute-2 sshd-session[327395]: Connection closed by invalid user solana 45.148.10.240 port 33722 [preauth]
Nov 29 08:50:07 compute-2 nova_compute[232428]: 2025-11-29 08:50:07.449 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:08 compute-2 ceph-mon[77138]: pgmap v3435: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 08:50:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:09.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:10 compute-2 ceph-mon[77138]: pgmap v3436: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 08:50:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:11 compute-2 nova_compute[232428]: 2025-11-29 08:50:11.528 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:12 compute-2 ceph-mon[77138]: pgmap v3437: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Nov 29 08:50:12 compute-2 nova_compute[232428]: 2025-11-29 08:50:12.451 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:13 compute-2 ovn_controller[134375]: 2025-11-29T08:50:13Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:9b:a2 10.100.0.3
Nov 29 08:50:13 compute-2 ovn_controller[134375]: 2025-11-29T08:50:13Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:9b:a2 10.100.0.3
Nov 29 08:50:13 compute-2 sudo[327400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:13 compute-2 sudo[327400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:13 compute-2 sudo[327400]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:13 compute-2 sudo[327431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:13 compute-2 sudo[327431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:13 compute-2 sudo[327431]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:13 compute-2 podman[327424]: 2025-11-29 08:50:13.70460378 +0000 UTC m=+0.089599518 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:50:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:14 compute-2 ceph-mon[77138]: pgmap v3438: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 593 KiB/s wr, 149 op/s
Nov 29 08:50:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:16 compute-2 nova_compute[232428]: 2025-11-29 08:50:16.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:16 compute-2 ceph-mon[77138]: pgmap v3439: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Nov 29 08:50:16 compute-2 nova_compute[232428]: 2025-11-29 08:50:16.533 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:17 compute-2 nova_compute[232428]: 2025-11-29 08:50:17.453 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:18 compute-2 ceph-mon[77138]: pgmap v3440: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 586 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:50:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:19 compute-2 nova_compute[232428]: 2025-11-29 08:50:19.874 232432 INFO nova.compute.manager [None req-42fbbf27-2f99-4f43-ab4b-511971bf562c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Get console output
Nov 29 08:50:19 compute-2 nova_compute[232428]: 2025-11-29 08:50:19.881 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:50:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:20.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:20 compute-2 ceph-mon[77138]: pgmap v3441: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 790 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 08:50:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:21 compute-2 nova_compute[232428]: 2025-11-29 08:50:21.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:22.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:22 compute-2 ceph-mon[77138]: pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 08:50:22 compute-2 nova_compute[232428]: 2025-11-29 08:50:22.455 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:22.548 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:50:22 compute-2 nova_compute[232428]: 2025-11-29 08:50:22.549 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:22 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:22.550 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:50:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:23 compute-2 nova_compute[232428]: 2025-11-29 08:50:23.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:23 compute-2 podman[327473]: 2025-11-29 08:50:23.71462152 +0000 UTC m=+0.110213218 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:50:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:24.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.367 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.368 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.369 232432 DEBUG nova.objects.instance [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'flavor' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:50:24 compute-2 ceph-mon[77138]: pgmap v3443: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.511 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.511 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.512 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.512 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:50:24 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:24.553 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.824 232432 DEBUG nova.objects.instance [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:50:24 compute-2 nova_compute[232428]: 2025-11-29 08:50:24.843 232432 DEBUG nova.network.neutron [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:50:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:25 compute-2 nova_compute[232428]: 2025-11-29 08:50:25.217 232432 DEBUG nova.policy [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:50:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.154 232432 DEBUG nova.network.neutron [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Successfully created port: 6f147e13-c230-43cc-b999-71bff624665a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.168 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.189 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.189 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:26 compute-2 ceph-mon[77138]: pgmap v3444: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 865 KiB/s rd, 1.6 MiB/s wr, 95 op/s
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.540 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.882 232432 DEBUG nova.network.neutron [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Successfully updated port: 6f147e13-c230-43cc-b999-71bff624665a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.912 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.913 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:26 compute-2 nova_compute[232428]: 2025-11-29 08:50:26.913 232432 DEBUG nova.network.neutron [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.023 232432 DEBUG nova.compute.manager [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-changed-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.024 232432 DEBUG nova.compute.manager [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing instance network info cache due to event network-changed-6f147e13-c230-43cc-b999-71bff624665a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.025 232432 DEBUG oslo_concurrency.lockutils [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:27 compute-2 nova_compute[232428]: 2025-11-29 08:50:27.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:50:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813580764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:50:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:50:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813580764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:50:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:28.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:28 compute-2 ceph-mon[77138]: pgmap v3445: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 42 KiB/s wr, 25 op/s
Nov 29 08:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1813580764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:50:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1813580764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.960 232432 DEBUG nova.network.neutron [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.982 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.983 232432 DEBUG oslo_concurrency.lockutils [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.984 232432 DEBUG nova.network.neutron [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing network info cache for port 6f147e13-c230-43cc-b999-71bff624665a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.989 232432 DEBUG nova.virt.libvirt.vif [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.990 232432 DEBUG nova.network.os_vif_util [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.991 232432 DEBUG nova.network.os_vif_util [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.992 232432 DEBUG os_vif [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.993 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.994 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:28 compute-2 nova_compute[232428]: 2025-11-29 08:50:28.994 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.002 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f147e13-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.003 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f147e13-c2, col_values=(('external_ids', {'iface-id': '6f147e13-c230-43cc-b999-71bff624665a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:6a:e8', 'vm-uuid': '8520b12e-fae5-487c-afe3-0cf9b569d761'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.005 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.0065] manager: (tap6f147e13-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.018 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.022 232432 INFO os_vif [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2')
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.023 232432 DEBUG nova.virt.libvirt.vif [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.024 232432 DEBUG nova.network.os_vif_util [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.025 232432 DEBUG nova.network.os_vif_util [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.029 232432 DEBUG nova.virt.libvirt.guest [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] attach device xml: <interface type="ethernet">
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:0b:6a:e8"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <target dev="tap6f147e13-c2"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]: </interface>
Nov 29 08:50:29 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:50:29 compute-2 kernel: tap6f147e13-c2: entered promiscuous mode
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.0507] manager: (tap6f147e13-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Nov 29 08:50:29 compute-2 ovn_controller[134375]: 2025-11-29T08:50:29Z|00933|binding|INFO|Claiming lport 6f147e13-c230-43cc-b999-71bff624665a for this chassis.
Nov 29 08:50:29 compute-2 ovn_controller[134375]: 2025-11-29T08:50:29Z|00934|binding|INFO|6f147e13-c230-43cc-b999-71bff624665a: Claiming fa:16:3e:0b:6a:e8 10.100.0.27
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.066 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:6a:e8 10.100.0.27'], port_security=['fa:16:3e:0b:6a:e8 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '8520b12e-fae5-487c-afe3-0cf9b569d761', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77935487-36f3-42ac-9707-dc086662769f, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6f147e13-c230-43cc-b999-71bff624665a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.068 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6f147e13-c230-43cc-b999-71bff624665a in datapath 43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 bound to our chassis
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.071 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43d9ebc4-86fe-4a98-9913-ad59ccd9ad79
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.093 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6c51767b-1aa3-4653-9f9b-efe7c41cdf93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.094 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43d9ebc4-81 in ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.097 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43d9ebc4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.098 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3b1d62-af62-4af9-8a4c-4e8a3b716d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.099 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d51c6fe6-252b-40cd-94a7-2b975da99a4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:29.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.121 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[1c58ad7b-bf55-4f7f-824b-ed1dbf9fc5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_controller[134375]: 2025-11-29T08:50:29Z|00935|binding|INFO|Setting lport 6f147e13-c230-43cc-b999-71bff624665a ovn-installed in OVS
Nov 29 08:50:29 compute-2 ovn_controller[134375]: 2025-11-29T08:50:29Z|00936|binding|INFO|Setting lport 6f147e13-c230-43cc-b999-71bff624665a up in Southbound
Nov 29 08:50:29 compute-2 systemd-udevd[327505]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.127 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.1494] device (tap6f147e13-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.1515] device (tap6f147e13-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.152 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a35440-9df6-4f7d-bb12-9ed2ce55d796]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.186 232432 DEBUG nova.virt.libvirt.driver [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.187 232432 DEBUG nova.virt.libvirt.driver [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.187 232432 DEBUG nova.virt.libvirt.driver [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:fc:9b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.188 232432 DEBUG nova.virt.libvirt.driver [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:0b:6a:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.198 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f05797ff-4f30-4e60-813b-e15769f54e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.203 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8d35e336-9193-45c2-91ab-240aaaa79fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.2049] manager: (tap43d9ebc4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/445)
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.227 232432 DEBUG nova.virt.libvirt.guest [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-1429235908</nova:name>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:50:29</nova:creationTime>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:port uuid="fc56688b-60dd-4706-ac5a-9f0570b47503">
Nov 29 08:50:29 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     <nova:port uuid="6f147e13-c230-43cc-b999-71bff624665a">
Nov 29 08:50:29 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 08:50:29 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:29 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:50:29 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:50:29 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.249 232432 DEBUG oslo_concurrency.lockutils [None req-9f42c1b4-7ff1-4df6-8e65-aa8bfe1a3a8e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.267 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b08343-56d2-449a-905f-3d3410afe52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.272 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9d43343f-1681-4ce3-9e40-e8b0c68076f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.3074] device (tap43d9ebc4-80): carrier: link connected
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.318 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[740147dc-c0f2-4fd3-ac11-eb1ec9e52468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.350 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cddda6e0-8e2a-4fe2-9d6c-a62c2fb538f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43d9ebc4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:99:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919557, 'reachable_time': 20369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327530, 'error': None, 'target': 'ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.373 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b37f902a-94b3-4680-8ae3-253cbc698aff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:99a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 919557, 'tstamp': 919557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327531, 'error': None, 'target': 'ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.395 232432 DEBUG nova.compute.manager [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.395 232432 DEBUG oslo_concurrency.lockutils [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.395 232432 DEBUG oslo_concurrency.lockutils [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.396 232432 DEBUG oslo_concurrency.lockutils [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.396 232432 DEBUG nova.compute.manager [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.396 232432 WARNING nova.compute.manager [req-46242579-f533-4f7f-bc0f-510b37c8c32b req-ed5cb410-f309-4ebd-a812-2b36286e6683 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a for instance with vm_state active and task_state None.
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.404 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5da15c-f78e-4b36-b646-d9f8b613ebd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43d9ebc4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:99:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919557, 'reachable_time': 20369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327532, 'error': None, 'target': 'ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.459 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8952e76f-a4d2-49ad-b150-a5a657f16be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.558 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[50972e59-0a5e-4656-a32d-d0d92539fb76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.560 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d9ebc4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.560 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.561 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43d9ebc4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:29 compute-2 NetworkManager[48993]: <info>  [1764406229.5635] manager: (tap43d9ebc4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Nov 29 08:50:29 compute-2 kernel: tap43d9ebc4-80: entered promiscuous mode
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.567 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.569 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43d9ebc4-80, col_values=(('external_ids', {'iface-id': '11597451-36e9-418c-90e9-7e14b6b4a3a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.571 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 ovn_controller[134375]: 2025-11-29T08:50:29Z|00937|binding|INFO|Releasing lport 11597451-36e9-418c-90e9-7e14b6b4a3a5 from this chassis (sb_readonly=0)
Nov 29 08:50:29 compute-2 nova_compute[232428]: 2025-11-29 08:50:29.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.601 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43d9ebc4-86fe-4a98-9913-ad59ccd9ad79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43d9ebc4-86fe-4a98-9913-ad59ccd9ad79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.603 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[26857b74-2e79-455c-b211-68025132ff2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.604 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/43d9ebc4-86fe-4a98-9913-ad59ccd9ad79.pid.haproxy
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 43d9ebc4-86fe-4a98-9913-ad59ccd9ad79
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:50:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:29.606 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'env', 'PROCESS_TAG=haproxy-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43d9ebc4-86fe-4a98-9913-ad59ccd9ad79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:50:30 compute-2 podman[327565]: 2025-11-29 08:50:30.056515058 +0000 UTC m=+0.081258718 container create 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:50:30 compute-2 podman[327565]: 2025-11-29 08:50:30.010383793 +0000 UTC m=+0.035127513 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:50:30 compute-2 systemd[1]: Started libpod-conmon-564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19.scope.
Nov 29 08:50:30 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:50:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4dfe050f058f153ad3c91af353fcfe3a6411f2ced9db040e507d6d8d02a9c83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:50:30 compute-2 podman[327565]: 2025-11-29 08:50:30.193799638 +0000 UTC m=+0.218543348 container init 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:50:30 compute-2 podman[327565]: 2025-11-29 08:50:30.198884976 +0000 UTC m=+0.223628646 container start 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:50:30 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [NOTICE]   (327584) : New worker (327586) forked
Nov 29 08:50:30 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [NOTICE]   (327584) : Loading success.
Nov 29 08:50:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:50:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:30.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:50:30 compute-2 ceph-mon[77138]: pgmap v3446: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 45 KiB/s wr, 25 op/s
Nov 29 08:50:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2991538308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:31 compute-2 ovn_controller[134375]: 2025-11-29T08:50:31Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:6a:e8 10.100.0.27
Nov 29 08:50:31 compute-2 ovn_controller[134375]: 2025-11-29T08:50:31Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:6a:e8 10.100.0.27
Nov 29 08:50:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1372590960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.484 232432 DEBUG nova.compute.manager [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.485 232432 DEBUG oslo_concurrency.lockutils [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.485 232432 DEBUG oslo_concurrency.lockutils [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.485 232432 DEBUG oslo_concurrency.lockutils [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.486 232432 DEBUG nova.compute.manager [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:31 compute-2 nova_compute[232428]: 2025-11-29 08:50:31.486 232432 WARNING nova.compute.manager [req-7447d160-a593-4124-b0e2-329fff587f29 req-e13d0cb0-ef41-4109-9d9b-6f6d21fc4893 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a for instance with vm_state active and task_state None.
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.287 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.288 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.288 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.289 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.290 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:50:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:32.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.379 232432 DEBUG nova.network.neutron [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updated VIF entry in instance network info cache for port 6f147e13-c230-43cc-b999-71bff624665a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.381 232432 DEBUG nova.network.neutron [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.400 232432 DEBUG oslo_concurrency.lockutils [req-cfcce88c-bac3-49b4-92d2-e48ee9e84506 req-be53f42f-fb80-4d58-92e2-54230b7074a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.462 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:32 compute-2 ceph-mon[77138]: pgmap v3447: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 142 KiB/s rd, 33 KiB/s wr, 12 op/s
Nov 29 08:50:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:50:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/848878754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.751 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.905 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-6f147e13-c230-43cc-b999-71bff624665a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.906 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-6f147e13-c230-43cc-b999-71bff624665a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.931 232432 DEBUG nova.objects.instance [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'flavor' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.935 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.935 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.953 232432 DEBUG nova.virt.libvirt.vif [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.953 232432 DEBUG nova.network.os_vif_util [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.955 232432 DEBUG nova.network.os_vif_util [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.960 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.963 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.967 232432 DEBUG nova.virt.libvirt.driver [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Attempting to detach device tap6f147e13-c2 from instance 8520b12e-fae5-487c-afe3-0cf9b569d761 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.968 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:0b:6a:e8"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <target dev="tap6f147e13-c2"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]: </interface>
Nov 29 08:50:32 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.977 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.982 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface>not found in domain: <domain type='kvm' id='97'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <name>instance-000000c7</name>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <uuid>8520b12e-fae5-487c-afe3-0cf9b569d761</uuid>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-1429235908</nova:name>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:50:29</nova:creationTime>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:port uuid="fc56688b-60dd-4706-ac5a-9f0570b47503">
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <nova:port uuid="6f147e13-c230-43cc-b999-71bff624665a">
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:50:32 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <system>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='serial'>8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='uuid'>8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </system>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <os>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </os>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <features>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </features>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk' index='2'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       </source>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config' index='1'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       </source>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:fc:9b:a2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target dev='tapfc56688b-60'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:0b:6a:e8'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target dev='tap6f147e13-c2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='net1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log' append='off'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       </target>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log' append='off'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </console>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <video>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </video>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c233,c437</label>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c233,c437</imagelabel>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:50:32 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:50:32 compute-2 nova_compute[232428]: </domain>
Nov 29 08:50:32 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.982 232432 INFO nova.virt.libvirt.driver [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully detached device tap6f147e13-c2 from instance 8520b12e-fae5-487c-afe3-0cf9b569d761 from the persistent domain config.
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.983 232432 DEBUG nova.virt.libvirt.driver [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] (1/8): Attempting to detach device tap6f147e13-c2 with device alias net1 from instance 8520b12e-fae5-487c-afe3-0cf9b569d761 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:50:32 compute-2 nova_compute[232428]: 2025-11-29 08:50:32.983 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:0b:6a:e8"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]:   <target dev="tap6f147e13-c2"/>
Nov 29 08:50:32 compute-2 nova_compute[232428]: </interface>
Nov 29 08:50:32 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:50:33 compute-2 kernel: tap6f147e13-c2 (unregistering): left promiscuous mode
Nov 29 08:50:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:33.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:33 compute-2 NetworkManager[48993]: <info>  [1764406233.1262] device (tap6f147e13-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.134 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764406233.133882, 8520b12e-fae5-487c-afe3-0cf9b569d761 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.138 232432 DEBUG nova.virt.libvirt.driver [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Start waiting for the detach event from libvirt for device tap6f147e13-c2 with device alias net1 for instance 8520b12e-fae5-487c-afe3-0cf9b569d761 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:50:33 compute-2 ovn_controller[134375]: 2025-11-29T08:50:33Z|00938|binding|INFO|Releasing lport 6f147e13-c230-43cc-b999-71bff624665a from this chassis (sb_readonly=0)
Nov 29 08:50:33 compute-2 ovn_controller[134375]: 2025-11-29T08:50:33Z|00939|binding|INFO|Setting lport 6f147e13-c230-43cc-b999-71bff624665a down in Southbound
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.139 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:50:33 compute-2 ovn_controller[134375]: 2025-11-29T08:50:33Z|00940|binding|INFO|Removing iface tap6f147e13-c2 ovn-installed in OVS
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.141 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.149 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0b:6a:e8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap6f147e13-c2"/></interface>not found in domain: <domain type='kvm' id='97'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <name>instance-000000c7</name>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <uuid>8520b12e-fae5-487c-afe3-0cf9b569d761</uuid>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-1429235908</nova:name>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:50:29</nova:creationTime>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:port uuid="fc56688b-60dd-4706-ac5a-9f0570b47503">
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:port uuid="6f147e13-c230-43cc-b999-71bff624665a">
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:50:33 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <system>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='serial'>8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='uuid'>8520b12e-fae5-487c-afe3-0cf9b569d761</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </system>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <os>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </os>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <features>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </features>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk' index='2'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       </source>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/8520b12e-fae5-487c-afe3-0cf9b569d761_disk.config' index='1'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       </source>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:fc:9b:a2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target dev='tapfc56688b-60'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log' append='off'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       </target>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761/console.log' append='off'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </console>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </input>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <video>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </video>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c233,c437</label>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c233,c437</imagelabel>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:50:33 compute-2 nova_compute[232428]: </domain>
Nov 29 08:50:33 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.149 232432 INFO nova.virt.libvirt.driver [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully detached device tap6f147e13-c2 from instance 8520b12e-fae5-487c-afe3-0cf9b569d761 from the live domain config.
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.151 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:6a:e8 10.100.0.27'], port_security=['fa:16:3e:0b:6a:e8 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '8520b12e-fae5-487c-afe3-0cf9b569d761', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77935487-36f3-42ac-9707-dc086662769f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=6f147e13-c230-43cc-b999-71bff624665a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.151 232432 DEBUG nova.virt.libvirt.vif [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.151 232432 DEBUG nova.network.os_vif_util [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "6f147e13-c230-43cc-b999-71bff624665a", "address": "fa:16:3e:0b:6a:e8", "network": {"id": "43d9ebc4-86fe-4a98-9913-ad59ccd9ad79", "bridge": "br-int", "label": "tempest-network-smoke--190683003", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f147e13-c2", "ovs_interfaceid": "6f147e13-c230-43cc-b999-71bff624665a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.152 232432 DEBUG nova.network.os_vif_util [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.153 232432 DEBUG os_vif [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.156 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 6f147e13-c230-43cc-b999-71bff624665a in datapath 43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 unbound from our chassis
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.156 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.157 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f147e13-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.160 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.160 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.162 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.161 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4f834972-a5c9-4e3d-8714-341e2dab9808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.164 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 namespace which is not needed anymore
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.168 232432 INFO os_vif [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:6a:e8,bridge_name='br-int',has_traffic_filtering=True,id=6f147e13-c230-43cc-b999-71bff624665a,network=Network(43d9ebc4-86fe-4a98-9913-ad59ccd9ad79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f147e13-c2')
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.169 232432 DEBUG nova.virt.libvirt.guest [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-1429235908</nova:name>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:50:33</nova:creationTime>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     <nova:port uuid="fc56688b-60dd-4706-ac5a-9f0570b47503">
Nov 29 08:50:33 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:50:33 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:50:33 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:50:33 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:50:33 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:50:33 compute-2 virtqemud[231977]: An error occurred, but the cause is unknown
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.274 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.275 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3968MB free_disk=20.94256591796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.276 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.276 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:33 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [NOTICE]   (327584) : haproxy version is 2.8.14-c23fe91
Nov 29 08:50:33 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [NOTICE]   (327584) : path to executable is /usr/sbin/haproxy
Nov 29 08:50:33 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [WARNING]  (327584) : Exiting Master process...
Nov 29 08:50:33 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [ALERT]    (327584) : Current worker (327586) exited with code 143 (Terminated)
Nov 29 08:50:33 compute-2 neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79[327580]: [WARNING]  (327584) : All workers exited. Exiting... (0)
Nov 29 08:50:33 compute-2 systemd[1]: libpod-564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19.scope: Deactivated successfully.
Nov 29 08:50:33 compute-2 podman[327642]: 2025-11-29 08:50:33.319845681 +0000 UTC m=+0.043964568 container died 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:50:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19-userdata-shm.mount: Deactivated successfully.
Nov 29 08:50:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-b4dfe050f058f153ad3c91af353fcfe3a6411f2ced9db040e507d6d8d02a9c83-merged.mount: Deactivated successfully.
Nov 29 08:50:33 compute-2 podman[327642]: 2025-11-29 08:50:33.380347164 +0000 UTC m=+0.104466051 container cleanup 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:50:33 compute-2 systemd[1]: libpod-conmon-564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19.scope: Deactivated successfully.
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.417 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 8520b12e-fae5-487c-afe3-0cf9b569d761 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.418 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.418 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.461 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:50:33 compute-2 podman[327672]: 2025-11-29 08:50:33.479086034 +0000 UTC m=+0.057657474 container remove 564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.485 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2a3585-d31f-49de-baa8-6c31ca0659c3]: (4, ('Sat Nov 29 08:50:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 (564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19)\n564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19\nSat Nov 29 08:50:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 (564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19)\n564a0eaf4e18283c757c7792da081130ea4797f5904cc121b29cf2b80eb17a19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.486 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[297b299c-8da2-4237-88fc-ab0d49ebeb38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.487 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d9ebc4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:33 compute-2 kernel: tap43d9ebc4-80: left promiscuous mode
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.512 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[249ef3b6-e727-43ab-961d-29b55ca03ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.526 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[949887ba-5684-4a82-a148-025746498b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.527 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e34989-8c4d-4f4b-b9b6-54220ee36c15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.548 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e833c8d-4a51-4e16-8e18-ab4216ccbe56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919545, 'reachable_time': 17209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327688, 'error': None, 'target': 'ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.552 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43d9ebc4-86fe-4a98-9913-ad59ccd9ad79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:50:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:33.552 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[3ecc238b-0d8f-47e4-bdd8-958af6087d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:33 compute-2 systemd[1]: run-netns-ovnmeta\x2d43d9ebc4\x2d86fe\x2d4a98\x2d9913\x2dad59ccd9ad79.mount: Deactivated successfully.
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.581 232432 DEBUG nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-unplugged-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.582 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.582 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.582 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.582 232432 DEBUG nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-unplugged-6f147e13-c230-43cc-b999-71bff624665a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.582 232432 WARNING nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-unplugged-6f147e13-c230-43cc-b999-71bff624665a for instance with vm_state active and task_state None.
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 DEBUG nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 DEBUG oslo_concurrency.lockutils [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 DEBUG nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.583 232432 WARNING nova.compute.manager [req-b261e7c4-2066-4407-a128-d4d4ceef4edb req-34c840e2-3e3f-4957-a8d9-27e1225827d6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-plugged-6f147e13-c230-43cc-b999-71bff624665a for instance with vm_state active and task_state None.
Nov 29 08:50:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/848878754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.730 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.731 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.731 232432 DEBUG nova.network.neutron [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:50:33 compute-2 sudo[327708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:33 compute-2 sudo[327708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:33 compute-2 sudo[327708]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:33 compute-2 sudo[327739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:33 compute-2 sudo[327739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:33 compute-2 sudo[327739]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:50:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3828781751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:33 compute-2 podman[327732]: 2025-11-29 08:50:33.989768867 +0000 UTC m=+0.141995607 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 29 08:50:33 compute-2 nova_compute[232428]: 2025-11-29 08:50:33.999 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:50:34 compute-2 nova_compute[232428]: 2025-11-29 08:50:34.006 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:50:34 compute-2 nova_compute[232428]: 2025-11-29 08:50:34.027 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:50:34 compute-2 nova_compute[232428]: 2025-11-29 08:50:34.055 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:50:34 compute-2 nova_compute[232428]: 2025-11-29 08:50:34.056 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:34.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:34 compute-2 ceph-mon[77138]: pgmap v3448: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 27 KiB/s wr, 3 op/s
Nov 29 08:50:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3828781751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.077 232432 INFO nova.network.neutron [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Port 6f147e13-c230-43cc-b999-71bff624665a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.078 232432 DEBUG nova.network.neutron [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [{"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.096 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:35.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.131 232432 DEBUG oslo_concurrency.lockutils [None req-999bbcce-daa7-4ca9-a61b-d4cb26e3ebec 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-8520b12e-fae5-487c-afe3-0cf9b569d761-6f147e13-c230-43cc-b999-71bff624665a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:35 compute-2 ovn_controller[134375]: 2025-11-29T08:50:35Z|00941|binding|INFO|Releasing lport 5aeb81c5-8d05-4127-b095-49ab41849fe5 from this chassis (sb_readonly=0)
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:35 compute-2 nova_compute[232428]: 2025-11-29 08:50:35.671 232432 DEBUG nova.compute.manager [req-1b11f033-f621-4a29-baa6-12dd253831d1 req-b6e46540-7ba2-4ed6-9b55-12b739f77b1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-deleted-6f147e13-c230-43cc-b999-71bff624665a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:35 compute-2 ceph-mon[77138]: pgmap v3449: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 20 KiB/s wr, 5 op/s
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.002 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.002 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.003 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.003 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.004 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.006 232432 INFO nova.compute.manager [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Terminating instance
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.009 232432 DEBUG nova.compute.manager [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.056 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:36 compute-2 kernel: tapfc56688b-60 (unregistering): left promiscuous mode
Nov 29 08:50:36 compute-2 NetworkManager[48993]: <info>  [1764406236.0671] device (tapfc56688b-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:50:36 compute-2 ovn_controller[134375]: 2025-11-29T08:50:36Z|00942|binding|INFO|Releasing lport fc56688b-60dd-4706-ac5a-9f0570b47503 from this chassis (sb_readonly=0)
Nov 29 08:50:36 compute-2 ovn_controller[134375]: 2025-11-29T08:50:36Z|00943|binding|INFO|Setting lport fc56688b-60dd-4706-ac5a-9f0570b47503 down in Southbound
Nov 29 08:50:36 compute-2 ovn_controller[134375]: 2025-11-29T08:50:36Z|00944|binding|INFO|Removing iface tapfc56688b-60 ovn-installed in OVS
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.072 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.085 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:9b:a2 10.100.0.3'], port_security=['fa:16:3e:fc:9b:a2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8520b12e-fae5-487c-afe3-0cf9b569d761', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d7dedf4-5b50-4817-8e83-2bc5e963fef6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9e8934-e2c9-4c71-9edb-881a58049801, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fc56688b-60dd-4706-ac5a-9f0570b47503) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.087 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fc56688b-60dd-4706-ac5a-9f0570b47503 in datapath a3d2cdb4-1226-4823-8b1e-558c7decebb4 unbound from our chassis
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.089 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3d2cdb4-1226-4823-8b1e-558c7decebb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.090 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6f5e71-c157-420f-97f0-d09e79b58d25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.091 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4 namespace which is not needed anymore
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.110 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Nov 29 08:50:36 compute-2 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c7.scope: Consumed 16.556s CPU time.
Nov 29 08:50:36 compute-2 systemd-machined[194747]: Machine qemu-97-instance-000000c7 terminated.
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:50:36 compute-2 NetworkManager[48993]: <info>  [1764406236.2410] manager: (tapfc56688b-60): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.262 232432 INFO nova.virt.libvirt.driver [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance destroyed successfully.
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.262 232432 DEBUG nova.objects.instance [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 8520b12e-fae5-487c-afe3-0cf9b569d761 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.277 232432 DEBUG nova.virt.libvirt.vif [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1429235908',display_name='tempest-TestNetworkBasicOps-server-1429235908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1429235908',id=199,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5h3kV+vxxfQjo5yfo5LItqz0TEbZw8GTF+lcyvMlL4ERBtM/4OkmMPG3wxZUsTFQJbv7d117WCq71jMh6M6s8u+sXhGqHXUFC3IFmK5apwPJdmFunWkpWIuHb4cHshWA==',key_name='tempest-TestNetworkBasicOps-31307540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-rzonevl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=8520b12e-fae5-487c-afe3-0cf9b569d761,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.278 232432 DEBUG nova.network.os_vif_util [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fc56688b-60dd-4706-ac5a-9f0570b47503", "address": "fa:16:3e:fc:9b:a2", "network": {"id": "a3d2cdb4-1226-4823-8b1e-558c7decebb4", "bridge": "br-int", "label": "tempest-network-smoke--428872885", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc56688b-60", "ovs_interfaceid": "fc56688b-60dd-4706-ac5a-9f0570b47503", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.279 232432 DEBUG nova.network.os_vif_util [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.279 232432 DEBUG os_vif [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.284 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.285 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc56688b-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.291 232432 DEBUG nova.compute.manager [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-unplugged-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.292 232432 DEBUG oslo_concurrency.lockutils [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.292 232432 DEBUG oslo_concurrency.lockutils [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.292 232432 DEBUG oslo_concurrency.lockutils [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.293 232432 DEBUG nova.compute.manager [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-unplugged-fc56688b-60dd-4706-ac5a-9f0570b47503 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.293 232432 DEBUG nova.compute.manager [req-df5b2cfe-e7be-4cf0-8978-81fb9e13e3d1 req-1598573c-1489-4b09-87fc-d738fea6a9ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-unplugged-fc56688b-60dd-4706-ac5a-9f0570b47503 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.294 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.296 232432 INFO os_vif [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:9b:a2,bridge_name='br-int',has_traffic_filtering=True,id=fc56688b-60dd-4706-ac5a-9f0570b47503,network=Network(a3d2cdb4-1226-4823-8b1e-558c7decebb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc56688b-60')
Nov 29 08:50:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:36 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [NOTICE]   (327353) : haproxy version is 2.8.14-c23fe91
Nov 29 08:50:36 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [NOTICE]   (327353) : path to executable is /usr/sbin/haproxy
Nov 29 08:50:36 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [WARNING]  (327353) : Exiting Master process...
Nov 29 08:50:36 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [ALERT]    (327353) : Current worker (327355) exited with code 143 (Terminated)
Nov 29 08:50:36 compute-2 neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4[327327]: [WARNING]  (327353) : All workers exited. Exiting... (0)
Nov 29 08:50:36 compute-2 systemd[1]: libpod-4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a.scope: Deactivated successfully.
Nov 29 08:50:36 compute-2 podman[327808]: 2025-11-29 08:50:36.343470399 +0000 UTC m=+0.078944117 container died 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:50:36 compute-2 systemd[1]: var-lib-containers-storage-overlay-88c16ea50cfb68b1a4988684b596b5a93b2f315ddea4ce25fdf3adee12db7654-merged.mount: Deactivated successfully.
Nov 29 08:50:36 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:50:36 compute-2 podman[327808]: 2025-11-29 08:50:36.388168399 +0000 UTC m=+0.123642107 container cleanup 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:50:36 compute-2 systemd[1]: libpod-conmon-4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a.scope: Deactivated successfully.
Nov 29 08:50:36 compute-2 podman[327866]: 2025-11-29 08:50:36.478268581 +0000 UTC m=+0.059359537 container remove 4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.488 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa74d8d-04ec-4b26-8bdc-021af0189f0a]: (4, ('Sat Nov 29 08:50:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4 (4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a)\n4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a\nSat Nov 29 08:50:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4 (4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a)\n4499d6fe69e3f04dc714efffb8c4acf67ba4366a529647e2f56c23301aa46a2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.492 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[35b57e0b-babc-467d-9387-3b1563b7cb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.493 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3d2cdb4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 kernel: tapa3d2cdb4-10: left promiscuous mode
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.526 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[53decb45-d8e1-4ed1-99e1-5e32d7854809]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.540 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3cf160-c2c2-4ed2-88f6-f8c4fe81f74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.542 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8714e89e-08ca-49ce-8127-844a11a94723]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.567 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b300b803-dfbc-4c24-861d-381750237a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916458, 'reachable_time': 21213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327882, 'error': None, 'target': 'ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 systemd[1]: run-netns-ovnmeta\x2da3d2cdb4\x2d1226\x2d4823\x2d8b1e\x2d558c7decebb4.mount: Deactivated successfully.
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.570 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3d2cdb4-1226-4823-8b1e-558c7decebb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:50:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:50:36.570 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[0e39f0c0-9819-46b0-80b7-51550d44473f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.755 232432 INFO nova.virt.libvirt.driver [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Deleting instance files /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761_del
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.756 232432 INFO nova.virt.libvirt.driver [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Deletion of /var/lib/nova/instances/8520b12e-fae5-487c-afe3-0cf9b569d761_del complete
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.827 232432 INFO nova.compute.manager [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.827 232432 DEBUG oslo.service.loopingcall [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.828 232432 DEBUG nova.compute.manager [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:50:36 compute-2 nova_compute[232428]: 2025-11-29 08:50:36.828 232432 DEBUG nova.network.neutron [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:50:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2854528638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:37.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.716 232432 DEBUG nova.network.neutron [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.731 232432 INFO nova.compute.manager [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Took 0.90 seconds to deallocate network for instance.
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.778 232432 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.778 232432 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing instance network info cache due to event network-changed-fc56688b-60dd-4706-ac5a-9f0570b47503. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.779 232432 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.779 232432 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.780 232432 DEBUG nova.network.neutron [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Refreshing network info cache for port fc56688b-60dd-4706-ac5a-9f0570b47503 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.791 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.792 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:37 compute-2 nova_compute[232428]: 2025-11-29 08:50:37.845 232432 DEBUG oslo_concurrency.processutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.170 232432 DEBUG nova.network.neutron [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:50:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:50:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/362747389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:38 compute-2 ceph-mon[77138]: pgmap v3450: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 16 KiB/s wr, 5 op/s
Nov 29 08:50:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3791662641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.335 232432 DEBUG oslo_concurrency.processutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.344 232432 DEBUG nova.compute.provider_tree [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.382 232432 DEBUG nova.scheduler.client.report [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.412 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.449 232432 DEBUG nova.compute.manager [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.449 232432 DEBUG oslo_concurrency.lockutils [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.450 232432 DEBUG oslo_concurrency.lockutils [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.450 232432 DEBUG oslo_concurrency.lockutils [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.451 232432 DEBUG nova.compute.manager [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] No waiting events found dispatching network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.451 232432 WARNING nova.compute.manager [req-a4f19fbf-a256-40c2-932b-ac95c0cc93c1 req-d79d8d0c-ae7e-494b-b511-69b4749a001a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received unexpected event network-vif-plugged-fc56688b-60dd-4706-ac5a-9f0570b47503 for instance with vm_state deleted and task_state None.
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.473 232432 INFO nova.scheduler.client.report [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 8520b12e-fae5-487c-afe3-0cf9b569d761
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.557 232432 DEBUG oslo_concurrency.lockutils [None req-d3c48fd2-28a5-4325-859a-d2b201fb4b7b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "8520b12e-fae5-487c-afe3-0cf9b569d761" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.596 232432 DEBUG nova.network.neutron [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:50:38 compute-2 nova_compute[232428]: 2025-11-29 08:50:38.621 232432 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8520b12e-fae5-487c-afe3-0cf9b569d761" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:50:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/362747389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1581305535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:50:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1245714797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:50:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1245714797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:50:39 compute-2 nova_compute[232428]: 2025-11-29 08:50:39.910 232432 DEBUG nova.compute.manager [req-59aae2e2-2082-4358-8f4b-5b536ef05817 req-90ad283f-1479-4146-b243-0fa0ee83d0f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Received event network-vif-deleted-fc56688b-60dd-4706-ac5a-9f0570b47503 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:50:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:40 compute-2 ceph-mon[77138]: pgmap v3451: 305 pgs: 305 active+clean; 255 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 17 KiB/s wr, 18 op/s
Nov 29 08:50:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:41 compute-2 nova_compute[232428]: 2025-11-29 08:50:41.289 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:42 compute-2 nova_compute[232428]: 2025-11-29 08:50:42.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:42 compute-2 ceph-mon[77138]: pgmap v3452: 305 pgs: 305 active+clean; 141 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 16 KiB/s wr, 65 op/s
Nov 29 08:50:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:43 compute-2 sudo[327908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:43 compute-2 sudo[327908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:43 compute-2 sudo[327908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:43 compute-2 sudo[327933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:50:43 compute-2 sudo[327933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:43 compute-2 sudo[327933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:43 compute-2 sudo[327958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:43 compute-2 sudo[327958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:43 compute-2 sudo[327958]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:43 compute-2 sudo[327983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:50:43 compute-2 sudo[327983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:43 compute-2 podman[328007]: 2025-11-29 08:50:43.888719401 +0000 UTC m=+0.092859309 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 08:50:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:44 compute-2 sudo[327983]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:44 compute-2 ceph-mon[77138]: pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 13 KiB/s wr, 65 op/s
Nov 29 08:50:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:50:44 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:50:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:45.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:50:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:50:45 compute-2 ceph-mon[77138]: pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 8.0 KiB/s wr, 64 op/s
Nov 29 08:50:46 compute-2 nova_compute[232428]: 2025-11-29 08:50:46.095 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:46 compute-2 nova_compute[232428]: 2025-11-29 08:50:46.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:46 compute-2 nova_compute[232428]: 2025-11-29 08:50:46.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:47.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:47 compute-2 nova_compute[232428]: 2025-11-29 08:50:47.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:48.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:48 compute-2 ceph-mon[77138]: pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 08:50:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:49.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:50 compute-2 ceph-mon[77138]: pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 08:50:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:51 compute-2 nova_compute[232428]: 2025-11-29 08:50:51.259 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406236.2580357, 8520b12e-fae5-487c-afe3-0cf9b569d761 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:50:51 compute-2 nova_compute[232428]: 2025-11-29 08:50:51.260 232432 INFO nova.compute.manager [-] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] VM Stopped (Lifecycle Event)
Nov 29 08:50:51 compute-2 nova_compute[232428]: 2025-11-29 08:50:51.292 232432 DEBUG nova.compute.manager [None req-205af92a-7c86-4050-be42-89600fae9efd - - - - - -] [instance: 8520b12e-fae5-487c-afe3-0cf9b569d761] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:50:51 compute-2 nova_compute[232428]: 2025-11-29 08:50:51.294 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:51 compute-2 sudo[328062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:51 compute-2 sudo[328062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:51 compute-2 sudo[328062]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:51 compute-2 sudo[328087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:50:51 compute-2 sudo[328087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:51 compute-2 sudo[328087]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:52 compute-2 nova_compute[232428]: 2025-11-29 08:50:52.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:50:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:50:52 compute-2 ceph-mon[77138]: pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 2.0 KiB/s wr, 47 op/s
Nov 29 08:50:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:53.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:53 compute-2 ceph-mon[77138]: pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Nov 29 08:50:54 compute-2 sudo[328114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:54 compute-2 sudo[328114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:54 compute-2 sudo[328114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:54 compute-2 sudo[328140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:50:54 compute-2 sudo[328140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:50:54 compute-2 sudo[328140]: pam_unix(sudo:session): session closed for user root
Nov 29 08:50:54 compute-2 podman[328138]: 2025-11-29 08:50:54.184974314 +0000 UTC m=+0.101804677 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:50:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:50:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:50:56 compute-2 ceph-mon[77138]: pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:50:56 compute-2 nova_compute[232428]: 2025-11-29 08:50:56.297 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:50:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:50:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:57 compute-2 nova_compute[232428]: 2025-11-29 08:50:57.473 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:50:58 compute-2 ceph-mon[77138]: pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:50:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:50:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:50:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:50:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:50:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:00 compute-2 ceph-mon[77138]: pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:51:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:01 compute-2 nova_compute[232428]: 2025-11-29 08:51:01.301 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:02.360 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:51:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:02.361 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:51:02 compute-2 nova_compute[232428]: 2025-11-29 08:51:02.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:02 compute-2 ceph-mon[77138]: pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:51:02 compute-2 nova_compute[232428]: 2025-11-29 08:51:02.477 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:03.355 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:03.356 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:03.356 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:03 compute-2 nova_compute[232428]: 2025-11-29 08:51:03.926 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:03 compute-2 nova_compute[232428]: 2025-11-29 08:51:03.927 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:03 compute-2 nova_compute[232428]: 2025-11-29 08:51:03.946 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.039 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.039 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.050 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.050 232432 INFO nova.compute.claims [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.226 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:04 compute-2 ceph-mon[77138]: pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:51:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:51:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/209175919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.743 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:04 compute-2 podman[328210]: 2025-11-29 08:51:04.756634043 +0000 UTC m=+0.152101432 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.756 232432 DEBUG nova.compute.provider_tree [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.832 232432 DEBUG nova.scheduler.client.report [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.906 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.907 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.973 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.974 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:51:04 compute-2 nova_compute[232428]: 2025-11-29 08:51:04.995 232432 INFO nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.014 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.102 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.103 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.104 232432 INFO nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Creating image(s)
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.143 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.186 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.225 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.231 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.296 232432 DEBUG nova.policy [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.336 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.337 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.339 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.340 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.378 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.384 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7702bbc1-7949-4182-8b0f-e338d38a1269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/209175919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.776 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7702bbc1-7949-4182-8b0f-e338d38a1269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:05 compute-2 nova_compute[232428]: 2025-11-29 08:51:05.898 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.055 232432 DEBUG nova.objects.instance [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 7702bbc1-7949-4182-8b0f-e338d38a1269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.092 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.093 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Ensure instance console log exists: /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.094 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.095 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.095 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.278 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Successfully created port: fd43a273-a708-4e86-9c8d-9ad34d9d9dac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:51:06 compute-2 nova_compute[232428]: 2025-11-29 08:51:06.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:06 compute-2 ceph-mon[77138]: pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:51:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:07.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.382 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Successfully updated port: fd43a273-a708-4e86-9c8d-9ad34d9d9dac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.417 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.417 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.418 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.479 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.509 232432 DEBUG nova.compute.manager [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.509 232432 DEBUG nova.compute.manager [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing instance network info cache due to event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.510 232432 DEBUG oslo_concurrency.lockutils [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:51:07 compute-2 nova_compute[232428]: 2025-11-29 08:51:07.552 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:51:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.455 232432 DEBUG nova.network.neutron [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.475 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.475 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Instance network_info: |[{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.476 232432 DEBUG oslo_concurrency.lockutils [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.476 232432 DEBUG nova.network.neutron [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.483 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Start _get_guest_xml network_info=[{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.491 232432 WARNING nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.498 232432 DEBUG nova.virt.libvirt.host [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.499 232432 DEBUG nova.virt.libvirt.host [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.509 232432 DEBUG nova.virt.libvirt.host [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.509 232432 DEBUG nova.virt.libvirt.host [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.511 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.512 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.512 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.513 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.513 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.513 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.514 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.514 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.514 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.515 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.515 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.516 232432 DEBUG nova.virt.hardware [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:51:08 compute-2 nova_compute[232428]: 2025-11-29 08:51:08.520 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:08 compute-2 ceph-mon[77138]: pgmap v3465: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:51:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:51:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2023472573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.003 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.047 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.054 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:51:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2945347860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.469 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.473 232432 DEBUG nova.virt.libvirt.vif [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-698503088',display_name='tempest-TestNetworkBasicOps-server-698503088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-698503088',id=200,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgyHnORnOauL4leJ7s/Xm8nQo2pkbiEkcEbpf4KQbTIUr4R9BLA7L3pENbp2D3341bdfN0NTaOcR8nR9p3T5QwiJq563ZWaxxerHHvp7NQE0WTAlSbA3aVV/IKiTmyrMw==',key_name='tempest-TestNetworkBasicOps-186055993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-hvtpraa0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:05Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=7702bbc1-7949-4182-8b0f-e338d38a1269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.474 232432 DEBUG nova.network.os_vif_util [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.476 232432 DEBUG nova.network.os_vif_util [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.478 232432 DEBUG nova.objects.instance [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7702bbc1-7949-4182-8b0f-e338d38a1269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.502 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <uuid>7702bbc1-7949-4182-8b0f-e338d38a1269</uuid>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <name>instance-000000c8</name>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-698503088</nova:name>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:51:08</nova:creationTime>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <nova:port uuid="fd43a273-a708-4e86-9c8d-9ad34d9d9dac">
Nov 29 08:51:09 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <system>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="serial">7702bbc1-7949-4182-8b0f-e338d38a1269</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="uuid">7702bbc1-7949-4182-8b0f-e338d38a1269</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </system>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <os>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </os>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <features>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </features>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7702bbc1-7949-4182-8b0f-e338d38a1269_disk">
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config">
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:51:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:77:3f:70"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <target dev="tapfd43a273-a7"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/console.log" append="off"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <video>
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </video>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:51:09 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:51:09 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:51:09 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:51:09 compute-2 nova_compute[232428]: </domain>
Nov 29 08:51:09 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.504 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Preparing to wait for external event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.505 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.506 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.506 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.507 232432 DEBUG nova.virt.libvirt.vif [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-698503088',display_name='tempest-TestNetworkBasicOps-server-698503088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-698503088',id=200,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgyHnORnOauL4leJ7s/Xm8nQo2pkbiEkcEbpf4KQbTIUr4R9BLA7L3pENbp2D3341bdfN0NTaOcR8nR9p3T5QwiJq563ZWaxxerHHvp7NQE0WTAlSbA3aVV/IKiTmyrMw==',key_name='tempest-TestNetworkBasicOps-186055993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-hvtpraa0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:05Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=7702bbc1-7949-4182-8b0f-e338d38a1269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.508 232432 DEBUG nova.network.os_vif_util [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.509 232432 DEBUG nova.network.os_vif_util [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.510 232432 DEBUG os_vif [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.512 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.513 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.514 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.521 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd43a273-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.522 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd43a273-a7, col_values=(('external_ids', {'iface-id': 'fd43a273-a708-4e86-9c8d-9ad34d9d9dac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:3f:70', 'vm-uuid': '7702bbc1-7949-4182-8b0f-e338d38a1269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:09 compute-2 NetworkManager[48993]: <info>  [1764406269.5255] manager: (tapfd43a273-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.538 232432 INFO os_vif [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7')
Nov 29 08:51:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2023472573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2945347860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.630 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.631 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.631 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:77:3f:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.632 232432 INFO nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Using config drive
Nov 29 08:51:09 compute-2 nova_compute[232428]: 2025-11-29 08:51:09.674 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.132 232432 DEBUG nova.network.neutron [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updated VIF entry in instance network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.135 232432 DEBUG nova.network.neutron [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.152 232432 DEBUG oslo_concurrency.lockutils [req-96398a38-f47d-49b8-99cd-d5cceea271c1 req-47157231-292d-40ea-9383-867dfaf19f5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.330 232432 INFO nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Creating config drive at /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config
Nov 29 08:51:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:10.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.339 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdepqlqq7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.502 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdepqlqq7" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.553 232432 DEBUG nova.storage.rbd_utils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.559 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config 7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:10 compute-2 ceph-mon[77138]: pgmap v3466: 305 pgs: 305 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.6 KiB/s rd, 528 KiB/s wr, 12 op/s
Nov 29 08:51:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.814 232432 DEBUG oslo_concurrency.processutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config 7702bbc1-7949-4182-8b0f-e338d38a1269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.816 232432 INFO nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Deleting local config drive /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269/disk.config because it was imported into RBD.
Nov 29 08:51:10 compute-2 kernel: tapfd43a273-a7: entered promiscuous mode
Nov 29 08:51:10 compute-2 NetworkManager[48993]: <info>  [1764406270.8979] manager: (tapfd43a273-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/449)
Nov 29 08:51:10 compute-2 ovn_controller[134375]: 2025-11-29T08:51:10Z|00945|binding|INFO|Claiming lport fd43a273-a708-4e86-9c8d-9ad34d9d9dac for this chassis.
Nov 29 08:51:10 compute-2 ovn_controller[134375]: 2025-11-29T08:51:10Z|00946|binding|INFO|fd43a273-a708-4e86-9c8d-9ad34d9d9dac: Claiming fa:16:3e:77:3f:70 10.100.0.9
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.903 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:10 compute-2 nova_compute[232428]: 2025-11-29 08:51:10.914 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.925 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:3f:70 10.100.0.9'], port_security=['fa:16:3e:77:3f:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7702bbc1-7949-4182-8b0f-e338d38a1269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '064b1bf7-39d0-4170-ba25-f3de0111ddc7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=167c3583-8559-4b69-8d8b-b842ba19be53, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fd43a273-a708-4e86-9c8d-9ad34d9d9dac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.926 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fd43a273-a708-4e86-9c8d-9ad34d9d9dac in datapath f41ffe1c-22ed-416f-b3d7-fe073a4b4077 bound to our chassis
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.927 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f41ffe1c-22ed-416f-b3d7-fe073a4b4077
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.947 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[188d419e-7708-4e20-b1e1-04df4a910013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.948 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf41ffe1c-21 in ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.950 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf41ffe1c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.950 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4c65904c-0442-4ff0-9440-8153318e9131]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.951 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5a057b15-506c-4ec5-b4d6-ebf1730468a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:10 compute-2 systemd-machined[194747]: New machine qemu-98-instance-000000c8.
Nov 29 08:51:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:10.973 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[ba455113-4402-43ce-aaae-d0d43335bf91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:10 compute-2 systemd[1]: Started Virtual Machine qemu-98-instance-000000c8.
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.004 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1d103fee-04e4-46eb-bd3f-7feec9835c9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 systemd-udevd[328548]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.016 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 ovn_controller[134375]: 2025-11-29T08:51:11Z|00947|binding|INFO|Setting lport fd43a273-a708-4e86-9c8d-9ad34d9d9dac ovn-installed in OVS
Nov 29 08:51:11 compute-2 ovn_controller[134375]: 2025-11-29T08:51:11Z|00948|binding|INFO|Setting lport fd43a273-a708-4e86-9c8d-9ad34d9d9dac up in Southbound
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.025 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 NetworkManager[48993]: <info>  [1764406271.0371] device (tapfd43a273-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:51:11 compute-2 NetworkManager[48993]: <info>  [1764406271.0401] device (tapfd43a273-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.045 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec7d470-b88e-4b49-8d76-6f2952648499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.053 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a13e6742-7693-40d1-b385-d968fc0bcfd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 NetworkManager[48993]: <info>  [1764406271.0560] manager: (tapf41ffe1c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/450)
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.111 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f787237e-77dd-402a-a99e-b481ef250882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.114 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[705ef60d-1d51-430c-8a21-a05ebd596c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 NetworkManager[48993]: <info>  [1764406271.1520] device (tapf41ffe1c-20): carrier: link connected
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.162 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[436a9172-bfd5-474c-9a2a-91dece43d634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.187 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab7bc81-e2d3-461e-abdb-2603fa74b4af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf41ffe1c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:62:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 289], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923742, 'reachable_time': 44479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328577, 'error': None, 'target': 'ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.212 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e156dba5-e3e9-4b6c-8305-156281bf527f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:628e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 923742, 'tstamp': 923742}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328578, 'error': None, 'target': 'ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.238 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[61f88f02-49fa-4ae4-b0c3-54d6ebe1ee5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf41ffe1c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:62:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 289], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923742, 'reachable_time': 44479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328579, 'error': None, 'target': 'ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.292 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9483baaf-b305-4313-9cc4-cad848bef898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.313 232432 DEBUG nova.compute.manager [req-3ae35306-0710-4e9f-897f-d00e1af4b2fe req-2d6f1116-0b40-4deb-819d-604d98dd1501 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.313 232432 DEBUG oslo_concurrency.lockutils [req-3ae35306-0710-4e9f-897f-d00e1af4b2fe req-2d6f1116-0b40-4deb-819d-604d98dd1501 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.313 232432 DEBUG oslo_concurrency.lockutils [req-3ae35306-0710-4e9f-897f-d00e1af4b2fe req-2d6f1116-0b40-4deb-819d-604d98dd1501 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.314 232432 DEBUG oslo_concurrency.lockutils [req-3ae35306-0710-4e9f-897f-d00e1af4b2fe req-2d6f1116-0b40-4deb-819d-604d98dd1501 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.314 232432 DEBUG nova.compute.manager [req-3ae35306-0710-4e9f-897f-d00e1af4b2fe req-2d6f1116-0b40-4deb-819d-604d98dd1501 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Processing event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.387 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0c21338c-3fcb-4072-896e-db6e52001aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.389 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf41ffe1c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.390 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.390 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf41ffe1c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.393 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 NetworkManager[48993]: <info>  [1764406271.3941] manager: (tapf41ffe1c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Nov 29 08:51:11 compute-2 kernel: tapf41ffe1c-20: entered promiscuous mode
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.398 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.400 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf41ffe1c-20, col_values=(('external_ids', {'iface-id': '59355777-532b-4880-8ace-46fa32341f85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.402 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 ovn_controller[134375]: 2025-11-29T08:51:11Z|00949|binding|INFO|Releasing lport 59355777-532b-4880-8ace-46fa32341f85 from this chassis (sb_readonly=0)
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.432 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.433 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f41ffe1c-22ed-416f-b3d7-fe073a4b4077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f41ffe1c-22ed-416f-b3d7-fe073a4b4077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.434 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ce977109-962c-4f92-b505-a424f42a462a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.435 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-f41ffe1c-22ed-416f-b3d7-fe073a4b4077
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/f41ffe1c-22ed-416f-b3d7-fe073a4b4077.pid.haproxy
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID f41ffe1c-22ed-416f-b3d7-fe073a4b4077
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:51:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:11.438 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'env', 'PROCESS_TAG=haproxy-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f41ffe1c-22ed-416f-b3d7-fe073a4b4077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.903 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406271.9032032, 7702bbc1-7949-4182-8b0f-e338d38a1269 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.904 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] VM Started (Lifecycle Event)
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.907 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.913 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.917 232432 INFO nova.virt.libvirt.driver [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Instance spawned successfully.
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.918 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.944 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:51:11 compute-2 podman[328652]: 2025-11-29 08:51:11.951876512 +0000 UTC m=+0.099811865 container create dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.953 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.958 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.959 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.960 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.961 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.962 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.963 232432 DEBUG nova.virt.libvirt.driver [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:51:11 compute-2 podman[328652]: 2025-11-29 08:51:11.900659429 +0000 UTC m=+0.048594722 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.996 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.997 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406271.904366, 7702bbc1-7949-4182-8b0f-e338d38a1269 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:51:11 compute-2 nova_compute[232428]: 2025-11-29 08:51:11.998 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] VM Paused (Lifecycle Event)
Nov 29 08:51:12 compute-2 systemd[1]: Started libpod-conmon-dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a.scope.
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.029 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.034 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406271.912909, 7702bbc1-7949-4182-8b0f-e338d38a1269 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.036 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] VM Resumed (Lifecycle Event)
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.041 232432 INFO nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Took 6.94 seconds to spawn the instance on the hypervisor.
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.042 232432 DEBUG nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:51:12 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.060 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:51:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da51cd7354831b34294c7c3d43b316e1c1eb5a3b2ed82b9d3c22108ab61bb9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.065 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:51:12 compute-2 podman[328652]: 2025-11-29 08:51:12.089942667 +0000 UTC m=+0.237877970 container init dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.094 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:51:12 compute-2 podman[328652]: 2025-11-29 08:51:12.101212607 +0000 UTC m=+0.249147850 container start dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.138 232432 INFO nova.compute.manager [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Took 8.13 seconds to build instance.
Nov 29 08:51:12 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [NOTICE]   (328673) : New worker (328675) forked
Nov 29 08:51:12 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [NOTICE]   (328673) : Loading success.
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.158 232432 DEBUG oslo_concurrency.lockutils [None req-cee4f05e-6047-4240-95b9-4c2d0faa6453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:12.363 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:12 compute-2 nova_compute[232428]: 2025-11-29 08:51:12.481 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:12 compute-2 ceph-mon[77138]: pgmap v3467: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:51:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.478 232432 DEBUG nova.compute.manager [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.479 232432 DEBUG oslo_concurrency.lockutils [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.480 232432 DEBUG oslo_concurrency.lockutils [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.480 232432 DEBUG oslo_concurrency.lockutils [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.481 232432 DEBUG nova.compute.manager [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] No waiting events found dispatching network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:51:13 compute-2 nova_compute[232428]: 2025-11-29 08:51:13.482 232432 WARNING nova.compute.manager [req-e3baf711-8eb2-4319-a5f0-90090a97b86c req-d7a5d28a-b2e0-4c08-aa24-d7a7e069f6ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received unexpected event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac for instance with vm_state active and task_state None.
Nov 29 08:51:14 compute-2 sudo[328686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:14 compute-2 sudo[328686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:14 compute-2 sudo[328686]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:14 compute-2 sudo[328712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:14 compute-2 sudo[328712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:14 compute-2 sudo[328712]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:14 compute-2 podman[328710]: 2025-11-29 08:51:14.42826145 +0000 UTC m=+0.104708147 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 08:51:14 compute-2 nova_compute[232428]: 2025-11-29 08:51:14.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:14 compute-2 ceph-mon[77138]: pgmap v3468: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 08:51:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:16 compute-2 ceph-mon[77138]: pgmap v3469: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 08:51:17 compute-2 NetworkManager[48993]: <info>  [1764406277.0475] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 29 08:51:17 compute-2 NetworkManager[48993]: <info>  [1764406277.0490] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.046 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:17 compute-2 ovn_controller[134375]: 2025-11-29T08:51:17Z|00950|binding|INFO|Releasing lport 59355777-532b-4880-8ace-46fa32341f85 from this chassis (sb_readonly=0)
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.222 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.468 232432 DEBUG nova.compute.manager [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.468 232432 DEBUG nova.compute.manager [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing instance network info cache due to event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.469 232432 DEBUG oslo_concurrency.lockutils [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.469 232432 DEBUG oslo_concurrency.lockutils [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.470 232432 DEBUG nova.network.neutron [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:51:17 compute-2 nova_compute[232428]: 2025-11-29 08:51:17.483 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:18 compute-2 ceph-mon[77138]: pgmap v3470: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 08:51:18 compute-2 nova_compute[232428]: 2025-11-29 08:51:18.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:18 compute-2 nova_compute[232428]: 2025-11-29 08:51:18.520 232432 DEBUG nova.network.neutron [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updated VIF entry in instance network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:51:18 compute-2 nova_compute[232428]: 2025-11-29 08:51:18.521 232432 DEBUG nova.network.neutron [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:18 compute-2 nova_compute[232428]: 2025-11-29 08:51:18.537 232432 DEBUG oslo_concurrency.lockutils [req-6db55d1d-2ba2-4f1f-805a-af0c2d8754ef req-93c9d898-1303-4423-86b5-e15c0f519692 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:51:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:19 compute-2 nova_compute[232428]: 2025-11-29 08:51:19.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:20 compute-2 ceph-mon[77138]: pgmap v3471: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 08:51:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:20.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:21.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:22 compute-2 ceph-mon[77138]: pgmap v3472: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Nov 29 08:51:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1206935863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:22 compute-2 nova_compute[232428]: 2025-11-29 08:51:22.485 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:23 compute-2 nova_compute[232428]: 2025-11-29 08:51:23.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:24 compute-2 ceph-mon[77138]: pgmap v3473: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 681 KiB/s wr, 85 op/s
Nov 29 08:51:24 compute-2 nova_compute[232428]: 2025-11-29 08:51:24.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:24 compute-2 podman[328759]: 2025-11-29 08:51:24.71796565 +0000 UTC m=+0.109290330 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 08:51:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:51:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.381 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.382 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.382 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:51:26 compute-2 nova_compute[232428]: 2025-11-29 08:51:26.382 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7702bbc1-7949-4182-8b0f-e338d38a1269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:51:26 compute-2 ceph-mon[77138]: pgmap v3474: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Nov 29 08:51:26 compute-2 ovn_controller[134375]: 2025-11-29T08:51:26Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:3f:70 10.100.0.9
Nov 29 08:51:26 compute-2 ovn_controller[134375]: 2025-11-29T08:51:26Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:3f:70 10.100.0.9
Nov 29 08:51:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:27 compute-2 nova_compute[232428]: 2025-11-29 08:51:27.488 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:51:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2114564151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:51:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:51:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2114564151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:51:28 compute-2 nova_compute[232428]: 2025-11-29 08:51:28.349 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:28 compute-2 nova_compute[232428]: 2025-11-29 08:51:28.380 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:51:28 compute-2 nova_compute[232428]: 2025-11-29 08:51:28.380 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:51:28 compute-2 nova_compute[232428]: 2025-11-29 08:51:28.381 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:28 compute-2 ceph-mon[77138]: pgmap v3475: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Nov 29 08:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1202105207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2114564151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2114564151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:51:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2013506896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:51:29 compute-2 nova_compute[232428]: 2025-11-29 08:51:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:29 compute-2 nova_compute[232428]: 2025-11-29 08:51:29.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:29 compute-2 nova_compute[232428]: 2025-11-29 08:51:29.533 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:30 compute-2 ceph-mon[77138]: pgmap v3476: 305 pgs: 305 active+clean; 239 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 3.8 MiB/s wr, 80 op/s
Nov 29 08:51:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:31.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3498394224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:51:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Cumulative writes: 15K writes, 76K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1546 writes, 7309 keys, 1546 commit groups, 1.0 writes per commit group, ingest: 15.72 MB, 0.03 MB/s
                                           Interval WAL: 1547 writes, 1547 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.5      2.09              0.43        47    0.044       0      0       0.0       0.0
                                             L6      1/0   12.49 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.1     86.1     73.6      6.41              1.66        46    0.139    351K    24K       0.0       0.0
                                            Sum      1/0   12.49 MB   0.0      0.5     0.1      0.4       0.6      0.1       0.0   6.1     65.0     66.5      8.49              2.09        93    0.091    351K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5    102.6    105.0      0.65              0.24        10    0.065     51K   2590       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0     86.1     73.6      6.41              1.66        46    0.139    351K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     46.4      2.00              0.43        46    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.091, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.55 GB write, 0.09 MB/s write, 0.54 GB read, 0.09 MB/s read, 8.5 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 63.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000469 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3502,61.36 MB,20.1831%) FilterBlock(93,965.23 KB,0.31007%) IndexBlock(93,1.56 MB,0.513915%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.236 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.236 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.321 232432 INFO nova.compute.manager [None req-ebddb240-f6e9-41b4-a435-2e94bd9c569a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Get console output
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.333 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:51:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.490 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:32 compute-2 ceph-mon[77138]: pgmap v3477: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 08:51:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3463144033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:51:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2099741164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.777 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.810 232432 INFO nova.compute.manager [None req-8e9578f9-034f-4011-ba15-00bf55dbad44 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Get console output
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.816 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.863 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:51:32 compute-2 nova_compute[232428]: 2025-11-29 08:51:32.863 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.089 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.091 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4000MB free_disk=20.922138214111328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.091 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.092 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.176 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 7702bbc1-7949-4182-8b0f-e338d38a1269 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.176 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.177 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:51:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.233 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:51:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1449608723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.715 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.723 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:51:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2099741164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1449608723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.809 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.832 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.832 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.997 232432 DEBUG nova.compute.manager [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.997 232432 DEBUG nova.compute.manager [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing instance network info cache due to event network-changed-fd43a273-a708-4e86-9c8d-9ad34d9d9dac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.998 232432 DEBUG oslo_concurrency.lockutils [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.999 232432 DEBUG oslo_concurrency.lockutils [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:51:33 compute-2 nova_compute[232428]: 2025-11-29 08:51:33.999 232432 DEBUG nova.network.neutron [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Refreshing network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.107 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.107 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.108 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.108 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.108 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.109 232432 INFO nova.compute.manager [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Terminating instance
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.110 232432 DEBUG nova.compute.manager [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:51:34 compute-2 kernel: tapfd43a273-a7 (unregistering): left promiscuous mode
Nov 29 08:51:34 compute-2 NetworkManager[48993]: <info>  [1764406294.1845] device (tapfd43a273-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:51:34 compute-2 ovn_controller[134375]: 2025-11-29T08:51:34Z|00951|binding|INFO|Releasing lport fd43a273-a708-4e86-9c8d-9ad34d9d9dac from this chassis (sb_readonly=0)
Nov 29 08:51:34 compute-2 ovn_controller[134375]: 2025-11-29T08:51:34Z|00952|binding|INFO|Setting lport fd43a273-a708-4e86-9c8d-9ad34d9d9dac down in Southbound
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.200 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 ovn_controller[134375]: 2025-11-29T08:51:34Z|00953|binding|INFO|Removing iface tapfd43a273-a7 ovn-installed in OVS
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.205 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.209 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:3f:70 10.100.0.9'], port_security=['fa:16:3e:77:3f:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7702bbc1-7949-4182-8b0f-e338d38a1269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '064b1bf7-39d0-4170-ba25-f3de0111ddc7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=167c3583-8559-4b69-8d8b-b842ba19be53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=fd43a273-a708-4e86-9c8d-9ad34d9d9dac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.215 143801 INFO neutron.agent.ovn.metadata.agent [-] Port fd43a273-a708-4e86-9c8d-9ad34d9d9dac in datapath f41ffe1c-22ed-416f-b3d7-fe073a4b4077 unbound from our chassis
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.218 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f41ffe1c-22ed-416f-b3d7-fe073a4b4077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.220 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b0e3f4-764d-4368-b8ae-ef63b8c5de2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.221 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077 namespace which is not needed anymore
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Nov 29 08:51:34 compute-2 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c8.scope: Consumed 15.155s CPU time.
Nov 29 08:51:34 compute-2 systemd-machined[194747]: Machine qemu-98-instance-000000c8 terminated.
Nov 29 08:51:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.357 232432 INFO nova.virt.libvirt.driver [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Instance destroyed successfully.
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.358 232432 DEBUG nova.objects.instance [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 7702bbc1-7949-4182-8b0f-e338d38a1269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.394 232432 DEBUG nova.virt.libvirt.vif [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-698503088',display_name='tempest-TestNetworkBasicOps-server-698503088',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-698503088',id=200,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgyHnORnOauL4leJ7s/Xm8nQo2pkbiEkcEbpf4KQbTIUr4R9BLA7L3pENbp2D3341bdfN0NTaOcR8nR9p3T5QwiJq563ZWaxxerHHvp7NQE0WTAlSbA3aVV/IKiTmyrMw==',key_name='tempest-TestNetworkBasicOps-186055993',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:51:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-hvtpraa0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:51:12Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=7702bbc1-7949-4182-8b0f-e338d38a1269,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.396 232432 DEBUG nova.network.os_vif_util [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.397 232432 DEBUG nova.network.os_vif_util [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.398 232432 DEBUG os_vif [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.401 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.402 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd43a273-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.414 232432 INFO os_vif [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:3f:70,bridge_name='br-int',has_traffic_filtering=True,id=fd43a273-a708-4e86-9c8d-9ad34d9d9dac,network=Network(f41ffe1c-22ed-416f-b3d7-fe073a4b4077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd43a273-a7')
Nov 29 08:51:34 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [NOTICE]   (328673) : haproxy version is 2.8.14-c23fe91
Nov 29 08:51:34 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [NOTICE]   (328673) : path to executable is /usr/sbin/haproxy
Nov 29 08:51:34 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [WARNING]  (328673) : Exiting Master process...
Nov 29 08:51:34 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [ALERT]    (328673) : Current worker (328675) exited with code 143 (Terminated)
Nov 29 08:51:34 compute-2 neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077[328669]: [WARNING]  (328673) : All workers exited. Exiting... (0)
Nov 29 08:51:34 compute-2 systemd[1]: libpod-dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a.scope: Deactivated successfully.
Nov 29 08:51:34 compute-2 podman[328857]: 2025-11-29 08:51:34.442396459 +0000 UTC m=+0.077488361 container died dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:51:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-8da51cd7354831b34294c7c3d43b316e1c1eb5a3b2ed82b9d3c22108ab61bb9e-merged.mount: Deactivated successfully.
Nov 29 08:51:34 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:51:34 compute-2 podman[328857]: 2025-11-29 08:51:34.504019685 +0000 UTC m=+0.139111537 container cleanup dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:51:34 compute-2 systemd[1]: libpod-conmon-dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a.scope: Deactivated successfully.
Nov 29 08:51:34 compute-2 sudo[328884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:34 compute-2 sudo[328884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:34 compute-2 sudo[328884]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:34 compute-2 podman[328931]: 2025-11-29 08:51:34.590071702 +0000 UTC m=+0.058012836 container remove dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.596 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6ac439-fc79-47dd-a373-d2c2a4634fbc]: (4, ('Sat Nov 29 08:51:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077 (dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a)\ndcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a\nSat Nov 29 08:51:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077 (dcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a)\ndcdb226053de4b919bde3328c23dca5c29e2ac40c7a13c312ca10fd3db89ab6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.598 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1911f570-9ba9-4a18-bf79-a9a007c28706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.599 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf41ffe1c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:34 compute-2 kernel: tapf41ffe1c-20: left promiscuous mode
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.601 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.609 232432 DEBUG nova.compute.manager [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-unplugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.609 232432 DEBUG oslo_concurrency.lockutils [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.610 232432 DEBUG oslo_concurrency.lockutils [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.610 232432 DEBUG oslo_concurrency.lockutils [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.610 232432 DEBUG nova.compute.manager [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] No waiting events found dispatching network-vif-unplugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.611 232432 DEBUG nova.compute.manager [req-7ec41d08-d64c-491b-a380-6fa2ff6e0628 req-54ec11a8-d86d-4e45-8a92-ce3f53745247 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-unplugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:51:34 compute-2 nova_compute[232428]: 2025-11-29 08:51:34.619 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:34 compute-2 sudo[328944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:34 compute-2 sudo[328944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.624 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc88a10-8789-482e-afc0-1307c9ee6a7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 sudo[328944]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.638 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c43fcb-d711-48c8-a19a-42cc77aae1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.639 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[628ba77c-e76a-4c2b-bd56-730012e8caa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.661 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a512d7-c346-45bd-9020-5f90ffbb6adc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923730, 'reachable_time': 21283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328971, 'error': None, 'target': 'ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.663 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f41ffe1c-22ed-416f-b3d7-fe073a4b4077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:51:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:34.663 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[64b54a67-1575-4004-b460-418dd8541e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:51:34 compute-2 systemd[1]: run-netns-ovnmeta\x2df41ffe1c\x2d22ed\x2d416f\x2db3d7\x2dfe073a4b4077.mount: Deactivated successfully.
Nov 29 08:51:34 compute-2 ceph-mon[77138]: pgmap v3478: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 29 08:51:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.410 232432 INFO nova.virt.libvirt.driver [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Deleting instance files /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269_del
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.412 232432 INFO nova.virt.libvirt.driver [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Deletion of /var/lib/nova/instances/7702bbc1-7949-4182-8b0f-e338d38a1269_del complete
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.420 232432 DEBUG nova.network.neutron [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updated VIF entry in instance network info cache for port fd43a273-a708-4e86-9c8d-9ad34d9d9dac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.420 232432 DEBUG nova.network.neutron [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [{"id": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "address": "fa:16:3e:77:3f:70", "network": {"id": "f41ffe1c-22ed-416f-b3d7-fe073a4b4077", "bridge": "br-int", "label": "tempest-network-smoke--2054676557", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd43a273-a7", "ovs_interfaceid": "fd43a273-a708-4e86-9c8d-9ad34d9d9dac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.446 232432 DEBUG oslo_concurrency.lockutils [req-763783a9-ab99-48e7-b046-0b0de11f8501 req-077aa8c9-ea40-4405-b495-60b1431ec226 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7702bbc1-7949-4182-8b0f-e338d38a1269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.480 232432 INFO nova.compute.manager [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Took 1.37 seconds to destroy the instance on the hypervisor.
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.481 232432 DEBUG oslo.service.loopingcall [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.481 232432 DEBUG nova.compute.manager [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.482 232432 DEBUG nova.network.neutron [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:51:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:35 compute-2 podman[328973]: 2025-11-29 08:51:35.727767235 +0000 UTC m=+0.124008757 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:51:35 compute-2 ceph-mon[77138]: pgmap v3479: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 156 op/s
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.981 232432 DEBUG nova.network.neutron [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:51:35 compute-2 nova_compute[232428]: 2025-11-29 08:51:35.997 232432 INFO nova.compute.manager [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Took 0.52 seconds to deallocate network for instance.
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.048 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.048 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.111 232432 DEBUG oslo_concurrency.processutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.158 232432 DEBUG nova.compute.manager [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-deleted-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:51:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1466738283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.543 232432 DEBUG oslo_concurrency.processutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.552 232432 DEBUG nova.compute.provider_tree [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.595 232432 DEBUG nova.scheduler.client.report [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.726 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.823 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.918 232432 DEBUG nova.compute.manager [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.918 232432 DEBUG oslo_concurrency.lockutils [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.919 232432 DEBUG oslo_concurrency.lockutils [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.919 232432 DEBUG oslo_concurrency.lockutils [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.920 232432 DEBUG nova.compute.manager [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] No waiting events found dispatching network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:51:36 compute-2 nova_compute[232428]: 2025-11-29 08:51:36.920 232432 WARNING nova.compute.manager [req-a5efead0-7f90-4537-a3eb-ae46370492f7 req-38fbad93-909a-4ae9-bdb8-f128dc5ef337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Received unexpected event network-vif-plugged-fd43a273-a708-4e86-9c8d-9ad34d9d9dac for instance with vm_state deleted and task_state None.
Nov 29 08:51:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1466738283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.027 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.064 232432 INFO nova.scheduler.client.report [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 7702bbc1-7949-4182-8b0f-e338d38a1269
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:51:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.493 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:37 compute-2 nova_compute[232428]: 2025-11-29 08:51:37.563 232432 DEBUG oslo_concurrency.lockutils [None req-56be39ef-ac0a-4b7c-837a-d1a94409fd4c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "7702bbc1-7949-4182-8b0f-e338d38a1269" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:51:38 compute-2 ceph-mon[77138]: pgmap v3480: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 715 KiB/s wr, 111 op/s
Nov 29 08:51:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3855196618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:39 compute-2 nova_compute[232428]: 2025-11-29 08:51:39.405 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/482374199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:51:40 compute-2 ceph-mon[77138]: pgmap v3481: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 715 KiB/s wr, 120 op/s
Nov 29 08:51:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:41.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.360410) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302360466, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1595, "num_deletes": 257, "total_data_size": 3554690, "memory_usage": 3618384, "flush_reason": "Manual Compaction"}
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Nov 29 08:51:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:42 compute-2 nova_compute[232428]: 2025-11-29 08:51:42.496 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302632278, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2333108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75300, "largest_seqno": 76890, "table_properties": {"data_size": 2326562, "index_size": 3680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14156, "raw_average_key_size": 19, "raw_value_size": 2313252, "raw_average_value_size": 3235, "num_data_blocks": 162, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406166, "oldest_key_time": 1764406166, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 271978 microseconds, and 10200 cpu microseconds.
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.632369) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2333108 bytes OK
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.632407) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.666186) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.666215) EVENT_LOG_v1 {"time_micros": 1764406302666205, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.666239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 3547384, prev total WAL file size 3549347, number of live WAL files 2.
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.668029) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373638' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2278KB)], [150(12MB)]
Nov 29 08:51:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302668099, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 15435022, "oldest_snapshot_seqno": -1}
Nov 29 08:51:42 compute-2 ceph-mon[77138]: pgmap v3482: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 178 KiB/s wr, 118 op/s
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10382 keys, 15300625 bytes, temperature: kUnknown
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406303067016, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 15300625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15231049, "index_size": 42519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273911, "raw_average_key_size": 26, "raw_value_size": 15046634, "raw_average_value_size": 1449, "num_data_blocks": 1629, "num_entries": 10382, "num_filter_entries": 10382, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.067269) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 15300625 bytes
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.145899) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.7 rd, 38.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(13.2) write-amplify(6.6) OK, records in: 10909, records dropped: 527 output_compression: NoCompression
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.145920) EVENT_LOG_v1 {"time_micros": 1764406303145909, "job": 96, "event": "compaction_finished", "compaction_time_micros": 398996, "compaction_time_cpu_micros": 63299, "output_level": 6, "num_output_files": 1, "total_output_size": 15300625, "num_input_records": 10909, "num_output_records": 10382, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:42.667923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.146059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.146067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.146071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.146075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:43.146079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406303148768, "job": 0, "event": "table_file_deletion", "file_number": 152}
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:43 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406303153573, "job": 0, "event": "table_file_deletion", "file_number": 150}
Nov 29 08:51:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:43.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:44 compute-2 nova_compute[232428]: 2025-11-29 08:51:44.408 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:44 compute-2 podman[329029]: 2025-11-29 08:51:44.690263469 +0000 UTC m=+0.086844342 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:51:44 compute-2 ceph-mon[77138]: pgmap v3483: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 164 KiB/s wr, 100 op/s
Nov 29 08:51:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:45.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:45 compute-2 nova_compute[232428]: 2025-11-29 08:51:45.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:45 compute-2 nova_compute[232428]: 2025-11-29 08:51:45.481 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:45 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Nov 29 08:51:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:46.188 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:51:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:46.190 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:51:46 compute-2 nova_compute[232428]: 2025-11-29 08:51:46.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:46 compute-2 ceph-mon[77138]: pgmap v3484: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 99 op/s
Nov 29 08:51:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:47.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:47 compute-2 nova_compute[232428]: 2025-11-29 08:51:47.498 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:48 compute-2 ceph-mon[77138]: pgmap v3485: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Nov 29 08:51:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:49.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:49 compute-2 nova_compute[232428]: 2025-11-29 08:51:49.352 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406294.3515809, 7702bbc1-7949-4182-8b0f-e338d38a1269 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:51:49 compute-2 nova_compute[232428]: 2025-11-29 08:51:49.353 232432 INFO nova.compute.manager [-] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] VM Stopped (Lifecycle Event)
Nov 29 08:51:49 compute-2 nova_compute[232428]: 2025-11-29 08:51:49.379 232432 DEBUG nova.compute.manager [None req-2c4f0d81-0967-4964-a8ec-834a3f8d6458 - - - - - -] [instance: 7702bbc1-7949-4182-8b0f-e338d38a1269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:51:49 compute-2 nova_compute[232428]: 2025-11-29 08:51:49.410 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:51 compute-2 ceph-mon[77138]: pgmap v3486: 305 pgs: 305 active+clean; 190 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 160 KiB/s rd, 2.0 MiB/s wr, 55 op/s
Nov 29 08:51:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:51.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:51 compute-2 sudo[329052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:51 compute-2 sudo[329052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:51 compute-2 sudo[329052]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:51 compute-2 sudo[329077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:51:51 compute-2 sudo[329077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:51 compute-2 sudo[329077]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:51 compute-2 sudo[329102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:51 compute-2 sudo[329102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:51 compute-2 sudo[329102]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:51 compute-2 sudo[329128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:51:51 compute-2 sudo[329128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:51 compute-2 ceph-mon[77138]: pgmap v3487: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 29 08:51:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:52.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:52 compute-2 nova_compute[232428]: 2025-11-29 08:51:52.498 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:52 compute-2 sudo[329128]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:54 compute-2 ceph-mon[77138]: pgmap v3488: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 08:51:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:51:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:51:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:54.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:54 compute-2 nova_compute[232428]: 2025-11-29 08:51:54.411 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:54 compute-2 sudo[329184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:54 compute-2 sudo[329184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:54 compute-2 sudo[329184]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:54 compute-2 sudo[329209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:51:54 compute-2 sudo[329209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:51:54 compute-2 sudo[329209]: pam_unix(sudo:session): session closed for user root
Nov 29 08:51:54 compute-2 podman[329233]: 2025-11-29 08:51:54.951540425 +0000 UTC m=+0.093030735 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:51:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:51:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:51:56 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:51:56.192 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:51:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:56.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:56 compute-2 ceph-mon[77138]: pgmap v3489: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 2.0 MiB/s wr, 59 op/s
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.107854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317107914, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 419, "num_deletes": 251, "total_data_size": 444719, "memory_usage": 452688, "flush_reason": "Manual Compaction"}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317113366, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 277327, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76895, "largest_seqno": 77309, "table_properties": {"data_size": 274910, "index_size": 516, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6547, "raw_average_key_size": 20, "raw_value_size": 270037, "raw_average_value_size": 851, "num_data_blocks": 22, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406302, "oldest_key_time": 1764406302, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 5564 microseconds, and 2442 cpu microseconds.
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.113416) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 277327 bytes OK
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.113438) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115212) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115234) EVENT_LOG_v1 {"time_micros": 1764406317115227, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115255) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 442051, prev total WAL file size 442051, number of live WAL files 2.
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353137' seq:72057594037927935, type:22 .. '6D6772737461740032373639' seq:0, type:0; will stop at (end)
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(270KB)], [153(14MB)]
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317115928, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15577952, "oldest_snapshot_seqno": -1}
Nov 29 08:51:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:51:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10186 keys, 11737260 bytes, temperature: kUnknown
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317273503, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 11737260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11673737, "index_size": 36978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 270034, "raw_average_key_size": 26, "raw_value_size": 11497329, "raw_average_value_size": 1128, "num_data_blocks": 1397, "num_entries": 10186, "num_filter_entries": 10186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.273913) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11737260 bytes
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.275699) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.7 rd, 74.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(98.5) write-amplify(42.3) OK, records in: 10699, records dropped: 513 output_compression: NoCompression
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.275719) EVENT_LOG_v1 {"time_micros": 1764406317275709, "job": 98, "event": "compaction_finished", "compaction_time_micros": 157797, "compaction_time_cpu_micros": 60452, "output_level": 6, "num_output_files": 1, "total_output_size": 11737260, "num_input_records": 10699, "num_output_records": 10186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317276301, "job": 98, "event": "table_file_deletion", "file_number": 155}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317279996, "job": 98, "event": "table_file_deletion", "file_number": 153}
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.280263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.280271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.280275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.280278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:51:57.280281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:51:57 compute-2 nova_compute[232428]: 2025-11-29 08:51:57.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:51:58 compute-2 ceph-mon[77138]: pgmap v3490: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 213 KiB/s wr, 35 op/s
Nov 29 08:51:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:51:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:51:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:51:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:51:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:51:59 compute-2 nova_compute[232428]: 2025-11-29 08:51:59.413 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:00 compute-2 ceph-mon[77138]: pgmap v3491: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 213 KiB/s wr, 35 op/s
Nov 29 08:52:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:00 compute-2 sudo[329259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:00 compute-2 sudo[329259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:00 compute-2 sudo[329259]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:01 compute-2 sudo[329284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:52:01 compute-2 sudo[329284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:01 compute-2 sudo[329284]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:52:01 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:52:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:02.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:02 compute-2 nova_compute[232428]: 2025-11-29 08:52:02.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:02 compute-2 ceph-mon[77138]: pgmap v3492: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 159 KiB/s rd, 101 KiB/s wr, 29 op/s
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.030 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.030 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.047 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.126 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.127 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.135 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.135 232432 INFO nova.compute.claims [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.234 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:03.356 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:03.357 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:03.357 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:52:03 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1009348249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.709 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.719 232432 DEBUG nova.compute.provider_tree [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.739 232432 DEBUG nova.scheduler.client.report [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.775 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.777 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.839 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.840 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.859 232432 INFO nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.882 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.997 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:52:03 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.999 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:03.999 232432 INFO nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Creating image(s)
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.043 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.089 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.134 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.139 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.225 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.226 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.227 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.228 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.270 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.276 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.328 232432 DEBUG nova.policy [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:52:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:04.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.415 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:04 compute-2 ceph-mon[77138]: pgmap v3493: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Nov 29 08:52:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1009348249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.666 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.770 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.922 232432 DEBUG nova.objects.instance [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 20c17446-b5ab-4a14-acb9-c68d36bc66c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.942 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.943 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Ensure instance console log exists: /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.944 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.944 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:04 compute-2 nova_compute[232428]: 2025-11-29 08:52:04.945 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:05.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:06 compute-2 nova_compute[232428]: 2025-11-29 08:52:06.313 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Successfully created port: c9494384-5d68-4ec4-85b7-0dc1c0905989 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:52:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:06.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:06 compute-2 podman[329500]: 2025-11-29 08:52:06.757014925 +0000 UTC m=+0.143463253 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:52:06 compute-2 ceph-mon[77138]: pgmap v3494: 305 pgs: 305 active+clean; 220 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 455 KiB/s wr, 24 op/s
Nov 29 08:52:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.505 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.585 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Successfully updated port: c9494384-5d68-4ec4-85b7-0dc1c0905989 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.603 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.603 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.603 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.696 232432 DEBUG nova.compute.manager [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.696 232432 DEBUG nova.compute.manager [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing instance network info cache due to event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.697 232432 DEBUG oslo_concurrency.lockutils [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:52:07 compute-2 nova_compute[232428]: 2025-11-29 08:52:07.917 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:52:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:08.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.678 232432 DEBUG nova.network.neutron [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.698 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.699 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Instance network_info: |[{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.699 232432 DEBUG oslo_concurrency.lockutils [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.700 232432 DEBUG nova.network.neutron [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.705 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Start _get_guest_xml network_info=[{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.713 232432 WARNING nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.722 232432 DEBUG nova.virt.libvirt.host [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.723 232432 DEBUG nova.virt.libvirt.host [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.728 232432 DEBUG nova.virt.libvirt.host [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.729 232432 DEBUG nova.virt.libvirt.host [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.731 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.731 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.732 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.733 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.733 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.734 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.734 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.735 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.735 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.736 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.736 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.737 232432 DEBUG nova.virt.hardware [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:52:08 compute-2 nova_compute[232428]: 2025-11-29 08:52:08.742 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:08 compute-2 ceph-mon[77138]: pgmap v3495: 305 pgs: 305 active+clean; 220 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 443 KiB/s wr, 24 op/s
Nov 29 08:52:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:52:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/472677157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.242 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.286 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.291 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.418 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:52:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2877377738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.734 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.736 232432 DEBUG nova.virt.libvirt.vif [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2040911676',display_name='tempest-TestNetworkBasicOps-server-2040911676',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2040911676',id=202,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPzlJj7B26ifWrdDd5DJfInpX9JJyaVZJ28IW6gWC/MW47NlQsxSf5wOgsSPjxRP3FA4PqHj4XmbEdqD+wWcpQDWH07WTO3WVIIvrIcQRDTC/2ETy15mw14mASDZvkkt/w==',key_name='tempest-TestNetworkBasicOps-1097965341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-7hcdf4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:03Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=20c17446-b5ab-4a14-acb9-c68d36bc66c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.737 232432 DEBUG nova.network.os_vif_util [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.739 232432 DEBUG nova.network.os_vif_util [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.741 232432 DEBUG nova.objects.instance [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20c17446-b5ab-4a14-acb9-c68d36bc66c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.758 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <uuid>20c17446-b5ab-4a14-acb9-c68d36bc66c1</uuid>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <name>instance-000000ca</name>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-2040911676</nova:name>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:52:08</nova:creationTime>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <nova:port uuid="c9494384-5d68-4ec4-85b7-0dc1c0905989">
Nov 29 08:52:09 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <system>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="serial">20c17446-b5ab-4a14-acb9-c68d36bc66c1</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="uuid">20c17446-b5ab-4a14-acb9-c68d36bc66c1</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </system>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <os>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </os>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <features>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </features>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk">
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config">
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </source>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:52:09 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:b6:4b:0c"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <target dev="tapc9494384-5d"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/console.log" append="off"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <video>
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </video>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:52:09 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:52:09 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:52:09 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:52:09 compute-2 nova_compute[232428]: </domain>
Nov 29 08:52:09 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.759 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Preparing to wait for external event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.760 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.761 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.761 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.763 232432 DEBUG nova.virt.libvirt.vif [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2040911676',display_name='tempest-TestNetworkBasicOps-server-2040911676',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2040911676',id=202,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPzlJj7B26ifWrdDd5DJfInpX9JJyaVZJ28IW6gWC/MW47NlQsxSf5wOgsSPjxRP3FA4PqHj4XmbEdqD+wWcpQDWH07WTO3WVIIvrIcQRDTC/2ETy15mw14mASDZvkkt/w==',key_name='tempest-TestNetworkBasicOps-1097965341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-7hcdf4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:03Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=20c17446-b5ab-4a14-acb9-c68d36bc66c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.763 232432 DEBUG nova.network.os_vif_util [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.764 232432 DEBUG nova.network.os_vif_util [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.765 232432 DEBUG os_vif [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.767 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.767 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.772 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.773 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9494384-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.774 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9494384-5d, col_values=(('external_ids', {'iface-id': 'c9494384-5d68-4ec4-85b7-0dc1c0905989', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:4b:0c', 'vm-uuid': '20c17446-b5ab-4a14-acb9-c68d36bc66c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:09 compute-2 NetworkManager[48993]: <info>  [1764406329.7768] manager: (tapc9494384-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.778 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.786 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.787 232432 INFO os_vif [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d')
Nov 29 08:52:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.859 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.860 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.860 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:b6:4b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:52:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/472677157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:09 compute-2 ceph-mon[77138]: pgmap v3496: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 743 KiB/s wr, 25 op/s
Nov 29 08:52:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2877377738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.861 232432 INFO nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Using config drive
Nov 29 08:52:09 compute-2 nova_compute[232428]: 2025-11-29 08:52:09.905 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:10.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.404 232432 INFO nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Creating config drive at /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.416 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapk9lr3g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.580 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapk9lr3g" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.631 232432 DEBUG nova.storage.rbd_utils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.637 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.685 232432 DEBUG nova.network.neutron [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updated VIF entry in instance network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.688 232432 DEBUG nova.network.neutron [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.710 232432 DEBUG oslo_concurrency.lockutils [req-62978494-b2d2-4b83-877a-0c7a400ee9e9 req-43cb1965-4cde-43bc-acfc-9d5abec3c0fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.856 232432 DEBUG oslo_concurrency.processutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config 20c17446-b5ab-4a14-acb9-c68d36bc66c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.858 232432 INFO nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Deleting local config drive /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1/disk.config because it was imported into RBD.
Nov 29 08:52:10 compute-2 ceph-mon[77138]: osdmap e417: 3 total, 3 up, 3 in
Nov 29 08:52:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 08:52:10 compute-2 kernel: tapc9494384-5d: entered promiscuous mode
Nov 29 08:52:10 compute-2 NetworkManager[48993]: <info>  [1764406330.9384] manager: (tapc9494384-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Nov 29 08:52:10 compute-2 nova_compute[232428]: 2025-11-29 08:52:10.941 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:10 compute-2 ovn_controller[134375]: 2025-11-29T08:52:10Z|00954|binding|INFO|Claiming lport c9494384-5d68-4ec4-85b7-0dc1c0905989 for this chassis.
Nov 29 08:52:10 compute-2 ovn_controller[134375]: 2025-11-29T08:52:10Z|00955|binding|INFO|c9494384-5d68-4ec4-85b7-0dc1c0905989: Claiming fa:16:3e:b6:4b:0c 10.100.0.11
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.963 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:4b:0c 10.100.0.11'], port_security=['fa:16:3e:b6:4b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '20c17446-b5ab-4a14-acb9-c68d36bc66c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4229292-80a4-45ff-9b7c-102752939760', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3a8e4424-24e5-4ff1-8f2b-4708d013f40d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88f9c966-7d41-46f1-8106-7095d470a6ef, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c9494384-5d68-4ec4-85b7-0dc1c0905989) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.966 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c9494384-5d68-4ec4-85b7-0dc1c0905989 in datapath d4229292-80a4-45ff-9b7c-102752939760 bound to our chassis
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.968 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d4229292-80a4-45ff-9b7c-102752939760
Nov 29 08:52:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:10 compute-2 systemd-machined[194747]: New machine qemu-99-instance-000000ca.
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.988 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2a072a-7e7b-4e74-a1e0-38d40d04f111]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.989 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd4229292-81 in ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.993 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd4229292-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.993 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[09e4b377-240a-49f8-b4d9-a4af2e27381d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:10.994 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[312c9bae-1807-4fd6-b5b8-4775543bb45f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 systemd[1]: Started Virtual Machine qemu-99-instance-000000ca.
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.011 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c681ed88-7539-4501-97fd-772bae9d764c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 systemd-udevd[329667]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 ovn_controller[134375]: 2025-11-29T08:52:11Z|00956|binding|INFO|Setting lport c9494384-5d68-4ec4-85b7-0dc1c0905989 ovn-installed in OVS
Nov 29 08:52:11 compute-2 ovn_controller[134375]: 2025-11-29T08:52:11Z|00957|binding|INFO|Setting lport c9494384-5d68-4ec4-85b7-0dc1c0905989 up in Southbound
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.046 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4b6139-7967-4c10-b776-21ee359e7428]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.047 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 NetworkManager[48993]: <info>  [1764406331.0501] device (tapc9494384-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:52:11 compute-2 NetworkManager[48993]: <info>  [1764406331.0515] device (tapc9494384-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.086 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9fbea7-97b5-439e-aaaf-291530947bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 systemd-udevd[329673]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:52:11 compute-2 NetworkManager[48993]: <info>  [1764406331.0934] manager: (tapd4229292-80): new Veth device (/org/freedesktop/NetworkManager/Devices/456)
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.091 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6350920c-8596-41d5-aa89-4b412fdb9bf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.145 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9dc875-4d77-4c87-8cc3-5c7bf6daf368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.149 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[293d19de-0147-46ca-a46d-8abf32037c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 NetworkManager[48993]: <info>  [1764406331.1804] device (tapd4229292-80): carrier: link connected
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.190 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8d008d-1776-471a-9dc3-df77f4592206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.226 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee79376c-509e-487c-aca4-5d5cbe5268cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4229292-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:00:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 292], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929744, 'reachable_time': 29048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329699, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.250 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0546b79d-6318-4395-99cb-484b4b4d37c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:90'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 929744, 'tstamp': 929744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329700, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.280 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[48d8d4b9-1b11-4571-baa7-d91dccf5a157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4229292-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:00:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 292], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929744, 'reachable_time': 29048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329701, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.334 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b790087b-8ad9-4191-aa4c-81cdd1056074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.356 232432 DEBUG nova.compute.manager [req-24fe7221-c375-41ab-b806-1a91e037aa7e req-07d0ac00-afeb-42e5-b715-c28e2c4b1253 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.357 232432 DEBUG oslo_concurrency.lockutils [req-24fe7221-c375-41ab-b806-1a91e037aa7e req-07d0ac00-afeb-42e5-b715-c28e2c4b1253 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.358 232432 DEBUG oslo_concurrency.lockutils [req-24fe7221-c375-41ab-b806-1a91e037aa7e req-07d0ac00-afeb-42e5-b715-c28e2c4b1253 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.358 232432 DEBUG oslo_concurrency.lockutils [req-24fe7221-c375-41ab-b806-1a91e037aa7e req-07d0ac00-afeb-42e5-b715-c28e2c4b1253 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.359 232432 DEBUG nova.compute.manager [req-24fe7221-c375-41ab-b806-1a91e037aa7e req-07d0ac00-afeb-42e5-b715-c28e2c4b1253 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Processing event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.423 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[27f1d59a-7fe1-4a00-9599-6d30027e649d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.425 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4229292-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.426 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.426 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4229292-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.429 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 NetworkManager[48993]: <info>  [1764406331.4302] manager: (tapd4229292-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 29 08:52:11 compute-2 kernel: tapd4229292-80: entered promiscuous mode
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.432 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.434 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd4229292-80, col_values=(('external_ids', {'iface-id': '6864f002-8984-4f83-802c-5987f0f90af9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.435 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 ovn_controller[134375]: 2025-11-29T08:52:11Z|00958|binding|INFO|Releasing lport 6864f002-8984-4f83-802c-5987f0f90af9 from this chassis (sb_readonly=0)
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.454 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.458 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.459 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:52:11 compute-2 sshd-session[329678]: Invalid user pbanx from 45.148.10.240 port 47990
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.461 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4989a22b-c6ca-4f3e-a77e-121ed18f1fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.462 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-d4229292-80a4-45ff-9b7c-102752939760
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID d4229292-80a4-45ff-9b7c-102752939760
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:52:11 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:11.463 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'env', 'PROCESS_TAG=haproxy-d4229292-80a4-45ff-9b7c-102752939760', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d4229292-80a4-45ff-9b7c-102752939760.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.464 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406331.463987, 20c17446-b5ab-4a14-acb9-c68d36bc66c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.465 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] VM Started (Lifecycle Event)
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.469 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.482 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.485 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.490 232432 INFO nova.virt.libvirt.driver [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Instance spawned successfully.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.491 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.495 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.521 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.522 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406331.4642544, 20c17446-b5ab-4a14-acb9-c68d36bc66c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.522 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] VM Paused (Lifecycle Event)
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.531 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.532 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.533 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.534 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.535 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.536 232432 DEBUG nova.virt.libvirt.driver [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:52:11 compute-2 sshd-session[329678]: Connection closed by invalid user pbanx 45.148.10.240 port 47990 [preauth]
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.545 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.550 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406331.4808764, 20c17446-b5ab-4a14-acb9-c68d36bc66c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.550 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] VM Resumed (Lifecycle Event)
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.573 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.579 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.603 232432 INFO nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Took 7.61 seconds to spawn the instance on the hypervisor.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.603 232432 DEBUG nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.605 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.684 232432 INFO nova.compute.manager [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Took 8.58 seconds to build instance.
Nov 29 08:52:11 compute-2 nova_compute[232428]: 2025-11-29 08:52:11.703 232432 DEBUG oslo_concurrency.lockutils [None req-35221abb-cd59-4fb7-adcb-147ad98f3d5e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 08:52:11 compute-2 ceph-mon[77138]: osdmap e418: 3 total, 3 up, 3 in
Nov 29 08:52:11 compute-2 ceph-mon[77138]: pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 MiB/s wr, 53 op/s
Nov 29 08:52:11 compute-2 podman[329774]: 2025-11-29 08:52:11.939134626 +0000 UTC m=+0.096977498 container create d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:52:11 compute-2 podman[329774]: 2025-11-29 08:52:11.887174519 +0000 UTC m=+0.045017441 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:52:12 compute-2 systemd[1]: Started libpod-conmon-d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978.scope.
Nov 29 08:52:12 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:52:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8adf7b9bd2bb65685a70147efa7b01379131a37c500ce0f3b8aeba3cbf4a1a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:52:12 compute-2 podman[329774]: 2025-11-29 08:52:12.078719517 +0000 UTC m=+0.236562439 container init d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:52:12 compute-2 podman[329774]: 2025-11-29 08:52:12.09234117 +0000 UTC m=+0.250184012 container start d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:52:12 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [NOTICE]   (329795) : New worker (329797) forked
Nov 29 08:52:12 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [NOTICE]   (329795) : Loading success.
Nov 29 08:52:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:12 compute-2 nova_compute[232428]: 2025-11-29 08:52:12.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:12 compute-2 ceph-mon[77138]: osdmap e419: 3 total, 3 up, 3 in
Nov 29 08:52:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.537 232432 DEBUG nova.compute.manager [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.537 232432 DEBUG oslo_concurrency.lockutils [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.538 232432 DEBUG oslo_concurrency.lockutils [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.539 232432 DEBUG oslo_concurrency.lockutils [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.539 232432 DEBUG nova.compute.manager [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] No waiting events found dispatching network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:52:13 compute-2 nova_compute[232428]: 2025-11-29 08:52:13.540 232432 WARNING nova.compute.manager [req-a5f186b5-861f-48df-803e-56b5c06a4c4b req-50b104c2-a5d0-40c3-9b47-a052d26031c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received unexpected event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 for instance with vm_state active and task_state None.
Nov 29 08:52:13 compute-2 ceph-mon[77138]: pgmap v3501: 305 pgs: 305 active+clean; 274 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 104 op/s
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.000 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:14 compute-2 NetworkManager[48993]: <info>  [1764406334.0042] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Nov 29 08:52:14 compute-2 NetworkManager[48993]: <info>  [1764406334.0062] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:14 compute-2 ovn_controller[134375]: 2025-11-29T08:52:14Z|00959|binding|INFO|Releasing lport 6864f002-8984-4f83-802c-5987f0f90af9 from this chassis (sb_readonly=0)
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.139 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.529 232432 DEBUG nova.compute.manager [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.529 232432 DEBUG nova.compute.manager [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing instance network info cache due to event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.529 232432 DEBUG oslo_concurrency.lockutils [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.530 232432 DEBUG oslo_concurrency.lockutils [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.530 232432 DEBUG nova.network.neutron [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:52:14 compute-2 nova_compute[232428]: 2025-11-29 08:52:14.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:14 compute-2 sudo[329808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:14 compute-2 sudo[329808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:14 compute-2 sudo[329808]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:15 compute-2 sudo[329839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:15 compute-2 sudo[329839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:15 compute-2 sudo[329839]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:15 compute-2 podman[329832]: 2025-11-29 08:52:15.113175471 +0000 UTC m=+0.112988575 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 08:52:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:15 compute-2 nova_compute[232428]: 2025-11-29 08:52:15.805 232432 DEBUG nova.network.neutron [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updated VIF entry in instance network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:52:15 compute-2 nova_compute[232428]: 2025-11-29 08:52:15.806 232432 DEBUG nova.network.neutron [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:52:15 compute-2 nova_compute[232428]: 2025-11-29 08:52:15.833 232432 DEBUG oslo_concurrency.lockutils [req-2fa7bee6-045c-4e13-b51a-b44be4f818db req-facc68ef-a0d1-4e99-9b31-072fd3428dbf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:52:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:16 compute-2 ceph-mon[77138]: pgmap v3502: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 9.9 MiB/s wr, 332 op/s
Nov 29 08:52:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 08:52:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:17 compute-2 nova_compute[232428]: 2025-11-29 08:52:17.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:18 compute-2 ceph-mon[77138]: osdmap e420: 3 total, 3 up, 3 in
Nov 29 08:52:18 compute-2 ceph-mon[77138]: pgmap v3504: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 7.3 MiB/s wr, 292 op/s
Nov 29 08:52:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:52:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 73K writes, 289K keys, 73K commit groups, 1.0 writes per commit group, ingest: 0.29 GB, 0.05 MB/s
                                           Cumulative WAL: 73K writes, 27K syncs, 2.69 writes per sync, written: 0.29 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5186 writes, 21K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 26.25 MB, 0.04 MB/s
                                           Interval WAL: 5186 writes, 1999 syncs, 2.59 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 08:52:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3478243950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:19 compute-2 nova_compute[232428]: 2025-11-29 08:52:19.778 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:20 compute-2 nova_compute[232428]: 2025-11-29 08:52:20.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:20 compute-2 ceph-mon[77138]: pgmap v3505: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 233 op/s
Nov 29 08:52:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:22 compute-2 ceph-mon[77138]: pgmap v3506: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 203 op/s
Nov 29 08:52:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:22 compute-2 nova_compute[232428]: 2025-11-29 08:52:22.512 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:24 compute-2 nova_compute[232428]: 2025-11-29 08:52:24.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:24 compute-2 ceph-mon[77138]: pgmap v3507: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Nov 29 08:52:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:24 compute-2 ovn_controller[134375]: 2025-11-29T08:52:24Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:4b:0c 10.100.0.11
Nov 29 08:52:24 compute-2 ovn_controller[134375]: 2025-11-29T08:52:24Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:4b:0c 10.100.0.11
Nov 29 08:52:24 compute-2 nova_compute[232428]: 2025-11-29 08:52:24.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:25 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2247192750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:25 compute-2 podman[329882]: 2025-11-29 08:52:25.727757784 +0000 UTC m=+0.113683276 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 08:52:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:26 compute-2 ceph-mon[77138]: pgmap v3508: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 29 08:52:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1369877008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:52:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.542 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.542 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.543 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:52:27 compute-2 nova_compute[232428]: 2025-11-29 08:52:27.543 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20c17446-b5ab-4a14-acb9-c68d36bc66c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:52:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:28.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:28 compute-2 ceph-mon[77138]: pgmap v3509: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 29 08:52:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/389682077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:52:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/389682077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:52:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:29 compute-2 nova_compute[232428]: 2025-11-29 08:52:29.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:30.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:30 compute-2 ceph-mon[77138]: pgmap v3510: 305 pgs: 305 active+clean; 354 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 29 08:52:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:31 compute-2 nova_compute[232428]: 2025-11-29 08:52:31.447 232432 INFO nova.compute.manager [None req-faf972d3-b87d-443b-9f7e-fe9da384ce75 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Get console output
Nov 29 08:52:31 compute-2 nova_compute[232428]: 2025-11-29 08:52:31.456 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.289 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.310 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.311 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.312 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.313 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:32 compute-2 ceph-mon[77138]: pgmap v3511: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 08:52:32 compute-2 nova_compute[232428]: 2025-11-29 08:52:32.516 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:33 compute-2 nova_compute[232428]: 2025-11-29 08:52:33.302 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:33.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.240 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.240 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.241 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:34 compute-2 ceph-mon[77138]: pgmap v3512: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Nov 29 08:52:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1531876958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:52:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3418474369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:34.737 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.738 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:34.740 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.749 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.785 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.853 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.854 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.878 232432 DEBUG nova.compute.manager [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.878 232432 DEBUG nova.compute.manager [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing instance network info cache due to event network-changed-c9494384-5d68-4ec4-85b7-0dc1c0905989. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.879 232432 DEBUG oslo_concurrency.lockutils [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.879 232432 DEBUG oslo_concurrency.lockutils [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:52:34 compute-2 nova_compute[232428]: 2025-11-29 08:52:34.880 232432 DEBUG nova.network.neutron [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Refreshing network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.081 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.082 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3956MB free_disk=20.897113800048828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.083 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.083 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 20c17446-b5ab-4a14-acb9-c68d36bc66c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.172 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:52:35 compute-2 sudo[329930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:35 compute-2 sudo[329930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.214 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:52:35 compute-2 sudo[329930]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:35 compute-2 sudo[329956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:35 compute-2 sudo[329956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:35 compute-2 sudo[329956]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:35.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3418474369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1941697903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:52:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1635104142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.680 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.690 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.712 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.743 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:52:35 compute-2 nova_compute[232428]: 2025-11-29 08:52:35.743 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:52:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:52:35.743 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:52:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:36.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:36 compute-2 ceph-mon[77138]: pgmap v3513: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Nov 29 08:52:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1635104142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:36 compute-2 nova_compute[232428]: 2025-11-29 08:52:36.743 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.259 232432 DEBUG nova.network.neutron [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updated VIF entry in instance network info cache for port c9494384-5d68-4ec4-85b7-0dc1c0905989. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.260 232432 DEBUG nova.network.neutron [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [{"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.306 232432 DEBUG oslo_concurrency.lockutils [req-bc6a1cc9-a301-4081-8816-fdabb85d6c10 req-65f41a4d-3371-4294-940a-9014292ce724 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-20c17446-b5ab-4a14-acb9-c68d36bc66c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:52:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:37.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:37 compute-2 nova_compute[232428]: 2025-11-29 08:52:37.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:37 compute-2 podman[330003]: 2025-11-29 08:52:37.762139145 +0000 UTC m=+0.152270907 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 08:52:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:52:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:52:38 compute-2 ceph-mon[77138]: pgmap v3514: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 84 op/s
Nov 29 08:52:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2611578934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:39 compute-2 nova_compute[232428]: 2025-11-29 08:52:39.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:40 compute-2 ceph-mon[77138]: pgmap v3515: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 84 op/s
Nov 29 08:52:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/275755828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:42 compute-2 nova_compute[232428]: 2025-11-29 08:52:42.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:42 compute-2 ceph-mon[77138]: pgmap v3516: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 74 op/s
Nov 29 08:52:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3050799729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:52:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:44 compute-2 ceph-mon[77138]: pgmap v3517: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 167 KiB/s wr, 31 op/s
Nov 29 08:52:44 compute-2 nova_compute[232428]: 2025-11-29 08:52:44.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:45.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:45 compute-2 podman[330035]: 2025-11-29 08:52:45.700462534 +0000 UTC m=+0.092082674 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:52:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:46 compute-2 ceph-mon[77138]: pgmap v3518: 305 pgs: 305 active+clean; 413 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 29 08:52:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:47.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:47 compute-2 nova_compute[232428]: 2025-11-29 08:52:47.526 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:48 compute-2 ceph-mon[77138]: pgmap v3519: 305 pgs: 305 active+clean; 413 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 29 08:52:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/784458687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1615450652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:52:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:49.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:49 compute-2 nova_compute[232428]: 2025-11-29 08:52:49.792 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:50 compute-2 ceph-mon[77138]: pgmap v3520: 305 pgs: 305 active+clean; 419 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 79 op/s
Nov 29 08:52:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:51.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:52 compute-2 ceph-mon[77138]: pgmap v3521: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 83 op/s
Nov 29 08:52:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:52.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:52 compute-2 nova_compute[232428]: 2025-11-29 08:52:52.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:54 compute-2 ceph-mon[77138]: pgmap v3522: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 95 op/s
Nov 29 08:52:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:54.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:54 compute-2 nova_compute[232428]: 2025-11-29 08:52:54.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:52:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:52:55 compute-2 sudo[330061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:55 compute-2 sudo[330061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:55 compute-2 sudo[330061]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:55 compute-2 sudo[330086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:52:55 compute-2 sudo[330086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:52:55 compute-2 sudo[330086]: pam_unix(sudo:session): session closed for user root
Nov 29 08:52:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:52:56 compute-2 ceph-mon[77138]: pgmap v3523: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.386793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376386880, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 882, "num_deletes": 252, "total_data_size": 1668411, "memory_usage": 1693504, "flush_reason": "Manual Compaction"}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376399408, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1099793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77314, "largest_seqno": 78191, "table_properties": {"data_size": 1095705, "index_size": 1803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9360, "raw_average_key_size": 19, "raw_value_size": 1087379, "raw_average_value_size": 2298, "num_data_blocks": 79, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406318, "oldest_key_time": 1764406318, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 12699 microseconds, and 6940 cpu microseconds.
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399488) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1099793 bytes OK
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399516) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402435) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402456) EVENT_LOG_v1 {"time_micros": 1764406376402450, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1663929, prev total WAL file size 1663929, number of live WAL files 2.
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.403519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1074KB)], [156(11MB)]
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376403649, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12837053, "oldest_snapshot_seqno": -1}
Nov 29 08:52:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10140 keys, 10787608 bytes, temperature: kUnknown
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376512288, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10787608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10725154, "index_size": 35973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 269805, "raw_average_key_size": 26, "raw_value_size": 10550444, "raw_average_value_size": 1040, "num_data_blocks": 1349, "num_entries": 10140, "num_filter_entries": 10140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.513001) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10787608 bytes
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.514834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.0 rd, 99.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(21.5) write-amplify(9.8) OK, records in: 10659, records dropped: 519 output_compression: NoCompression
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.514864) EVENT_LOG_v1 {"time_micros": 1764406376514850, "job": 100, "event": "compaction_finished", "compaction_time_micros": 108789, "compaction_time_cpu_micros": 52906, "output_level": 6, "num_output_files": 1, "total_output_size": 10787608, "num_input_records": 10659, "num_output_records": 10140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376515606, "job": 100, "event": "table_file_deletion", "file_number": 158}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376519819, "job": 100, "event": "table_file_deletion", "file_number": 156}
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.403165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.519979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.519988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.519991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.519994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:52:56.519997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:52:56 compute-2 podman[330112]: 2025-11-29 08:52:56.682174894 +0000 UTC m=+0.082380084 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:52:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:57 compute-2 nova_compute[232428]: 2025-11-29 08:52:57.532 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:52:58 compute-2 ceph-mon[77138]: pgmap v3524: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 201 KiB/s wr, 172 op/s
Nov 29 08:52:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:52:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:52:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:52:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:52:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:52:59 compute-2 nova_compute[232428]: 2025-11-29 08:52:59.797 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:00 compute-2 ceph-mon[77138]: pgmap v3525: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 201 KiB/s wr, 240 op/s
Nov 29 08:53:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:01 compute-2 sudo[330136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:01 compute-2 sudo[330136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:01 compute-2 sudo[330136]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:01 compute-2 sudo[330161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:53:01 compute-2 sudo[330161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:01 compute-2 sudo[330161]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:01 compute-2 sudo[330186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:01 compute-2 sudo[330186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:01 compute-2 sudo[330186]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:01 compute-2 sudo[330211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:53:01 compute-2 sudo[330211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:02 compute-2 sudo[330211]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:02 compute-2 ceph-mon[77138]: pgmap v3526: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 66 KiB/s wr, 249 op/s
Nov 29 08:53:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:02.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:02 compute-2 nova_compute[232428]: 2025-11-29 08:53:02.534 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:03.357 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:03.358 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:03.358 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:53:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:53:03 compute-2 ceph-mon[77138]: pgmap v3527: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 191 KiB/s wr, 253 op/s
Nov 29 08:53:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:04.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:04 compute-2 nova_compute[232428]: 2025-11-29 08:53:04.799 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:53:04 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:53:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:05.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:06 compute-2 ceph-mon[77138]: pgmap v3528: 305 pgs: 305 active+clean; 457 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 283 op/s
Nov 29 08:53:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:07.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:07 compute-2 nova_compute[232428]: 2025-11-29 08:53:07.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:08 compute-2 ceph-mon[77138]: pgmap v3529: 305 pgs: 305 active+clean; 457 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 519 KiB/s rd, 2.3 MiB/s wr, 150 op/s
Nov 29 08:53:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:08 compute-2 podman[330271]: 2025-11-29 08:53:08.754615897 +0000 UTC m=+0.147260350 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:53:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:09.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 08:53:09 compute-2 nova_compute[232428]: 2025-11-29 08:53:09.802 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:10 compute-2 ceph-mon[77138]: pgmap v3530: 305 pgs: 305 active+clean; 459 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 636 KiB/s rd, 2.3 MiB/s wr, 168 op/s
Nov 29 08:53:10 compute-2 ceph-mon[77138]: osdmap e421: 3 total, 3 up, 3 in
Nov 29 08:53:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:53:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:53:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:10 compute-2 sudo[330299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:10 compute-2 sudo[330299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:10 compute-2 sudo[330299]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:10 compute-2 sudo[330324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:53:10 compute-2 sudo[330324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:10 compute-2 sudo[330324]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 08:53:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:11.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 08:53:12 compute-2 ceph-mon[77138]: osdmap e422: 3 total, 3 up, 3 in
Nov 29 08:53:12 compute-2 ceph-mon[77138]: pgmap v3533: 305 pgs: 305 active+clean; 486 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 128 op/s
Nov 29 08:53:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:12 compute-2 nova_compute[232428]: 2025-11-29 08:53:12.539 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:13 compute-2 ceph-mon[77138]: osdmap e423: 3 total, 3 up, 3 in
Nov 29 08:53:13 compute-2 nova_compute[232428]: 2025-11-29 08:53:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:13 compute-2 nova_compute[232428]: 2025-11-29 08:53:13.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:53:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:13.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:14 compute-2 ceph-mon[77138]: pgmap v3535: 305 pgs: 305 active+clean; 507 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 218 op/s
Nov 29 08:53:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:14.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:14 compute-2 nova_compute[232428]: 2025-11-29 08:53:14.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 08:53:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:15.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:15 compute-2 sudo[330351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:15 compute-2 sudo[330351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:15 compute-2 sudo[330351]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:15 compute-2 sudo[330376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:15 compute-2 sudo[330376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:15 compute-2 sudo[330376]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:16 compute-2 nova_compute[232428]: 2025-11-29 08:53:16.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:16 compute-2 nova_compute[232428]: 2025-11-29 08:53:16.221 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:53:16 compute-2 ceph-mon[77138]: osdmap e424: 3 total, 3 up, 3 in
Nov 29 08:53:16 compute-2 ceph-mon[77138]: pgmap v3537: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 256 op/s
Nov 29 08:53:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/42309310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:16 compute-2 nova_compute[232428]: 2025-11-29 08:53:16.236 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:53:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:16.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:16 compute-2 podman[330402]: 2025-11-29 08:53:16.694566668 +0000 UTC m=+0.086312855 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:53:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.253 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.253 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.254 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.254 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.254 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.256 232432 INFO nova.compute.manager [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Terminating instance
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.257 232432 DEBUG nova.compute.manager [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:53:17 compute-2 kernel: tapc9494384-5d (unregistering): left promiscuous mode
Nov 29 08:53:17 compute-2 NetworkManager[48993]: <info>  [1764406397.3102] device (tapc9494384-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.330 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 ovn_controller[134375]: 2025-11-29T08:53:17Z|00960|binding|INFO|Releasing lport c9494384-5d68-4ec4-85b7-0dc1c0905989 from this chassis (sb_readonly=0)
Nov 29 08:53:17 compute-2 ovn_controller[134375]: 2025-11-29T08:53:17Z|00961|binding|INFO|Setting lport c9494384-5d68-4ec4-85b7-0dc1c0905989 down in Southbound
Nov 29 08:53:17 compute-2 ovn_controller[134375]: 2025-11-29T08:53:17Z|00962|binding|INFO|Removing iface tapc9494384-5d ovn-installed in OVS
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.339 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:4b:0c 10.100.0.11'], port_security=['fa:16:3e:b6:4b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '20c17446-b5ab-4a14-acb9-c68d36bc66c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4229292-80a4-45ff-9b7c-102752939760', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a8e4424-24e5-4ff1-8f2b-4708d013f40d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88f9c966-7d41-46f1-8106-7095d470a6ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c9494384-5d68-4ec4-85b7-0dc1c0905989) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.342 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c9494384-5d68-4ec4-85b7-0dc1c0905989 in datapath d4229292-80a4-45ff-9b7c-102752939760 unbound from our chassis
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.345 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4229292-80a4-45ff-9b7c-102752939760, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.346 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5792a5ef-daf2-47be-a641-9da42e6b76dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.348 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 namespace which is not needed anymore
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.354 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Nov 29 08:53:17 compute-2 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Consumed 16.755s CPU time.
Nov 29 08:53:17 compute-2 systemd-machined[194747]: Machine qemu-99-instance-000000ca terminated.
Nov 29 08:53:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:17.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.507 232432 INFO nova.virt.libvirt.driver [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Instance destroyed successfully.
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.508 232432 DEBUG nova.objects.instance [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 20c17446-b5ab-4a14-acb9-c68d36bc66c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [NOTICE]   (329795) : haproxy version is 2.8.14-c23fe91
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [NOTICE]   (329795) : path to executable is /usr/sbin/haproxy
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [WARNING]  (329795) : Exiting Master process...
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [WARNING]  (329795) : Exiting Master process...
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [ALERT]    (329795) : Current worker (329797) exited with code 143 (Terminated)
Nov 29 08:53:17 compute-2 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[329791]: [WARNING]  (329795) : All workers exited. Exiting... (0)
Nov 29 08:53:17 compute-2 systemd[1]: libpod-d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978.scope: Deactivated successfully.
Nov 29 08:53:17 compute-2 podman[330448]: 2025-11-29 08:53:17.523354194 +0000 UTC m=+0.069204453 container died d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.528 232432 DEBUG nova.virt.libvirt.vif [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2040911676',display_name='tempest-TestNetworkBasicOps-server-2040911676',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2040911676',id=202,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPzlJj7B26ifWrdDd5DJfInpX9JJyaVZJ28IW6gWC/MW47NlQsxSf5wOgsSPjxRP3FA4PqHj4XmbEdqD+wWcpQDWH07WTO3WVIIvrIcQRDTC/2ETy15mw14mASDZvkkt/w==',key_name='tempest-TestNetworkBasicOps-1097965341',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:52:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-7hcdf4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:52:11Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=20c17446-b5ab-4a14-acb9-c68d36bc66c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.551 232432 DEBUG nova.network.os_vif_util [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "address": "fa:16:3e:b6:4b:0c", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9494384-5d", "ovs_interfaceid": "c9494384-5d68-4ec4-85b7-0dc1c0905989", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.553 232432 DEBUG nova.network.os_vif_util [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.554 232432 DEBUG os_vif [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.556 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.558 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9494384-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.565 232432 INFO os_vif [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:4b:0c,bridge_name='br-int',has_traffic_filtering=True,id=c9494384-5d68-4ec4-85b7-0dc1c0905989,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9494384-5d')
Nov 29 08:53:17 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978-userdata-shm.mount: Deactivated successfully.
Nov 29 08:53:17 compute-2 systemd[1]: var-lib-containers-storage-overlay-c8adf7b9bd2bb65685a70147efa7b01379131a37c500ce0f3b8aeba3cbf4a1a3-merged.mount: Deactivated successfully.
Nov 29 08:53:17 compute-2 podman[330448]: 2025-11-29 08:53:17.587692955 +0000 UTC m=+0.133543234 container cleanup d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.592 232432 DEBUG nova.compute.manager [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-unplugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.592 232432 DEBUG oslo_concurrency.lockutils [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.593 232432 DEBUG oslo_concurrency.lockutils [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.593 232432 DEBUG oslo_concurrency.lockutils [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.593 232432 DEBUG nova.compute.manager [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] No waiting events found dispatching network-vif-unplugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.593 232432 DEBUG nova.compute.manager [req-3e555364-f52f-40f1-b77b-a9fbdaa19e4a req-49ee5810-f4ff-4ed2-a113-06acfa053a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-unplugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:53:17 compute-2 systemd[1]: libpod-conmon-d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978.scope: Deactivated successfully.
Nov 29 08:53:17 compute-2 podman[330504]: 2025-11-29 08:53:17.670573703 +0000 UTC m=+0.052066320 container remove d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.680 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ee51eb67-d89f-4228-99e8-b77bbb88bd1c]: (4, ('Sat Nov 29 08:53:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 (d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978)\nd0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978\nSat Nov 29 08:53:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 (d0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978)\nd0371aae1f249cc004e0b4b1f36bbf5f60747d25e9d9618f8d6d6d7082637978\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.682 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[43aa45d4-f5e6-4ea8-9d7c-0c54eff3ad85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.683 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4229292-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.684 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 kernel: tapd4229292-80: left promiscuous mode
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.714 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3a2825-e2c9-4a61-8553-78cc9f525696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.735 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[de7cb296-8bc8-421c-b724-a244dd699a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.736 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6b66c8fb-f495-43c0-9649-611fb3a35301]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.762 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba13507-a08f-47e0-820c-15c405ac3c54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929734, 'reachable_time': 43560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330520, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 systemd[1]: run-netns-ovnmeta\x2dd4229292\x2d80a4\x2d45ff\x2d9b7c\x2d102752939760.mount: Deactivated successfully.
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.767 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:53:17 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:17.768 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc63f27-493a-4e63-b060-b1a8e8810b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.984 232432 INFO nova.virt.libvirt.driver [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Deleting instance files /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1_del
Nov 29 08:53:17 compute-2 nova_compute[232428]: 2025-11-29 08:53:17.986 232432 INFO nova.virt.libvirt.driver [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Deletion of /var/lib/nova/instances/20c17446-b5ab-4a14-acb9-c68d36bc66c1_del complete
Nov 29 08:53:18 compute-2 nova_compute[232428]: 2025-11-29 08:53:18.043 232432 INFO nova.compute.manager [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 29 08:53:18 compute-2 nova_compute[232428]: 2025-11-29 08:53:18.044 232432 DEBUG oslo.service.loopingcall [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:53:18 compute-2 nova_compute[232428]: 2025-11-29 08:53:18.045 232432 DEBUG nova.compute.manager [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:53:18 compute-2 nova_compute[232428]: 2025-11-29 08:53:18.045 232432 DEBUG nova.network.neutron [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:53:18 compute-2 ceph-mon[77138]: osdmap e425: 3 total, 3 up, 3 in
Nov 29 08:53:18 compute-2 ceph-mon[77138]: pgmap v3539: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 12 MiB/s wr, 201 op/s
Nov 29 08:53:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:18 compute-2 nova_compute[232428]: 2025-11-29 08:53:18.875 232432 DEBUG nova.network.neutron [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.096 232432 INFO nova.compute.manager [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Took 1.05 seconds to deallocate network for instance.
Nov 29 08:53:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.698 232432 DEBUG nova.compute.manager [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-deleted-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.700 232432 DEBUG nova.compute.manager [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.701 232432 DEBUG oslo_concurrency.lockutils [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.702 232432 DEBUG oslo_concurrency.lockutils [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.702 232432 DEBUG oslo_concurrency.lockutils [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.703 232432 DEBUG nova.compute.manager [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] No waiting events found dispatching network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.704 232432 WARNING nova.compute.manager [req-8bde8f58-5939-4b0b-9293-5a9259313cb1 req-27e81c33-94b0-4dcf-a1fe-9e32fa79d6b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Received unexpected event network-vif-plugged-c9494384-5d68-4ec4-85b7-0dc1c0905989 for instance with vm_state active and task_state deleting.
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.746 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.747 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:19 compute-2 nova_compute[232428]: 2025-11-29 08:53:19.801 232432 DEBUG oslo_concurrency.processutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:53:20 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/105516729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.319 232432 DEBUG oslo_concurrency.processutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.326 232432 DEBUG nova.compute.provider_tree [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.385 232432 DEBUG nova.scheduler.client.report [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.422 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.442 232432 INFO nova.scheduler.client.report [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 20c17446-b5ab-4a14-acb9-c68d36bc66c1
Nov 29 08:53:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:20 compute-2 nova_compute[232428]: 2025-11-29 08:53:20.503 232432 DEBUG oslo_concurrency.lockutils [None req-4ae3b6bb-ca65-4673-a0e3-7c1b2a6cb453 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "20c17446-b5ab-4a14-acb9-c68d36bc66c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:20 compute-2 ceph-mon[77138]: pgmap v3540: 305 pgs: 305 active+clean; 435 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 9.7 MiB/s wr, 211 op/s
Nov 29 08:53:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/396321426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/105516729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:20 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 08:53:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:22 compute-2 ceph-mon[77138]: pgmap v3541: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 174 op/s
Nov 29 08:53:22 compute-2 ceph-mon[77138]: osdmap e426: 3 total, 3 up, 3 in
Nov 29 08:53:22 compute-2 nova_compute[232428]: 2025-11-29 08:53:22.558 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 08:53:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:24 compute-2 ceph-mon[77138]: osdmap e427: 3 total, 3 up, 3 in
Nov 29 08:53:24 compute-2 ceph-mon[77138]: pgmap v3544: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 7.1 KiB/s wr, 158 op/s
Nov 29 08:53:24 compute-2 nova_compute[232428]: 2025-11-29 08:53:24.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:24.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:25 compute-2 nova_compute[232428]: 2025-11-29 08:53:25.388 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:25 compute-2 nova_compute[232428]: 2025-11-29 08:53:25.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:26.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:26 compute-2 ceph-mon[77138]: pgmap v3545: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 6.9 KiB/s wr, 153 op/s
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.309 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:53:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:27.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:27 compute-2 nova_compute[232428]: 2025-11-29 08:53:27.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:27 compute-2 podman[330550]: 2025-11-29 08:53:27.718724227 +0000 UTC m=+0.113545402 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:53:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:53:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44019060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:53:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:53:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44019060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:53:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:28.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:28 compute-2 ceph-mon[77138]: pgmap v3546: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 5.9 KiB/s wr, 115 op/s
Nov 29 08:53:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/44019060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:53:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/44019060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:53:29 compute-2 nova_compute[232428]: 2025-11-29 08:53:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:30 compute-2 nova_compute[232428]: 2025-11-29 08:53:30.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:30 compute-2 ceph-mon[77138]: pgmap v3547: 305 pgs: 305 active+clean; 186 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 46 op/s
Nov 29 08:53:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4030366184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:31 compute-2 nova_compute[232428]: 2025-11-29 08:53:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:31.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 08:53:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:32 compute-2 nova_compute[232428]: 2025-11-29 08:53:32.504 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406397.5028539, 20c17446-b5ab-4a14-acb9-c68d36bc66c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:53:32 compute-2 nova_compute[232428]: 2025-11-29 08:53:32.505 232432 INFO nova.compute.manager [-] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] VM Stopped (Lifecycle Event)
Nov 29 08:53:32 compute-2 nova_compute[232428]: 2025-11-29 08:53:32.527 232432 DEBUG nova.compute.manager [None req-fda79a2c-2ad7-44ca-8d14-64174f85ad78 - - - - - -] [instance: 20c17446-b5ab-4a14-acb9-c68d36bc66c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:53:32 compute-2 nova_compute[232428]: 2025-11-29 08:53:32.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:32 compute-2 ceph-mon[77138]: pgmap v3548: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 08:53:32 compute-2 ceph-mon[77138]: osdmap e428: 3 total, 3 up, 3 in
Nov 29 08:53:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:33.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:34.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:34 compute-2 ceph-mon[77138]: pgmap v3550: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Nov 29 08:53:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2304781969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:34.969 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:53:34 compute-2 nova_compute[232428]: 2025-11-29 08:53:34.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:34 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:34.971 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.234 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.236 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.237 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:53:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4209505043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.717 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2876622984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4209505043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:35 compute-2 sudo[330594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:35 compute-2 sudo[330594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:35 compute-2 sudo[330594]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:35 compute-2 sudo[330622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:35 compute-2 sudo[330622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:35 compute-2 sudo[330622]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.925 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.927 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4159MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.928 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:35 compute-2 nova_compute[232428]: 2025-11-29 08:53:35.928 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.093 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.094 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.128 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:36.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:53:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1397109885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.561 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.569 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.593 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.632 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:53:36 compute-2 nova_compute[232428]: 2025-11-29 08:53:36.633 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:36 compute-2 ceph-mon[77138]: pgmap v3551: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 08:53:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1397109885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:37.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:37 compute-2 nova_compute[232428]: 2025-11-29 08:53:37.565 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:38.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:38 compute-2 nova_compute[232428]: 2025-11-29 08:53:38.634 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:38 compute-2 nova_compute[232428]: 2025-11-29 08:53:38.672 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:38 compute-2 nova_compute[232428]: 2025-11-29 08:53:38.672 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:53:38 compute-2 nova_compute[232428]: 2025-11-29 08:53:38.673 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:53:38 compute-2 ceph-mon[77138]: pgmap v3552: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 08:53:38 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:53:38.974 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:39.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:39 compute-2 podman[330671]: 2025-11-29 08:53:39.753737938 +0000 UTC m=+0.144150425 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:53:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:40.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:40 compute-2 ceph-mon[77138]: pgmap v3553: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Nov 29 08:53:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2740549270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:42.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:42 compute-2 nova_compute[232428]: 2025-11-29 08:53:42.568 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:43 compute-2 ceph-mon[77138]: pgmap v3554: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3476609685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:53:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:43.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:53:44 compute-2 ceph-mon[77138]: pgmap v3555: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:46 compute-2 ceph-mon[77138]: pgmap v3556: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.483 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.483 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.570 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.572 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.651 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:53:47 compute-2 podman[330702]: 2025-11-29 08:53:47.681289283 +0000 UTC m=+0.079061670 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.745 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.746 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.753 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.754 232432 INFO nova.compute.claims [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:53:47 compute-2 nova_compute[232428]: 2025-11-29 08:53:47.939 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:53:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/48449930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:48 compute-2 nova_compute[232428]: 2025-11-29 08:53:48.417 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:48 compute-2 nova_compute[232428]: 2025-11-29 08:53:48.426 232432 DEBUG nova.compute.provider_tree [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:53:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:48.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:48 compute-2 ceph-mon[77138]: pgmap v3557: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/48449930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:53:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:49.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:49 compute-2 nova_compute[232428]: 2025-11-29 08:53:49.993 232432 DEBUG nova.scheduler.client.report [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:53:50 compute-2 nova_compute[232428]: 2025-11-29 08:53:50.366 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:50 compute-2 nova_compute[232428]: 2025-11-29 08:53:50.368 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:53:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:50.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:50 compute-2 ceph-mon[77138]: pgmap v3558: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:51 compute-2 nova_compute[232428]: 2025-11-29 08:53:51.988 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:53:51 compute-2 nova_compute[232428]: 2025-11-29 08:53:51.988 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.023 232432 INFO nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.069 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.377 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.379 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.379 232432 INFO nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Creating image(s)
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.425 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:52.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:52 compute-2 ceph-mon[77138]: pgmap v3559: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.883 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.915 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.919 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:52 compute-2 nova_compute[232428]: 2025-11-29 08:53:52.967 232432 DEBUG nova.policy [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.022 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.022 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.023 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.023 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.049 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:53 compute-2 nova_compute[232428]: 2025-11-29 08:53:53.052 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:54 compute-2 ceph-mon[77138]: pgmap v3560: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.407 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:54.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.524 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.669 232432 DEBUG nova.objects.instance [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.674 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Successfully created port: 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.694 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.694 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Ensure instance console log exists: /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.695 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.696 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:54 compute-2 nova_compute[232428]: 2025-11-29 08:53:54.696 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:55 compute-2 sudo[330917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:55 compute-2 sudo[330917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:55 compute-2 sudo[330917]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:53:56 compute-2 sudo[330943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:53:56 compute-2 sudo[330943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:53:56 compute-2 sudo[330943]: pam_unix(sudo:session): session closed for user root
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.308 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Successfully updated port: 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:53:56 compute-2 ceph-mon[77138]: pgmap v3561: 305 pgs: 305 active+clean; 163 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.329 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.329 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.329 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.434 232432 DEBUG nova.compute.manager [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.435 232432 DEBUG nova.compute.manager [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing instance network info cache due to event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.435 232432 DEBUG oslo_concurrency.lockutils [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:53:56 compute-2 nova_compute[232428]: 2025-11-29 08:53:56.493 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:53:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:56.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:53:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.663 232432 DEBUG nova.network.neutron [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.684 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.685 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Instance network_info: |[{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.686 232432 DEBUG oslo_concurrency.lockutils [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.686 232432 DEBUG nova.network.neutron [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.692 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Start _get_guest_xml network_info=[{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.698 232432 WARNING nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.704 232432 DEBUG nova.virt.libvirt.host [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.705 232432 DEBUG nova.virt.libvirt.host [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.708 232432 DEBUG nova.virt.libvirt.host [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.709 232432 DEBUG nova.virt.libvirt.host [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.710 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.711 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.712 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.712 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.713 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.713 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.713 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.714 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.714 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.715 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.715 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.716 232432 DEBUG nova.virt.hardware [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.720 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:57 compute-2 nova_compute[232428]: 2025-11-29 08:53:57.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:53:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1816794793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.219 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.265 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.271 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:58 compute-2 ceph-mon[77138]: pgmap v3562: 305 pgs: 305 active+clean; 163 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 08:53:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1816794793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:53:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:58 compute-2 podman[331030]: 2025-11-29 08:53:58.701965795 +0000 UTC m=+0.095984595 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:53:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:53:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1336060656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.795 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.798 232432 DEBUG nova.virt.libvirt.vif [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:53:52Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.798 232432 DEBUG nova.network.os_vif_util [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.800 232432 DEBUG nova.network.os_vif_util [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.801 232432 DEBUG nova.objects.instance [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.825 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <uuid>2c409efc-d2fd-4ab1-813e-cb64784e0e69</uuid>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <name>instance-000000cd</name>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:53:57</nova:creationTime>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:53:58 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <system>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="serial">2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="uuid">2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </system>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <os>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </os>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <features>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </features>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk">
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config">
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:53:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:a0:8e:13"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <target dev="tap4bfded6b-48"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log" append="off"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <video>
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </video>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:53:58 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:53:58 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:53:58 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:53:58 compute-2 nova_compute[232428]: </domain>
Nov 29 08:53:58 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.826 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Preparing to wait for external event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.826 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.827 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.827 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.828 232432 DEBUG nova.virt.libvirt.vif [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:53:52Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.828 232432 DEBUG nova.network.os_vif_util [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.829 232432 DEBUG nova.network.os_vif_util [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.829 232432 DEBUG os_vif [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.830 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.831 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.838 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bfded6b-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.839 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bfded6b-48, col_values=(('external_ids', {'iface-id': '4bfded6b-4829-43d9-aed2-fc22d2a1bc63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:8e:13', 'vm-uuid': '2c409efc-d2fd-4ab1-813e-cb64784e0e69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.840 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:58 compute-2 NetworkManager[48993]: <info>  [1764406438.8417] manager: (tap4bfded6b-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.842 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.848 232432 INFO os_vif [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48')
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.903 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.903 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.903 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:a0:8e:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.904 232432 INFO nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Using config drive
Nov 29 08:53:58 compute-2 nova_compute[232428]: 2025-11-29 08:53:58.936 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.347 232432 DEBUG nova.network.neutron [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated VIF entry in instance network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.348 232432 DEBUG nova.network.neutron [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:53:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1336060656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.389 232432 DEBUG oslo_concurrency.lockutils [req-03361044-dcd8-43f4-891a-58f70093e794 req-46848e46-b175-4bf7-8a7d-5ad2ae1765f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:53:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:53:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:53:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.510 232432 INFO nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Creating config drive at /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.514 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6f0bfvu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.672 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6f0bfvu" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.716 232432 DEBUG nova.storage.rbd_utils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.720 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.952 232432 DEBUG oslo_concurrency.processutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config 2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:53:59 compute-2 nova_compute[232428]: 2025-11-29 08:53:59.953 232432 INFO nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Deleting local config drive /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/disk.config because it was imported into RBD.
Nov 29 08:54:00 compute-2 kernel: tap4bfded6b-48: entered promiscuous mode
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.0361] manager: (tap4bfded6b-48): new Tun device (/org/freedesktop/NetworkManager/Devices/461)
Nov 29 08:54:00 compute-2 ovn_controller[134375]: 2025-11-29T08:54:00Z|00963|binding|INFO|Claiming lport 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 for this chassis.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.036 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_controller[134375]: 2025-11-29T08:54:00Z|00964|binding|INFO|4bfded6b-4829-43d9-aed2-fc22d2a1bc63: Claiming fa:16:3e:a0:8e:13 10.100.0.3
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.059 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:8e:13 10.100.0.3'], port_security=['fa:16:3e:a0:8e:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2c409efc-d2fd-4ab1-813e-cb64784e0e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96164d04-a7ec-4231-906f-66c0410686bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf54ccd2-3d8f-4c35-aaac-ee6de584f92a, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=4bfded6b-4829-43d9-aed2-fc22d2a1bc63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.061 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 in datapath 99ea6f62-0590-4a47-a4f0-69449e6d5084 bound to our chassis
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.063 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99ea6f62-0590-4a47-a4f0-69449e6d5084
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.084 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a84927fc-6493-4b95-875e-2c9be37dd793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.085 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99ea6f62-01 in ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.088 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99ea6f62-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.088 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[61d93ec2-6a1d-4073-8a16-54846da25854]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.090 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[cac4fa26-3633-42a2-a7ef-7e3d949c4199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 systemd-udevd[331127]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:54:00 compute-2 systemd-machined[194747]: New machine qemu-100-instance-000000cd.
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.1117] device (tap4bfded6b-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.1135] device (tap4bfded6b-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.118 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[81e63731-ead0-406a-9cfa-951d69c88bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 systemd[1]: Started Virtual Machine qemu-100-instance-000000cd.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_controller[134375]: 2025-11-29T08:54:00Z|00965|binding|INFO|Setting lport 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 ovn-installed in OVS
Nov 29 08:54:00 compute-2 ovn_controller[134375]: 2025-11-29T08:54:00Z|00966|binding|INFO|Setting lport 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 up in Southbound
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.135 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.144 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9fc1d3-b8c0-4bcc-97bd-8f2ae1af02ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.193 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e05a9cf5-ba88-4149-8ac3-a1b3bf36a8e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.198 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[eb00ab95-bb6c-4539-90f6-2fdd02a24e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.1992] manager: (tap99ea6f62-00): new Veth device (/org/freedesktop/NetworkManager/Devices/462)
Nov 29 08:54:00 compute-2 systemd-udevd[331130]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.242 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ec59f3a1-3d00-47f6-836a-2dd3d4c14312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.249 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[74b50c3f-545d-4ac5-89e1-a370b5770a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.2773] device (tap99ea6f62-00): carrier: link connected
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.283 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8af4f86e-ccf1-427c-84cc-5aa1c4f13647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.306 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2c38401e-b014-424a-8920-16ad781b57d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99ea6f62-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5e:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940654, 'reachable_time': 39631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331159, 'error': None, 'target': 'ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.329 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc897c0-4df2-4f42-8625-3b4ff43cb348]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5ee7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 940654, 'tstamp': 940654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331160, 'error': None, 'target': 'ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.352 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c3c544-3ac6-410f-aea2-b2021de03a37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99ea6f62-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5e:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940654, 'reachable_time': 39631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331161, 'error': None, 'target': 'ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ceph-mon[77138]: pgmap v3563: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.405 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7bf0e4-27ba-4354-b534-4b52da6f91cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.512 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c92907c7-6935-4ee9-af58-d727002c32b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.513 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99ea6f62-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.514 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.514 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99ea6f62-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:00 compute-2 NetworkManager[48993]: <info>  [1764406440.5173] manager: (tap99ea6f62-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 kernel: tap99ea6f62-00: entered promiscuous mode
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.527 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99ea6f62-00, col_values=(('external_ids', {'iface-id': 'dda73cdf-320d-41ae-b17d-5408980fdc26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_controller[134375]: 2025-11-29T08:54:00Z|00967|binding|INFO|Releasing lport dda73cdf-320d-41ae-b17d-5408980fdc26 from this chassis (sb_readonly=0)
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.552 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.553 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99ea6f62-0590-4a47-a4f0-69449e6d5084.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99ea6f62-0590-4a47-a4f0-69449e6d5084.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.554 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d3f4bc-818b-4bc8-9d94-584c1890e0a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.555 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-99ea6f62-0590-4a47-a4f0-69449e6d5084
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/99ea6f62-0590-4a47-a4f0-69449e6d5084.pid.haproxy
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 99ea6f62-0590-4a47-a4f0-69449e6d5084
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:54:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:00.556 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'env', 'PROCESS_TAG=haproxy-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99ea6f62-0590-4a47-a4f0-69449e6d5084.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.623 232432 DEBUG nova.compute.manager [req-2cf30287-13bf-44c8-a524-39a7751ff511 req-6aee8f60-2043-442a-8126-bcaba16b9d01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.624 232432 DEBUG oslo_concurrency.lockutils [req-2cf30287-13bf-44c8-a524-39a7751ff511 req-6aee8f60-2043-442a-8126-bcaba16b9d01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.624 232432 DEBUG oslo_concurrency.lockutils [req-2cf30287-13bf-44c8-a524-39a7751ff511 req-6aee8f60-2043-442a-8126-bcaba16b9d01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.624 232432 DEBUG oslo_concurrency.lockutils [req-2cf30287-13bf-44c8-a524-39a7751ff511 req-6aee8f60-2043-442a-8126-bcaba16b9d01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.625 232432 DEBUG nova.compute.manager [req-2cf30287-13bf-44c8-a524-39a7751ff511 req-6aee8f60-2043-442a-8126-bcaba16b9d01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Processing event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.656 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406440.6560566, 2c409efc-d2fd-4ab1-813e-cb64784e0e69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.656 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] VM Started (Lifecycle Event)
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.659 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.678 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.685 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.686 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.692 232432 INFO nova.virt.libvirt.driver [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Instance spawned successfully.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.693 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.707 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.707 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406440.656254, 2c409efc-d2fd-4ab1-813e-cb64784e0e69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.707 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] VM Paused (Lifecycle Event)
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.719 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.720 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.720 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.721 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.721 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.722 232432 DEBUG nova.virt.libvirt.driver [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.728 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.731 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406440.6659224, 2c409efc-d2fd-4ab1-813e-cb64784e0e69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.731 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] VM Resumed (Lifecycle Event)
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.753 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.757 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.776 232432 INFO nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Took 8.40 seconds to spawn the instance on the hypervisor.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.777 232432 DEBUG nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.787 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.826 232432 INFO nova.compute.manager [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Took 13.12 seconds to build instance.
Nov 29 08:54:00 compute-2 nova_compute[232428]: 2025-11-29 08:54:00.844 232432 DEBUG oslo_concurrency.lockutils [None req-37c4087e-59ba-454f-93dc-37dc193d7ca4 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:01 compute-2 podman[331232]: 2025-11-29 08:54:01.013540488 +0000 UTC m=+0.070788103 container create ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:54:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:01 compute-2 podman[331232]: 2025-11-29 08:54:00.969126156 +0000 UTC m=+0.026373811 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:54:01 compute-2 systemd[1]: Started libpod-conmon-ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416.scope.
Nov 29 08:54:01 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:54:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32206c62920229e0558bf73ba272a46b1ba6da9a0bb87a80b49c449b4f045012/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:54:01 compute-2 podman[331232]: 2025-11-29 08:54:01.139803344 +0000 UTC m=+0.197050959 container init ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:54:01 compute-2 podman[331232]: 2025-11-29 08:54:01.150647582 +0000 UTC m=+0.207895197 container start ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:54:01 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [NOTICE]   (331251) : New worker (331253) forked
Nov 29 08:54:01 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [NOTICE]   (331251) : Loading success.
Nov 29 08:54:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:01.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:02 compute-2 ceph-mon[77138]: pgmap v3564: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 08:54:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.576 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.759 232432 DEBUG nova.compute.manager [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.760 232432 DEBUG oslo_concurrency.lockutils [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.760 232432 DEBUG oslo_concurrency.lockutils [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.760 232432 DEBUG oslo_concurrency.lockutils [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.761 232432 DEBUG nova.compute.manager [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:54:02 compute-2 nova_compute[232428]: 2025-11-29 08:54:02.761 232432 WARNING nova.compute.manager [req-6a8218b8-727e-4e3f-b309-25354dca2015 req-c8e1f996-ebed-43f0-be32-68e7d16331c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 for instance with vm_state active and task_state None.
Nov 29 08:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:03.358 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:03.358 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:03.359 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:03.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:03 compute-2 nova_compute[232428]: 2025-11-29 08:54:03.841 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:04 compute-2 ceph-mon[77138]: pgmap v3565: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 08:54:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:04.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:05.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:06 compute-2 ceph-mon[77138]: pgmap v3566: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 08:54:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:07 compute-2 nova_compute[232428]: 2025-11-29 08:54:07.580 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:08.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:08 compute-2 ceph-mon[77138]: pgmap v3567: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 336 KiB/s wr, 86 op/s
Nov 29 08:54:08 compute-2 nova_compute[232428]: 2025-11-29 08:54:08.845 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:09 compute-2 ovn_controller[134375]: 2025-11-29T08:54:09Z|00968|binding|INFO|Releasing lport dda73cdf-320d-41ae-b17d-5408980fdc26 from this chassis (sb_readonly=0)
Nov 29 08:54:09 compute-2 NetworkManager[48993]: <info>  [1764406449.4087] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Nov 29 08:54:09 compute-2 NetworkManager[48993]: <info>  [1764406449.4099] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Nov 29 08:54:09 compute-2 nova_compute[232428]: 2025-11-29 08:54:09.414 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:09 compute-2 ovn_controller[134375]: 2025-11-29T08:54:09Z|00969|binding|INFO|Releasing lport dda73cdf-320d-41ae-b17d-5408980fdc26 from this chassis (sb_readonly=0)
Nov 29 08:54:09 compute-2 nova_compute[232428]: 2025-11-29 08:54:09.469 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:09 compute-2 nova_compute[232428]: 2025-11-29 08:54:09.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:10.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:10 compute-2 ceph-mon[77138]: pgmap v3568: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 336 KiB/s wr, 86 op/s
Nov 29 08:54:10 compute-2 nova_compute[232428]: 2025-11-29 08:54:10.606 232432 DEBUG nova.compute.manager [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:10 compute-2 nova_compute[232428]: 2025-11-29 08:54:10.607 232432 DEBUG nova.compute.manager [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing instance network info cache due to event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:54:10 compute-2 nova_compute[232428]: 2025-11-29 08:54:10.607 232432 DEBUG oslo_concurrency.lockutils [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:54:10 compute-2 nova_compute[232428]: 2025-11-29 08:54:10.608 232432 DEBUG oslo_concurrency.lockutils [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:54:10 compute-2 nova_compute[232428]: 2025-11-29 08:54:10.608 232432 DEBUG nova.network.neutron [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:54:10 compute-2 podman[331268]: 2025-11-29 08:54:10.753267893 +0000 UTC m=+0.136454686 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 08:54:10 compute-2 sudo[331287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:10 compute-2 sudo[331287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:10 compute-2 sudo[331287]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:10 compute-2 sudo[331319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:54:10 compute-2 sudo[331319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:10 compute-2 sudo[331319]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:10 compute-2 sudo[331344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:10 compute-2 sudo[331344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:10 compute-2 sudo[331344]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:11 compute-2 sudo[331369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:54:11 compute-2 sudo[331369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:11.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:11 compute-2 sudo[331369]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:12 compute-2 ceph-mon[77138]: pgmap v3569: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:54:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:54:12 compute-2 nova_compute[232428]: 2025-11-29 08:54:12.582 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:13 compute-2 nova_compute[232428]: 2025-11-29 08:54:13.310 232432 DEBUG nova.network.neutron [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated VIF entry in instance network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:54:13 compute-2 nova_compute[232428]: 2025-11-29 08:54:13.311 232432 DEBUG nova.network.neutron [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:54:13 compute-2 nova_compute[232428]: 2025-11-29 08:54:13.390 232432 DEBUG oslo_concurrency.lockutils [req-61d2b527-38d3-4792-bcfb-ab3976f78bd7 req-817662c9-04ee-4e01-a615-3dfc359f41d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:54:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:13 compute-2 ovn_controller[134375]: 2025-11-29T08:54:13Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:8e:13 10.100.0.3
Nov 29 08:54:13 compute-2 ovn_controller[134375]: 2025-11-29T08:54:13Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:8e:13 10.100.0.3
Nov 29 08:54:13 compute-2 nova_compute[232428]: 2025-11-29 08:54:13.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:14 compute-2 ceph-mon[77138]: pgmap v3570: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 29 08:54:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:16 compute-2 sudo[331428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:16 compute-2 sudo[331428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:16 compute-2 sudo[331428]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:16 compute-2 sudo[331453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:16 compute-2 sudo[331453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:16 compute-2 sudo[331453]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:16 compute-2 sshd-session[331475]: Connection closed by 192.155.90.220 port 11758 [preauth]
Nov 29 08:54:16 compute-2 sshd-session[331480]: Connection closed by 192.155.90.220 port 11768 [preauth]
Nov 29 08:54:16 compute-2 sshd-session[331482]: Connection closed by 192.155.90.220 port 11784 [preauth]
Nov 29 08:54:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:16 compute-2 ceph-mon[77138]: pgmap v3571: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 08:54:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:17 compute-2 nova_compute[232428]: 2025-11-29 08:54:17.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:18 compute-2 sudo[331485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:18 compute-2 sudo[331485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:18 compute-2 sudo[331485]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:18 compute-2 sudo[331511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:54:18 compute-2 sudo[331511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:18 compute-2 sudo[331511]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:18 compute-2 podman[331509]: 2025-11-29 08:54:18.458301486 +0000 UTC m=+0.102373384 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:54:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:18 compute-2 ceph-mon[77138]: pgmap v3572: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Nov 29 08:54:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:54:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:54:18 compute-2 nova_compute[232428]: 2025-11-29 08:54:18.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:19.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:20.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:20 compute-2 ceph-mon[77138]: pgmap v3573: 305 pgs: 305 active+clean; 199 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Nov 29 08:54:21 compute-2 sshd-session[331556]: Invalid user banxgg from 45.148.10.240 port 58114
Nov 29 08:54:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:21 compute-2 sshd-session[331556]: Connection closed by invalid user banxgg 45.148.10.240 port 58114 [preauth]
Nov 29 08:54:21 compute-2 nova_compute[232428]: 2025-11-29 08:54:21.324 232432 INFO nova.compute.manager [None req-34f7aa46-5337-4537-9d7c-d6de6b763d0d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Get console output
Nov 29 08:54:21 compute-2 nova_compute[232428]: 2025-11-29 08:54:21.334 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 08:54:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:21.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:22 compute-2 nova_compute[232428]: 2025-11-29 08:54:22.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:22 compute-2 nova_compute[232428]: 2025-11-29 08:54:22.589 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:22 compute-2 ceph-mon[77138]: pgmap v3574: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:54:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:23.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:23 compute-2 nova_compute[232428]: 2025-11-29 08:54:23.856 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:24 compute-2 ceph-mon[77138]: pgmap v3575: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:54:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:25.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:26 compute-2 nova_compute[232428]: 2025-11-29 08:54:26.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:26 compute-2 ceph-mon[77138]: pgmap v3576: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 08:54:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:27 compute-2 nova_compute[232428]: 2025-11-29 08:54:27.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:54:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3733380438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:54:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:54:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3733380438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:54:28 compute-2 ceph-mon[77138]: pgmap v3577: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 54 KiB/s wr, 10 op/s
Nov 29 08:54:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3733380438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:54:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3733380438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:54:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:28 compute-2 nova_compute[232428]: 2025-11-29 08:54:28.554 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:28 compute-2 nova_compute[232428]: 2025-11-29 08:54:28.554 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:28 compute-2 nova_compute[232428]: 2025-11-29 08:54:28.555 232432 DEBUG nova.objects.instance [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'flavor' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:54:28 compute-2 nova_compute[232428]: 2025-11-29 08:54:28.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.447 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.448 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.449 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.449 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:54:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.618 232432 DEBUG nova.objects.instance [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:54:29 compute-2 nova_compute[232428]: 2025-11-29 08:54:29.631 232432 DEBUG nova.network.neutron [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:54:29 compute-2 podman[331562]: 2025-11-29 08:54:29.691154408 +0000 UTC m=+0.093415086 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:54:30 compute-2 ceph-mon[77138]: pgmap v3578: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 54 KiB/s wr, 10 op/s
Nov 29 08:54:30 compute-2 nova_compute[232428]: 2025-11-29 08:54:30.525 232432 DEBUG nova.policy [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:54:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:30.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:31.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.409 232432 DEBUG nova.network.neutron [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Successfully created port: 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.444 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:54:32 compute-2 ceph-mon[77138]: pgmap v3579: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 42 KiB/s wr, 9 op/s
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.461 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.462 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.462 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:32.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:32 compute-2 nova_compute[232428]: 2025-11-29 08:54:32.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.356 232432 DEBUG nova.network.neutron [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Successfully updated port: 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.381 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.381 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.381 232432 DEBUG nova.network.neutron [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.508 232432 DEBUG nova.compute.manager [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-changed-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.509 232432 DEBUG nova.compute.manager [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing instance network info cache due to event network-changed-149b5dbf-88b0-4bb5-b415-a02c50d7bf87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.509 232432 DEBUG oslo_concurrency.lockutils [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:54:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:33.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:33 compute-2 nova_compute[232428]: 2025-11-29 08:54:33.862 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:34 compute-2 ceph-mon[77138]: pgmap v3580: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 08:54:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:34.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.356 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.357 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.357 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.357 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.358 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:54:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:54:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1330076228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.886 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.969 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:54:35 compute-2 nova_compute[232428]: 2025-11-29 08:54:35.970 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:54:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.150 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.151 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3961MB free_disk=20.942718505859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.151 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.151 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.374 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.375 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.376 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:54:36 compute-2 sudo[331609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:36 compute-2 sudo[331609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:36 compute-2 sudo[331609]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.457 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:54:36 compute-2 sudo[331634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:36 compute-2 sudo[331634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:36 compute-2 sudo[331634]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:36 compute-2 ceph-mon[77138]: pgmap v3581: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Nov 29 08:54:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1330076228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1233880893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:36.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.555 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.556 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.584 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.621 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:54:36 compute-2 nova_compute[232428]: 2025-11-29 08:54:36.687 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:54:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:54:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2549214126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.190 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.200 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.216 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.250 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.251 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.342 232432 DEBUG nova.network.neutron [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.408 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.412 232432 DEBUG oslo_concurrency.lockutils [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.412 232432 DEBUG nova.network.neutron [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing network info cache for port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.417 232432 DEBUG nova.virt.libvirt.vif [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.417 232432 DEBUG nova.network.os_vif_util [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.418 232432 DEBUG nova.network.os_vif_util [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.419 232432 DEBUG os_vif [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.420 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.421 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.421 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.425 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.425 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap149b5dbf-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.426 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap149b5dbf-88, col_values=(('external_ids', {'iface-id': '149b5dbf-88b0-4bb5-b415-a02c50d7bf87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:c7:f5', 'vm-uuid': '2c409efc-d2fd-4ab1-813e-cb64784e0e69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.427 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.4292] manager: (tap149b5dbf-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.437 232432 INFO os_vif [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88')
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.438 232432 DEBUG nova.virt.libvirt.vif [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.439 232432 DEBUG nova.network.os_vif_util [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.440 232432 DEBUG nova.network.os_vif_util [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.443 232432 DEBUG nova.virt.libvirt.guest [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] attach device xml: <interface type="ethernet">
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:33:c7:f5"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <target dev="tap149b5dbf-88"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]: </interface>
Nov 29 08:54:37 compute-2 nova_compute[232428]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:54:37 compute-2 kernel: tap149b5dbf-88: entered promiscuous mode
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.4599] manager: (tap149b5dbf-88): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Nov 29 08:54:37 compute-2 ovn_controller[134375]: 2025-11-29T08:54:37Z|00970|binding|INFO|Claiming lport 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 for this chassis.
Nov 29 08:54:37 compute-2 ovn_controller[134375]: 2025-11-29T08:54:37Z|00971|binding|INFO|149b5dbf-88b0-4bb5-b415-a02c50d7bf87: Claiming fa:16:3e:33:c7:f5 10.100.0.28
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.476 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:c7:f5 10.100.0.28'], port_security=['fa:16:3e:33:c7:f5 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '2c409efc-d2fd-4ab1-813e-cb64784e0e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=149b5dbf-88b0-4bb5-b415-a02c50d7bf87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.478 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c bound to our chassis
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.480 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbf505d1-7919-461d-b3a8-5568e119b40c
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.501 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfabcc7-5a01-4f74-b7fb-3f5a55a532bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.502 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbf505d1-71 in ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.504 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbf505d1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.504 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5148e68b-bf8a-4b23-9c7e-201e55b80c25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.505 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.506 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b4186f-997b-4553-bef0-28bc90874902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_controller[134375]: 2025-11-29T08:54:37Z|00972|binding|INFO|Setting lport 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 ovn-installed in OVS
Nov 29 08:54:37 compute-2 ovn_controller[134375]: 2025-11-29T08:54:37Z|00973|binding|INFO|Setting lport 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 up in Southbound
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:37 compute-2 systemd-udevd[331689]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.533 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e8c5c2-b659-49c7-a01e-14f3ddde2a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.5454] device (tap149b5dbf-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.5465] device (tap149b5dbf-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.559 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d90b3cae-7ba7-46c6-b2cd-aa62232579e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/365315365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2549214126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.590 232432 DEBUG nova.virt.libvirt.driver [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.590 232432 DEBUG nova.virt.libvirt.driver [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.590 232432 DEBUG nova.virt.libvirt.driver [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:a0:8e:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.591 232432 DEBUG nova.virt.libvirt.driver [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:33:c7:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.595 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7a17d7-680a-41a0-a141-c82c4b519b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.598 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.603 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc0a592-80cc-4d32-98bf-2d8b7d74f484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.6039] manager: (tapcbf505d1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/468)
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.627 232432 DEBUG nova.virt.libvirt.guest [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:54:37</nova:creationTime>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:54:37 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     <nova:port uuid="149b5dbf-88b0-4bb5-b415-a02c50d7bf87">
Nov 29 08:54:37 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 29 08:54:37 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:54:37 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:54:37 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:54:37 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.639 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9131c581-9f2a-4eb6-8575-6ab9e4d12e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.644 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e07393ad-4e79-4962-b5b4-b6fc3cc43491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.662 232432 DEBUG oslo_concurrency.lockutils [None req-4453937b-e3fa-4a43-b217-a8927d5394af 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.6773] device (tapcbf505d1-70): carrier: link connected
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.684 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f84db145-0d5b-4c9d-8031-4ddbabb25629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.709 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2cbb8a-ac31-4ab9-b1e8-6e5a45d21131]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf505d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944394, 'reachable_time': 21556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331714, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.729 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b36634-b936-459a-9cd9-1bcdd5788380]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f377'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 944394, 'tstamp': 944394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331715, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.745 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[631a5d24-cbf5-44ee-89f6-37abf6660ba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf505d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944394, 'reachable_time': 21556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331716, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.792 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1a61f64c-ba4c-4d56-9ef5-dda1c790b326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.858 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8a78a19f-530e-48cd-93ce-52fe44fadca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.860 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf505d1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.860 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.860 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf505d1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.862 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 kernel: tapcbf505d1-70: entered promiscuous mode
Nov 29 08:54:37 compute-2 NetworkManager[48993]: <info>  [1764406477.8634] manager: (tapcbf505d1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.865 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbf505d1-70, col_values=(('external_ids', {'iface-id': '6ea3e895-3f37-4fb8-8562-8f74fb8c5800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:37 compute-2 ovn_controller[134375]: 2025-11-29T08:54:37Z|00974|binding|INFO|Releasing lport 6ea3e895-3f37-4fb8-8562-8f74fb8c5800 from this chassis (sb_readonly=0)
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 nova_compute[232428]: 2025-11-29 08:54:37.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.881 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.882 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6e240d11-7584-418e-9939-ecf4cc4586e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.883 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-cbf505d1-7919-461d-b3a8-5568e119b40c
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID cbf505d1-7919-461d-b3a8-5568e119b40c
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:54:37 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:37.883 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'env', 'PROCESS_TAG=haproxy-cbf505d1-7919-461d-b3a8-5568e119b40c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbf505d1-7919-461d-b3a8-5568e119b40c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:54:38 compute-2 podman[331749]: 2025-11-29 08:54:38.332794002 +0000 UTC m=+0.065501027 container create 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:54:38 compute-2 systemd[1]: Started libpod-conmon-7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6.scope.
Nov 29 08:54:38 compute-2 podman[331749]: 2025-11-29 08:54:38.295916285 +0000 UTC m=+0.028623340 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:54:38 compute-2 systemd[1]: Started libcrun container.
Nov 29 08:54:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4855df7f702db2329f5f018548159fa3ccaef16064f00ca696ccd58fb3e2e50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:54:38 compute-2 podman[331749]: 2025-11-29 08:54:38.457081528 +0000 UTC m=+0.189788583 container init 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:54:38 compute-2 podman[331749]: 2025-11-29 08:54:38.468177643 +0000 UTC m=+0.200884668 container start 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:54:38 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [NOTICE]   (331769) : New worker (331771) forked
Nov 29 08:54:38 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [NOTICE]   (331769) : Loading success.
Nov 29 08:54:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:54:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:38.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:54:38 compute-2 ceph-mon[77138]: pgmap v3582: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.610 232432 DEBUG nova.compute.manager [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.611 232432 DEBUG oslo_concurrency.lockutils [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.611 232432 DEBUG oslo_concurrency.lockutils [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.612 232432 DEBUG oslo_concurrency.lockutils [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.612 232432 DEBUG nova.compute.manager [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:54:38 compute-2 nova_compute[232428]: 2025-11-29 08:54:38.612 232432 WARNING nova.compute.manager [req-a3421dab-ad36-41e2-b892-70597b86eadb req-3f0e0e38-12fd-45ee-84ba-9e6ac9a172c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 for instance with vm_state active and task_state None.
Nov 29 08:54:38 compute-2 ovn_controller[134375]: 2025-11-29T08:54:38Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:c7:f5 10.100.0.28
Nov 29 08:54:38 compute-2 ovn_controller[134375]: 2025-11-29T08:54:38Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:c7:f5 10.100.0.28
Nov 29 08:54:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.251 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.252 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.252 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:54:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:40.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:40 compute-2 ceph-mon[77138]: pgmap v3583: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.724 232432 DEBUG nova.compute.manager [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.725 232432 DEBUG oslo_concurrency.lockutils [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.725 232432 DEBUG oslo_concurrency.lockutils [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.726 232432 DEBUG oslo_concurrency.lockutils [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.726 232432 DEBUG nova.compute.manager [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:54:40 compute-2 nova_compute[232428]: 2025-11-29 08:54:40.726 232432 WARNING nova.compute.manager [req-f16299bc-b99a-4625-9179-82f273c24a6a req-f264874b-e0cb-4a97-802f-4d8dbbeb1aec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 for instance with vm_state active and task_state None.
Nov 29 08:54:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:41 compute-2 nova_compute[232428]: 2025-11-29 08:54:41.092 232432 DEBUG nova.network.neutron [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated VIF entry in instance network info cache for port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:54:41 compute-2 nova_compute[232428]: 2025-11-29 08:54:41.092 232432 DEBUG nova.network.neutron [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:54:41 compute-2 nova_compute[232428]: 2025-11-29 08:54:41.111 232432 DEBUG oslo_concurrency.lockutils [req-c48fe5ad-0af2-47ee-8283-51c8af0ca24b req-3522ce98-a248-4de6-97fb-0c4e551de96c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:54:41 compute-2 nova_compute[232428]: 2025-11-29 08:54:41.171 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:41.170 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:54:41 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:41.173 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:54:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:41.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:41 compute-2 podman[331781]: 2025-11-29 08:54:41.784821254 +0000 UTC m=+0.165726896 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:54:42 compute-2 nova_compute[232428]: 2025-11-29 08:54:42.428 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:42.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:42 compute-2 nova_compute[232428]: 2025-11-29 08:54:42.602 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:42 compute-2 ceph-mon[77138]: pgmap v3584: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Nov 29 08:54:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:43.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3527041133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:44.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:44 compute-2 ceph-mon[77138]: pgmap v3585: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 0 op/s
Nov 29 08:54:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1187488511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:45.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:46.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:46 compute-2 ceph-mon[77138]: pgmap v3586: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Nov 29 08:54:47 compute-2 nova_compute[232428]: 2025-11-29 08:54:47.431 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:47 compute-2 nova_compute[232428]: 2025-11-29 08:54:47.604 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:48 compute-2 ceph-mon[77138]: pgmap v3587: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.7 KiB/s wr, 0 op/s
Nov 29 08:54:48 compute-2 podman[331812]: 2025-11-29 08:54:48.692086696 +0000 UTC m=+0.098139404 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 08:54:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:54:49.176 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:54:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/926984436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:54:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:50 compute-2 ceph-mon[77138]: pgmap v3588: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 4.3 KiB/s wr, 0 op/s
Nov 29 08:54:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:51.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:52 compute-2 nova_compute[232428]: 2025-11-29 08:54:52.434 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.063001983s ======
Nov 29 08:54:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:52.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.063001983s
Nov 29 08:54:52 compute-2 nova_compute[232428]: 2025-11-29 08:54:52.640 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:52 compute-2 ceph-mon[77138]: pgmap v3589: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 661 KiB/s wr, 6 op/s
Nov 29 08:54:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:53.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:54.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:54 compute-2 ceph-mon[77138]: pgmap v3590: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 KiB/s rd, 661 KiB/s wr, 5 op/s
Nov 29 08:54:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:55.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:54:56 compute-2 sudo[331835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:56 compute-2 sudo[331835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:56 compute-2 sudo[331835]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:56.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:56 compute-2 sudo[331860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:54:56 compute-2 sudo[331860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:54:56 compute-2 sudo[331860]: pam_unix(sudo:session): session closed for user root
Nov 29 08:54:56 compute-2 ceph-mon[77138]: pgmap v3591: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 08:54:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/44141468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:54:57 compute-2 nova_compute[232428]: 2025-11-29 08:54:57.437 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:57.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:57 compute-2 nova_compute[232428]: 2025-11-29 08:54:57.642 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:54:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1740278720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:54:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:54:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:54:58 compute-2 ceph-mon[77138]: pgmap v3592: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:54:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:54:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:54:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:59.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:54:59 compute-2 ceph-mon[77138]: pgmap v3593: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 08:55:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:00.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:00 compute-2 podman[331888]: 2025-11-29 08:55:00.697683419 +0000 UTC m=+0.089555056 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:55:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:01.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:02 compute-2 nova_compute[232428]: 2025-11-29 08:55:02.440 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:02 compute-2 ceph-mon[77138]: pgmap v3594: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 08:55:02 compute-2 nova_compute[232428]: 2025-11-29 08:55:02.644 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:02.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:55:03.359 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:55:03.360 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:55:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:55:03.361 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:55:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:04 compute-2 ceph-mon[77138]: pgmap v3595: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 28 op/s
Nov 29 08:55:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:04.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:06 compute-2 ceph-mon[77138]: pgmap v3596: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 98 op/s
Nov 29 08:55:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:07 compute-2 nova_compute[232428]: 2025-11-29 08:55:07.444 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:07 compute-2 nova_compute[232428]: 2025-11-29 08:55:07.647 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:08 compute-2 ceph-mon[77138]: pgmap v3597: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Nov 29 08:55:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:10 compute-2 ceph-mon[77138]: pgmap v3598: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Nov 29 08:55:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:11.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:11 compute-2 sshd-session[331915]: Connection closed by 74.242.218.17 port 36118
Nov 29 08:55:12 compute-2 nova_compute[232428]: 2025-11-29 08:55:12.447 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:12 compute-2 nova_compute[232428]: 2025-11-29 08:55:12.650 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:12 compute-2 podman[331916]: 2025-11-29 08:55:12.758622954 +0000 UTC m=+0.156018023 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 08:55:13 compute-2 ceph-mon[77138]: pgmap v3599: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 71 op/s
Nov 29 08:55:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:14 compute-2 ceph-mon[77138]: pgmap v3600: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 29 08:55:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:16 compute-2 ceph-mon[77138]: pgmap v3601: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 08:55:16 compute-2 sudo[331944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:16 compute-2 sudo[331944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:16 compute-2 sudo[331944]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:16 compute-2 sudo[331969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:16 compute-2 sudo[331969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:16 compute-2 sudo[331969]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:17 compute-2 nova_compute[232428]: 2025-11-29 08:55:17.450 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:17 compute-2 nova_compute[232428]: 2025-11-29 08:55:17.653 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:18 compute-2 sudo[331995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:18 compute-2 sudo[331995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:18 compute-2 sudo[331995]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:18 compute-2 sudo[332020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:55:18 compute-2 sudo[332020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:18 compute-2 sudo[332020]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:18 compute-2 sudo[332045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:18 compute-2 sudo[332045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:18 compute-2 sudo[332045]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:18 compute-2 podman[332069]: 2025-11-29 08:55:18.860844549 +0000 UTC m=+0.081138795 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 29 08:55:18 compute-2 sudo[332076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:55:18 compute-2 sudo[332076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:18 compute-2 ceph-mon[77138]: pgmap v3602: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 153 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Nov 29 08:55:19 compute-2 podman[332186]: 2025-11-29 08:55:19.599259764 +0000 UTC m=+0.104811801 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:55:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:19 compute-2 podman[332186]: 2025-11-29 08:55:19.725021945 +0000 UTC m=+0.230573982 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:55:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:20 compute-2 ceph-mon[77138]: pgmap v3603: 305 pgs: 305 active+clean; 275 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Nov 29 08:55:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:20 compute-2 podman[332340]: 2025-11-29 08:55:20.735690438 +0000 UTC m=+0.090992961 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:55:20 compute-2 podman[332340]: 2025-11-29 08:55:20.753218053 +0000 UTC m=+0.108520526 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 08:55:20 compute-2 ovn_controller[134375]: 2025-11-29T08:55:20Z|00975|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 29 08:55:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:21 compute-2 podman[332406]: 2025-11-29 08:55:21.109912886 +0000 UTC m=+0.098234566 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.)
Nov 29 08:55:21 compute-2 podman[332406]: 2025-11-29 08:55:21.131987923 +0000 UTC m=+0.120309603 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, version=2.2.4, name=keepalived, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 08:55:21 compute-2 sudo[332076]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:21 compute-2 sudo[332440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:21 compute-2 sudo[332440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:21 compute-2 sudo[332440]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:21 compute-2 sudo[332465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:55:21 compute-2 sudo[332465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:21 compute-2 sudo[332465]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:21 compute-2 sudo[332490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:21 compute-2 sudo[332490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:21 compute-2 sudo[332490]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:21 compute-2 sudo[332515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:55:21 compute-2 sudo[332515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:22 compute-2 ceph-mon[77138]: pgmap v3604: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 08:55:22 compute-2 sudo[332515]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:22 compute-2 nova_compute[232428]: 2025-11-29 08:55:22.453 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:22 compute-2 nova_compute[232428]: 2025-11-29 08:55:22.658 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.043 232432 DEBUG nova.compute.manager [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-changed-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.043 232432 DEBUG nova.compute.manager [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing instance network info cache due to event network-changed-149b5dbf-88b0-4bb5-b415-a02c50d7bf87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.044 232432 DEBUG oslo_concurrency.lockutils [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.044 232432 DEBUG oslo_concurrency.lockutils [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.045 232432 DEBUG nova.network.neutron [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing network info cache for port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:55:23 compute-2 nova_compute[232428]: 2025-11-29 08:55:23.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:55:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:55:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:23.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:24 compute-2 nova_compute[232428]: 2025-11-29 08:55:24.313 232432 DEBUG nova.network.neutron [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated VIF entry in instance network info cache for port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:55:24 compute-2 nova_compute[232428]: 2025-11-29 08:55:24.314 232432 DEBUG nova.network.neutron [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:55:24 compute-2 ceph-mon[77138]: pgmap v3605: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 08:55:24 compute-2 nova_compute[232428]: 2025-11-29 08:55:24.355 232432 DEBUG oslo_concurrency.lockutils [req-944ca630-08a3-4adb-b320-4b7b676e5bd3 req-1e85f4f3-23d1-4aaa-9676-0f8b3d34df2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:55:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:24.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:26 compute-2 nova_compute[232428]: 2025-11-29 08:55:26.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:26 compute-2 ceph-mon[77138]: pgmap v3606: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 08:55:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:27 compute-2 nova_compute[232428]: 2025-11-29 08:55:27.455 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:27.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:27 compute-2 nova_compute[232428]: 2025-11-29 08:55:27.661 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:55:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712426074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:55:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:55:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712426074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:55:28 compute-2 ceph-mon[77138]: pgmap v3607: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 110 KiB/s wr, 24 op/s
Nov 29 08:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3712426074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:55:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3712426074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:55:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:29 compute-2 sudo[332576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:29 compute-2 sudo[332576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:29 compute-2 sudo[332576]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:29 compute-2 sudo[332601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:55:29 compute-2 sudo[332601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:29 compute-2 sudo[332601]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:55:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:30 compute-2 nova_compute[232428]: 2025-11-29 08:55:30.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:30 compute-2 nova_compute[232428]: 2025-11-29 08:55:30.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:55:30 compute-2 nova_compute[232428]: 2025-11-29 08:55:30.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:55:30 compute-2 ceph-mon[77138]: pgmap v3608: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 111 KiB/s wr, 24 op/s
Nov 29 08:55:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:31 compute-2 nova_compute[232428]: 2025-11-29 08:55:31.375 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:55:31 compute-2 nova_compute[232428]: 2025-11-29 08:55:31.376 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:55:31 compute-2 nova_compute[232428]: 2025-11-29 08:55:31.376 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:55:31 compute-2 nova_compute[232428]: 2025-11-29 08:55:31.376 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:55:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:31 compute-2 podman[332627]: 2025-11-29 08:55:31.722622011 +0000 UTC m=+0.111631154 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 08:55:32 compute-2 nova_compute[232428]: 2025-11-29 08:55:32.457 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:32 compute-2 ceph-mon[77138]: pgmap v3609: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 119 KiB/s rd, 104 KiB/s wr, 20 op/s
Nov 29 08:55:32 compute-2 nova_compute[232428]: 2025-11-29 08:55:32.664 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:33.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:34 compute-2 ceph-mon[77138]: pgmap v3610: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s wr, 1 op/s
Nov 29 08:55:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:35.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:36.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.718 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.738 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.738 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.739 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.740 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.740 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.763 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.764 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.764 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.765 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:55:36 compute-2 nova_compute[232428]: 2025-11-29 08:55:36.766 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:55:36 compute-2 ceph-mon[77138]: pgmap v3611: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s wr, 1 op/s
Nov 29 08:55:36 compute-2 sudo[332653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:36 compute-2 sudo[332653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:36 compute-2 sudo[332653]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:37 compute-2 sudo[332697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:37 compute-2 sudo[332697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:37 compute-2 sudo[332697]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:55:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1051668605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.233 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.311 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.312 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.459 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.554 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.555 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3936MB free_disk=20.896987915039062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.556 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.556 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:55:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.637 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.638 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.638 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.675 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:37 compute-2 nova_compute[232428]: 2025-11-29 08:55:37.702 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:55:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1051668605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:55:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3163120820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:38 compute-2 nova_compute[232428]: 2025-11-29 08:55:38.177 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:55:38 compute-2 nova_compute[232428]: 2025-11-29 08:55:38.187 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:55:38 compute-2 nova_compute[232428]: 2025-11-29 08:55:38.217 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:55:38 compute-2 nova_compute[232428]: 2025-11-29 08:55:38.221 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:55:38 compute-2 nova_compute[232428]: 2025-11-29 08:55:38.221 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:55:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:38 compute-2 ceph-mon[77138]: pgmap v3612: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 08:55:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/433257077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3163120820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:39 compute-2 nova_compute[232428]: 2025-11-29 08:55:39.212 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:39 compute-2 nova_compute[232428]: 2025-11-29 08:55:39.213 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:39.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4015313614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:40 compute-2 nova_compute[232428]: 2025-11-29 08:55:40.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:40 compute-2 nova_compute[232428]: 2025-11-29 08:55:40.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:55:40 compute-2 nova_compute[232428]: 2025-11-29 08:55:40.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:55:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:40 compute-2 ceph-mon[77138]: pgmap v3613: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 08:55:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:42 compute-2 nova_compute[232428]: 2025-11-29 08:55:42.462 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:42 compute-2 nova_compute[232428]: 2025-11-29 08:55:42.672 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:42 compute-2 ceph-mon[77138]: pgmap v3614: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 29 08:55:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:43 compute-2 podman[332749]: 2025-11-29 08:55:43.735618295 +0000 UTC m=+0.124927367 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:55:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:44.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:45 compute-2 ceph-mon[77138]: pgmap v3615: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 08:55:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1078295006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:46 compute-2 ceph-mon[77138]: pgmap v3616: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Nov 29 08:55:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1167076756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:55:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:46.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:47 compute-2 nova_compute[232428]: 2025-11-29 08:55:47.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:47.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:47 compute-2 nova_compute[232428]: 2025-11-29 08:55:47.676 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:55:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:55:48 compute-2 ceph-mon[77138]: pgmap v3617: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Nov 29 08:55:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:49.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:49 compute-2 podman[332778]: 2025-11-29 08:55:49.70828484 +0000 UTC m=+0.099767314 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:55:50 compute-2 ceph-mon[77138]: pgmap v3618: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 29 08:55:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:52 compute-2 nova_compute[232428]: 2025-11-29 08:55:52.467 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:52 compute-2 ovn_controller[134375]: 2025-11-29T08:55:52Z|00976|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 29 08:55:52 compute-2 nova_compute[232428]: 2025-11-29 08:55:52.680 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:53 compute-2 ceph-mon[77138]: pgmap v3619: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 KiB/s wr, 0 op/s
Nov 29 08:55:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:53.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:54 compute-2 ceph-mon[77138]: pgmap v3620: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 08:55:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:54.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:55:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:57 compute-2 ceph-mon[77138]: pgmap v3621: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 1 op/s
Nov 29 08:55:57 compute-2 sudo[332802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:57 compute-2 sudo[332802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:57 compute-2 sudo[332802]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:57 compute-2 sudo[332828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:55:57 compute-2 sudo[332828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:55:57 compute-2 sudo[332828]: pam_unix(sudo:session): session closed for user root
Nov 29 08:55:57 compute-2 nova_compute[232428]: 2025-11-29 08:55:57.470 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:55:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:55:57 compute-2 nova_compute[232428]: 2025-11-29 08:55:57.683 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:55:58 compute-2 ceph-mon[77138]: pgmap v3622: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 08:55:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:58.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:55:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:55:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:55:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:00 compute-2 ceph-mon[77138]: pgmap v3623: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 29 08:56:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:56:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:56:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:02 compute-2 nova_compute[232428]: 2025-11-29 08:56:02.473 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:02 compute-2 ceph-mon[77138]: pgmap v3624: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 29 08:56:02 compute-2 nova_compute[232428]: 2025-11-29 08:56:02.687 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:02 compute-2 podman[332856]: 2025-11-29 08:56:02.715810998 +0000 UTC m=+0.110784448 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 08:56:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:02.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:03.360 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:03.360 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:03.362 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:04 compute-2 ceph-mon[77138]: pgmap v3625: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 2.5 KiB/s wr, 0 op/s
Nov 29 08:56:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:04.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:06 compute-2 ceph-mon[77138]: pgmap v3626: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 12 KiB/s wr, 2 op/s
Nov 29 08:56:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:06.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:07 compute-2 nova_compute[232428]: 2025-11-29 08:56:07.476 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:07 compute-2 nova_compute[232428]: 2025-11-29 08:56:07.691 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:08 compute-2 ceph-mon[77138]: pgmap v3627: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 29 08:56:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:08.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:09.787 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:56:09 compute-2 nova_compute[232428]: 2025-11-29 08:56:09.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:09.789 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:56:10 compute-2 ceph-mon[77138]: pgmap v3628: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 11 KiB/s wr, 2 op/s
Nov 29 08:56:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:56:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:56:10 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:10.792 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:12 compute-2 nova_compute[232428]: 2025-11-29 08:56:12.479 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:12 compute-2 nova_compute[232428]: 2025-11-29 08:56:12.693 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:12.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:12 compute-2 ceph-mon[77138]: pgmap v3629: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 13 KiB/s wr, 10 op/s
Nov 29 08:56:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:13.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3467016413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.735 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-149b5dbf-88b0-4bb5-b415-a02c50d7bf87" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.736 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-149b5dbf-88b0-4bb5-b415-a02c50d7bf87" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:14.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.753 232432 DEBUG nova.objects.instance [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'flavor' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:56:14 compute-2 podman[332883]: 2025-11-29 08:56:14.756846864 +0000 UTC m=+0.148816839 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.778 232432 DEBUG nova.virt.libvirt.vif [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.779 232432 DEBUG nova.network.os_vif_util [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.780 232432 DEBUG nova.network.os_vif_util [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:56:14 compute-2 ceph-mon[77138]: pgmap v3630: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.787 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.792 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.796 232432 DEBUG nova.virt.libvirt.driver [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Attempting to detach device tap149b5dbf-88 from instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.796 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:33:c7:f5"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <target dev="tap149b5dbf-88"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </interface>
Nov 29 08:56:14 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.806 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.810 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface>not found in domain: <domain type='kvm' id='100'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <name>instance-000000cd</name>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <uuid>2c409efc-d2fd-4ab1-813e-cb64784e0e69</uuid>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:54:37</nova:creationTime>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:port uuid="149b5dbf-88b0-4bb5-b415-a02c50d7bf87">
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <system>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='serial'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='uuid'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </system>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <os>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </os>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <features>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </features>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk' index='2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config' index='1'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:a0:8e:13'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='tap4bfded6b-48'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:33:c7:f5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='tap149b5dbf-88'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='net1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </target>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </console>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <video>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </video>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c761,c763</label>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c761,c763</imagelabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </domain>
Nov 29 08:56:14 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.811 232432 INFO nova.virt.libvirt.driver [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully detached device tap149b5dbf-88 from instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 from the persistent domain config.
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.811 232432 DEBUG nova.virt.libvirt.driver [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] (1/8): Attempting to detach device tap149b5dbf-88 with device alias net1 from instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.812 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <mac address="fa:16:3e:33:c7:f5"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <model type="virtio"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <mtu size="1442"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <target dev="tap149b5dbf-88"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </interface>
Nov 29 08:56:14 compute-2 nova_compute[232428]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:56:14 compute-2 kernel: tap149b5dbf-88 (unregistering): left promiscuous mode
Nov 29 08:56:14 compute-2 NetworkManager[48993]: <info>  [1764406574.8844] device (tap149b5dbf-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:14 compute-2 ovn_controller[134375]: 2025-11-29T08:56:14Z|00977|binding|INFO|Releasing lport 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 from this chassis (sb_readonly=0)
Nov 29 08:56:14 compute-2 ovn_controller[134375]: 2025-11-29T08:56:14Z|00978|binding|INFO|Setting lport 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 down in Southbound
Nov 29 08:56:14 compute-2 ovn_controller[134375]: 2025-11-29T08:56:14Z|00979|binding|INFO|Removing iface tap149b5dbf-88 ovn-installed in OVS
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.908 232432 DEBUG nova.virt.libvirt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Received event <DeviceRemovedEvent: 1764406574.907553, 2c409efc-d2fd-4ab1-813e-cb64784e0e69 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.910 232432 DEBUG nova.virt.libvirt.driver [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Start waiting for the detach event from libvirt for device tap149b5dbf-88 with device alias net1 for instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.911 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.916 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface>not found in domain: <domain type='kvm' id='100'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <name>instance-000000cd</name>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <uuid>2c409efc-d2fd-4ab1-813e-cb64784e0e69</uuid>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:54:37</nova:creationTime>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:port uuid="149b5dbf-88b0-4bb5-b415-a02c50d7bf87">
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <system>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='serial'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='uuid'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </system>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <os>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </os>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <features>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </features>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk' index='2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config' index='1'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:56:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:14.917 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:c7:f5 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '2c409efc-d2fd-4ab1-813e-cb64784e0e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=149b5dbf-88b0-4bb5-b415-a02c50d7bf87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:a0:8e:13'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target dev='tap4bfded6b-48'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       </target>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </console>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <video>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </video>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c761,c763</label>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c761,c763</imagelabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </domain>
Nov 29 08:56:14 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.917 232432 INFO nova.virt.libvirt.driver [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully detached device tap149b5dbf-88 from instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 from the live domain config.
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.918 232432 DEBUG nova.virt.libvirt.vif [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:56:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:14.918 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c unbound from our chassis
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.919 232432 DEBUG nova.network.os_vif_util [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:56:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:14.920 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbf505d1-7919-461d-b3a8-5568e119b40c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.920 232432 DEBUG nova.network.os_vif_util [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.920 232432 DEBUG os_vif [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:56:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:14.922 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[383ba89e-1283-49fe-91e4-2a53972e969b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:14 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:14.923 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c namespace which is not needed anymore
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.924 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap149b5dbf-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.926 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.934 232432 INFO os_vif [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88')
Nov 29 08:56:14 compute-2 nova_compute[232428]: 2025-11-29 08:56:14.935 232432 DEBUG nova.virt.libvirt.guest [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:56:14</nova:creationTime>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:14 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:14 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:14 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:14 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:14 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:56:15 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [NOTICE]   (331769) : haproxy version is 2.8.14-c23fe91
Nov 29 08:56:15 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [NOTICE]   (331769) : path to executable is /usr/sbin/haproxy
Nov 29 08:56:15 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [WARNING]  (331769) : Exiting Master process...
Nov 29 08:56:15 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [ALERT]    (331769) : Current worker (331771) exited with code 143 (Terminated)
Nov 29 08:56:15 compute-2 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[331765]: [WARNING]  (331769) : All workers exited. Exiting... (0)
Nov 29 08:56:15 compute-2 systemd[1]: libpod-7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6.scope: Deactivated successfully.
Nov 29 08:56:15 compute-2 podman[332932]: 2025-11-29 08:56:15.120382481 +0000 UTC m=+0.066326914 container died 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:56:15 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6-userdata-shm.mount: Deactivated successfully.
Nov 29 08:56:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-a4855df7f702db2329f5f018548159fa3ccaef16064f00ca696ccd58fb3e2e50-merged.mount: Deactivated successfully.
Nov 29 08:56:15 compute-2 podman[332932]: 2025-11-29 08:56:15.170765398 +0000 UTC m=+0.116709821 container cleanup 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:56:15 compute-2 systemd[1]: libpod-conmon-7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6.scope: Deactivated successfully.
Nov 29 08:56:15 compute-2 podman[332961]: 2025-11-29 08:56:15.285390052 +0000 UTC m=+0.073279970 container remove 7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.295 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb1c0e-5f2f-4726-a87a-d72bda066d7b]: (4, ('Sat Nov 29 08:56:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c (7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6)\n7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6\nSat Nov 29 08:56:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c (7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6)\n7ba11a3bdeb654feffa578e54d2ebb8149a1fb5f6a44ba81f91cceaf4e883ea6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.297 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[83603093-a32a-4e03-bd0d-b97e87b498aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.298 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf505d1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.300 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:15 compute-2 kernel: tapcbf505d1-70: left promiscuous mode
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.304 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.309 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[0836d4d8-7a6a-454d-a58a-cf8e332501fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.330 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.336 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a821cea1-618e-4fb6-8ac2-26d4e1800686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.338 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[be5b97dc-6608-4c72-a0df-180467535652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.365 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[94ed778d-ca37-47ff-8d7b-422ad63e0218]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944385, 'reachable_time': 30835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332976, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 systemd[1]: run-netns-ovnmeta\x2dcbf505d1\x2d7919\x2d461d\x2db3a8\x2d5568e119b40c.mount: Deactivated successfully.
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.370 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:56:15 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:15.370 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[e7867e40-b4fe-4156-a7ef-cd225dafdbee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.591 232432 DEBUG nova.compute.manager [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-unplugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.591 232432 DEBUG oslo_concurrency.lockutils [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.592 232432 DEBUG oslo_concurrency.lockutils [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.592 232432 DEBUG oslo_concurrency.lockutils [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.592 232432 DEBUG nova.compute.manager [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-unplugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.592 232432 WARNING nova.compute.manager [req-6879142d-920d-44e9-8674-e74939349c8c req-8c2b81e2-f5ef-4740-8b2f-f5c020444aa9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-unplugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 for instance with vm_state active and task_state None.
Nov 29 08:56:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.759 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.760 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.760 232432 DEBUG nova.network.neutron [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.963 232432 DEBUG nova.compute.manager [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-deleted-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.963 232432 INFO nova.compute.manager [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Neutron deleted interface 149b5dbf-88b0-4bb5-b415-a02c50d7bf87; detaching it from the instance and deleting it from the info cache
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.964 232432 DEBUG nova.network.neutron [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:56:15 compute-2 nova_compute[232428]: 2025-11-29 08:56:15.987 232432 DEBUG nova.objects.instance [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'system_metadata' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.013 232432 DEBUG nova.objects.instance [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'flavor' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:56:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.043 232432 DEBUG nova.virt.libvirt.vif [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.043 232432 DEBUG nova.network.os_vif_util [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.045 232432 DEBUG nova.network.os_vif_util [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.048 232432 DEBUG nova.virt.libvirt.guest [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.053 232432 DEBUG nova.virt.libvirt.guest [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface>not found in domain: <domain type='kvm' id='100'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <name>instance-000000cd</name>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <uuid>2c409efc-d2fd-4ab1-813e-cb64784e0e69</uuid>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:56:14</nova:creationTime>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <system>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='serial'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='uuid'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </system>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <os>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </os>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <features>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </features>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk' index='2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config' index='1'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:a0:8e:13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='tap4bfded6b-48'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </target>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </console>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <video>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </video>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c761,c763</label>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c761,c763</imagelabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]: </domain>
Nov 29 08:56:16 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.054 232432 DEBUG nova.virt.libvirt.guest [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.059 232432 DEBUG nova.virt.libvirt.guest [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:c7:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap149b5dbf-88"/></interface>not found in domain: <domain type='kvm' id='100'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <name>instance-000000cd</name>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <uuid>2c409efc-d2fd-4ab1-813e-cb64784e0e69</uuid>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:56:14</nova:creationTime>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <memory unit='KiB'>131072</memory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <vcpu placement='static'>1</vcpu>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <resource>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <partition>/machine</partition>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </resource>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <sysinfo type='smbios'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <system>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='manufacturer'>RDO</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='product'>OpenStack Compute</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='serial'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='uuid'>2c409efc-d2fd-4ab1-813e-cb64784e0e69</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <entry name='family'>Virtual Machine</entry>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </system>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <os>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <boot dev='hd'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <smbios mode='sysinfo'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </os>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <features>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <vmcoreinfo state='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </features>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <cpu mode='custom' match='exact' check='full'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <model fallback='forbid'>Nehalem</model>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='x2apic'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='hypervisor'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <feature policy='require' name='vme'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <clock offset='utc'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='pit' tickpolicy='delay'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <timer name='hpet' present='no'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_poweroff>destroy</on_poweroff>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_reboot>restart</on_reboot>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <on_crash>destroy</on_crash>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <disk type='network' device='disk'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk' index='2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='vda' bus='virtio'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='virtio-disk0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <disk type='network' device='cdrom'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='qemu' type='raw' cache='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <auth username='openstack'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source protocol='rbd' name='vms/2c409efc-d2fd-4ab1-813e-cb64784e0e69_disk.config' index='1'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.100' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.102' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <host name='192.168.122.101' port='6789'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </source>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='sda' bus='sata'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <readonly/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='sata0-0-0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='0' model='pcie-root'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pcie.0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='1' port='0x10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='2' port='0x11'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='3' port='0x12'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='4' port='0x13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='5' port='0x14'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='6' port='0x15'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='7' port='0x16'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='8' port='0x17'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.8'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='9' port='0x18'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.9'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='10' port='0x19'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='11' port='0x1a'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.11'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='12' port='0x1b'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.12'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='13' port='0x1c'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='14' port='0x1d'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.14'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='15' port='0x1e'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.15'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='16' port='0x1f'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.16'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='17' port='0x20'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.17'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='18' port='0x21'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.18'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='19' port='0x22'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.19'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='20' port='0x23'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.20'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='21' port='0x24'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.21'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='22' port='0x25'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.22'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='23' port='0x26'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.23'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='24' port='0x27'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.24'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-root-port'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target chassis='25' port='0x28'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.25'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model name='pcie-pci-bridge'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='pci.26'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='usb'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <controller type='sata' index='0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='ide'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </controller>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <interface type='ethernet'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <mac address='fa:16:3e:a0:8e:13'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target dev='tap4bfded6b-48'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model type='virtio'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <driver name='vhost' rx_queue_size='512'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <mtu size='1442'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='net0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <serial type='pty'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target type='isa-serial' port='0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:         <model name='isa-serial'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       </target>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <console type='pty' tty='/dev/pts/0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <source path='/dev/pts/0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <log file='/var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69/console.log' append='off'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <target type='serial' port='0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='serial0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </console>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='tablet' bus='usb'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='usb' bus='0' port='1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='mouse' bus='ps2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input1'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <input type='keyboard' bus='ps2'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='input2'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </input>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <listen type='address' address='::0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </graphics>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <audio id='1' type='none'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <video>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <model type='virtio' heads='1' primary='yes'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='video0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </video>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <watchdog model='itco' action='reset'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='watchdog0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </watchdog>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <memballoon model='virtio'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <stats period='10'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='balloon0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <rng model='virtio'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <backend model='random'>/dev/urandom</backend>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <alias name='rng0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <label>system_u:system_r:svirt_t:s0:c761,c763</label>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c761,c763</imagelabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <label>+107:+107</label>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <imagelabel>+107:+107</imagelabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </seclabel>
Nov 29 08:56:16 compute-2 nova_compute[232428]: </domain>
Nov 29 08:56:16 compute-2 nova_compute[232428]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.059 232432 WARNING nova.virt.libvirt.driver [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Detaching interface fa:16:3e:33:c7:f5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap149b5dbf-88' not found.
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.060 232432 DEBUG nova.virt.libvirt.vif [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.061 232432 DEBUG nova.network.os_vif_util [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "address": "fa:16:3e:33:c7:f5", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap149b5dbf-88", "ovs_interfaceid": "149b5dbf-88b0-4bb5-b415-a02c50d7bf87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.062 232432 DEBUG nova.network.os_vif_util [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.062 232432 DEBUG os_vif [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.064 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.065 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap149b5dbf-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.065 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.067 232432 INFO os_vif [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:c7:f5,bridge_name='br-int',has_traffic_filtering=True,id=149b5dbf-88b0-4bb5-b415-a02c50d7bf87,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap149b5dbf-88')
Nov 29 08:56:16 compute-2 nova_compute[232428]: 2025-11-29 08:56:16.068 232432 DEBUG nova.virt.libvirt.guest [req-4e7b470b-b04a-4f62-b4d2-8921667f3420 req-d80c2caa-2294-45a4-8a31-b1153d4b3fd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:name>tempest-TestNetworkBasicOps-server-200959221</nova:name>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:creationTime>2025-11-29 08:56:16</nova:creationTime>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:flavor name="m1.nano">
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:memory>128</nova:memory>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:disk>1</nova:disk>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:swap>0</nova:swap>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:vcpus>1</nova:vcpus>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:flavor>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:owner>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   <nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     <nova:port uuid="4bfded6b-4829-43d9-aed2-fc22d2a1bc63">
Nov 29 08:56:16 compute-2 nova_compute[232428]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:56:16 compute-2 nova_compute[232428]:     </nova:port>
Nov 29 08:56:16 compute-2 nova_compute[232428]:   </nova:ports>
Nov 29 08:56:16 compute-2 nova_compute[232428]: </nova:instance>
Nov 29 08:56:16 compute-2 nova_compute[232428]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 08:56:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:16 compute-2 ceph-mon[77138]: pgmap v3631: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 29 08:56:17 compute-2 sudo[332979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:17 compute-2 sudo[332979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:17 compute-2 sudo[332979]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:17 compute-2 sudo[333004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:17 compute-2 sudo[333004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:17 compute-2 sudo[333004]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.650 232432 INFO nova.network.neutron [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Port 149b5dbf-88b0-4bb5-b415-a02c50d7bf87 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.651 232432 DEBUG nova.network.neutron [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.666 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.687 232432 DEBUG oslo_concurrency.lockutils [None req-59ee354c-d270-4736-ab6e-b2ba44e17597 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "interface-2c409efc-d2fd-4ab1-813e-cb64784e0e69-149b5dbf-88b0-4bb5-b415-a02c50d7bf87" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.696 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.740 232432 DEBUG nova.compute.manager [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.741 232432 DEBUG oslo_concurrency.lockutils [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.742 232432 DEBUG oslo_concurrency.lockutils [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.742 232432 DEBUG oslo_concurrency.lockutils [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.743 232432 DEBUG nova.compute.manager [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.743 232432 WARNING nova.compute.manager [req-1a8c4c56-8724-4e9f-afe2-decba672ec6c req-df2332f5-2aa7-4e8a-8deb-e62f31264390 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-plugged-149b5dbf-88b0-4bb5-b415-a02c50d7bf87 for instance with vm_state active and task_state None.
Nov 29 08:56:17 compute-2 ovn_controller[134375]: 2025-11-29T08:56:17Z|00980|binding|INFO|Releasing lport dda73cdf-320d-41ae-b17d-5408980fdc26 from this chassis (sb_readonly=0)
Nov 29 08:56:17 compute-2 nova_compute[232428]: 2025-11-29 08:56:17.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:18 compute-2 ceph-mon[77138]: pgmap v3632: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.883 232432 DEBUG nova.compute.manager [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.884 232432 DEBUG nova.compute.manager [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing instance network info cache due to event network-changed-4bfded6b-4829-43d9-aed2-fc22d2a1bc63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.884 232432 DEBUG oslo_concurrency.lockutils [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.884 232432 DEBUG oslo_concurrency.lockutils [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.885 232432 DEBUG nova.network.neutron [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Refreshing network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.951 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.952 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.952 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.953 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.953 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.955 232432 INFO nova.compute.manager [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Terminating instance
Nov 29 08:56:18 compute-2 nova_compute[232428]: 2025-11-29 08:56:18.957 232432 DEBUG nova.compute.manager [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:56:19 compute-2 kernel: tap4bfded6b-48 (unregistering): left promiscuous mode
Nov 29 08:56:19 compute-2 NetworkManager[48993]: <info>  [1764406579.0237] device (tap4bfded6b-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.036 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 ovn_controller[134375]: 2025-11-29T08:56:19Z|00981|binding|INFO|Releasing lport 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 from this chassis (sb_readonly=0)
Nov 29 08:56:19 compute-2 ovn_controller[134375]: 2025-11-29T08:56:19Z|00982|binding|INFO|Setting lport 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 down in Southbound
Nov 29 08:56:19 compute-2 ovn_controller[134375]: 2025-11-29T08:56:19Z|00983|binding|INFO|Removing iface tap4bfded6b-48 ovn-installed in OVS
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.047 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:8e:13 10.100.0.3'], port_security=['fa:16:3e:a0:8e:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2c409efc-d2fd-4ab1-813e-cb64784e0e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96164d04-a7ec-4231-906f-66c0410686bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf54ccd2-3d8f-4c35-aaac-ee6de584f92a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=4bfded6b-4829-43d9-aed2-fc22d2a1bc63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.052 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63 in datapath 99ea6f62-0590-4a47-a4f0-69449e6d5084 unbound from our chassis
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.054 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99ea6f62-0590-4a47-a4f0-69449e6d5084, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.056 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[81031d4c-6eff-4f0d-9d1b-cf84335b4eec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.057 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084 namespace which is not needed anymore
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.079 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Nov 29 08:56:19 compute-2 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cd.scope: Consumed 22.723s CPU time.
Nov 29 08:56:19 compute-2 systemd-machined[194747]: Machine qemu-100-instance-000000cd terminated.
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.181 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.189 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.199 232432 INFO nova.virt.libvirt.driver [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Instance destroyed successfully.
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.200 232432 DEBUG nova.objects.instance [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 2c409efc-d2fd-4ab1-813e-cb64784e0e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.219 232432 DEBUG nova.virt.libvirt.vif [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-200959221',display_name='tempest-TestNetworkBasicOps-server-200959221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-200959221',id=205,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAwLa1oD9UnYaNFUS6pe2D5kaPr56X/pZKim69t+PIFGiaTeObjY5UFNb/1Uq3CrvB7gpQHox5rjm1ur1qOMUgqylgW2z5uTB6moLWrnO6nq3pk+tq5teXYVAZUTSga3HA==',key_name='tempest-TestNetworkBasicOps-1000889299',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-5ez6cb6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2c409efc-d2fd-4ab1-813e-cb64784e0e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.220 232432 DEBUG nova.network.os_vif_util [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.223 232432 DEBUG nova.network.os_vif_util [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.223 232432 DEBUG os_vif [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.226 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.227 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bfded6b-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.228 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.229 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.232 232432 INFO os_vif [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:8e:13,bridge_name='br-int',has_traffic_filtering=True,id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63,network=Network(99ea6f62-0590-4a47-a4f0-69449e6d5084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfded6b-48')
Nov 29 08:56:19 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [NOTICE]   (331251) : haproxy version is 2.8.14-c23fe91
Nov 29 08:56:19 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [NOTICE]   (331251) : path to executable is /usr/sbin/haproxy
Nov 29 08:56:19 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [WARNING]  (331251) : Exiting Master process...
Nov 29 08:56:19 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [ALERT]    (331251) : Current worker (331253) exited with code 143 (Terminated)
Nov 29 08:56:19 compute-2 neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084[331247]: [WARNING]  (331251) : All workers exited. Exiting... (0)
Nov 29 08:56:19 compute-2 systemd[1]: libpod-ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416.scope: Deactivated successfully.
Nov 29 08:56:19 compute-2 conmon[331247]: conmon ec757daf4675945dcdde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416.scope/container/memory.events
Nov 29 08:56:19 compute-2 podman[333059]: 2025-11-29 08:56:19.264071443 +0000 UTC m=+0.057847281 container died ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:56:19 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416-userdata-shm.mount: Deactivated successfully.
Nov 29 08:56:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-32206c62920229e0558bf73ba272a46b1ba6da9a0bb87a80b49c449b4f045012-merged.mount: Deactivated successfully.
Nov 29 08:56:19 compute-2 podman[333059]: 2025-11-29 08:56:19.309382562 +0000 UTC m=+0.103158410 container cleanup ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:56:19 compute-2 systemd[1]: libpod-conmon-ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416.scope: Deactivated successfully.
Nov 29 08:56:19 compute-2 podman[333112]: 2025-11-29 08:56:19.397222754 +0000 UTC m=+0.055136346 container remove ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.407 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7e94c729-03ab-460c-85cb-3a6c79f3e350]: (4, ('Sat Nov 29 08:56:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084 (ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416)\nec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416\nSat Nov 29 08:56:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084 (ec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416)\nec757daf4675945dcddeb2d35f4ad48b5bd0f009986512a855f6b6f095e2f416\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.409 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa53bb1-09c1-49ee-bc43-44dd339ef9bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.413 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99ea6f62-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.416 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 kernel: tap99ea6f62-00: left promiscuous mode
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.421 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37212165-736e-4ff1-b93f-34cd78ffd42b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.443 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3947c57e-8e32-417a-bd6b-e0bf003ede33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.445 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dffcffc2-9040-491e-ae8e-f8834b311241]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.476 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1381a59f-4488-4494-84d3-ffa55d31644c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940645, 'reachable_time': 27550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333127, 'error': None, 'target': 'ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.480 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99ea6f62-0590-4a47-a4f0-69449e6d5084 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:56:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:56:19.480 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[843da5a8-0f26-4f3b-86af-56c3e2f2ec9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:56:19 compute-2 systemd[1]: run-netns-ovnmeta\x2d99ea6f62\x2d0590\x2d4a47\x2da4f0\x2d69449e6d5084.mount: Deactivated successfully.
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.616 232432 DEBUG nova.compute.manager [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-unplugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.617 232432 DEBUG oslo_concurrency.lockutils [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.617 232432 DEBUG oslo_concurrency.lockutils [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.617 232432 DEBUG oslo_concurrency.lockutils [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.618 232432 DEBUG nova.compute.manager [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-unplugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.618 232432 DEBUG nova.compute.manager [req-f3f7dcec-557a-4258-a734-ac5ff4756668 req-9b9eafac-36e1-4bcc-a8e5-847178782256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-unplugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:56:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.708 232432 INFO nova.virt.libvirt.driver [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Deleting instance files /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69_del
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.709 232432 INFO nova.virt.libvirt.driver [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Deletion of /var/lib/nova/instances/2c409efc-d2fd-4ab1-813e-cb64784e0e69_del complete
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.768 232432 INFO nova.compute.manager [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.769 232432 DEBUG oslo.service.loopingcall [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.770 232432 DEBUG nova.compute.manager [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:56:19 compute-2 nova_compute[232428]: 2025-11-29 08:56:19.770 232432 DEBUG nova.network.neutron [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:56:20 compute-2 podman[333130]: 2025-11-29 08:56:20.689400932 +0000 UTC m=+0.085097248 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.726 232432 DEBUG nova.network.neutron [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.750 232432 INFO nova.compute.manager [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Took 0.98 seconds to deallocate network for instance.
Nov 29 08:56:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.785 232432 DEBUG nova.network.neutron [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updated VIF entry in instance network info cache for port 4bfded6b-4829-43d9-aed2-fc22d2a1bc63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.786 232432 DEBUG nova.network.neutron [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [{"id": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "address": "fa:16:3e:a0:8e:13", "network": {"id": "99ea6f62-0590-4a47-a4f0-69449e6d5084", "bridge": "br-int", "label": "tempest-network-smoke--383503128", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfded6b-48", "ovs_interfaceid": "4bfded6b-4829-43d9-aed2-fc22d2a1bc63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.817 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.817 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.819 232432 DEBUG oslo_concurrency.lockutils [req-9182d591-caf3-4af7-96cc-fdc47465158d req-c4130f69-a644-41c6-8ff0-0081c9ed4aed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2c409efc-d2fd-4ab1-813e-cb64784e0e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:56:20 compute-2 ceph-mon[77138]: pgmap v3633: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 29 08:56:20 compute-2 nova_compute[232428]: 2025-11-29 08:56:20.890 232432 DEBUG oslo_concurrency.processutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.033 232432 DEBUG nova.compute.manager [req-23948f98-8ad7-4cf7-82e7-341f6e071cfa req-44d7a0c6-d082-410a-9ef6-dfbe3a83af96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-deleted-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.034 232432 INFO nova.compute.manager [req-23948f98-8ad7-4cf7-82e7-341f6e071cfa req-44d7a0c6-d082-410a-9ef6-dfbe3a83af96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Neutron deleted interface 4bfded6b-4829-43d9-aed2-fc22d2a1bc63; detaching it from the instance and deleting it from the info cache
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.035 232432 DEBUG nova.network.neutron [req-23948f98-8ad7-4cf7-82e7-341f6e071cfa req-44d7a0c6-d082-410a-9ef6-dfbe3a83af96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:56:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.076 232432 DEBUG nova.compute.manager [req-23948f98-8ad7-4cf7-82e7-341f6e071cfa req-44d7a0c6-d082-410a-9ef6-dfbe3a83af96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Detach interface failed, port_id=4bfded6b-4829-43d9-aed2-fc22d2a1bc63, reason: Instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:56:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:56:21 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1533711909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.407 232432 DEBUG oslo_concurrency.processutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.417 232432 DEBUG nova.compute.provider_tree [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.433 232432 DEBUG nova.scheduler.client.report [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.461 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.490 232432 INFO nova.scheduler.client.report [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 2c409efc-d2fd-4ab1-813e-cb64784e0e69
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.560 232432 DEBUG oslo_concurrency.lockutils [None req-174b1da5-52a4-4cf4-a0e7-2d6dd406835c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:21.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.725 232432 DEBUG nova.compute.manager [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.726 232432 DEBUG oslo_concurrency.lockutils [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.726 232432 DEBUG oslo_concurrency.lockutils [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.727 232432 DEBUG oslo_concurrency.lockutils [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2c409efc-d2fd-4ab1-813e-cb64784e0e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.727 232432 DEBUG nova.compute.manager [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] No waiting events found dispatching network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:56:21 compute-2 nova_compute[232428]: 2025-11-29 08:56:21.727 232432 WARNING nova.compute.manager [req-6880a5d1-7521-41c3-aa40-1dd0b96dcd9d req-5912d5e5-7dbc-431c-b184-0e2971aa35c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Received unexpected event network-vif-plugged-4bfded6b-4829-43d9-aed2-fc22d2a1bc63 for instance with vm_state deleted and task_state None.
Nov 29 08:56:21 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1533711909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:22 compute-2 nova_compute[232428]: 2025-11-29 08:56:22.699 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:56:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:56:22 compute-2 ceph-mon[77138]: pgmap v3634: 305 pgs: 305 active+clean; 134 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 4.4 KiB/s wr, 43 op/s
Nov 29 08:56:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:23.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:24 compute-2 nova_compute[232428]: 2025-11-29 08:56:24.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:24 compute-2 nova_compute[232428]: 2025-11-29 08:56:24.230 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:24 compute-2 ceph-mon[77138]: pgmap v3635: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 08:56:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:26.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:26 compute-2 ceph-mon[77138]: pgmap v3636: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 08:56:27 compute-2 nova_compute[232428]: 2025-11-29 08:56:27.702 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.926747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587926814, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2405, "num_deletes": 254, "total_data_size": 5911838, "memory_usage": 6006128, "flush_reason": "Manual Compaction"}
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587959302, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3857718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78196, "largest_seqno": 80596, "table_properties": {"data_size": 3847777, "index_size": 6370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20040, "raw_average_key_size": 20, "raw_value_size": 3828203, "raw_average_value_size": 3930, "num_data_blocks": 277, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406377, "oldest_key_time": 1764406377, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 32723 microseconds, and 17995 cpu microseconds.
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.959447) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3857718 bytes OK
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.959498) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.961581) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.961605) EVENT_LOG_v1 {"time_micros": 1764406587961598, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.961629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5901373, prev total WAL file size 5901373, number of live WAL files 2.
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.964491) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3767KB)], [159(10MB)]
Nov 29 08:56:27 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587964620, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 14645326, "oldest_snapshot_seqno": -1}
Nov 29 08:56:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:56:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2440216272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:56:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:56:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2440216272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10586 keys, 12705544 bytes, temperature: kUnknown
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588096197, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12705544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12638298, "index_size": 39677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 279824, "raw_average_key_size": 26, "raw_value_size": 12453923, "raw_average_value_size": 1176, "num_data_blocks": 1502, "num_entries": 10586, "num_filter_entries": 10586, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.096649) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12705544 bytes
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.099195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.2 rd, 96.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 11114, records dropped: 528 output_compression: NoCompression
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.099225) EVENT_LOG_v1 {"time_micros": 1764406588099212, "job": 102, "event": "compaction_finished", "compaction_time_micros": 131758, "compaction_time_cpu_micros": 64510, "output_level": 6, "num_output_files": 1, "total_output_size": 12705544, "num_input_records": 11114, "num_output_records": 10586, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588100689, "job": 102, "event": "table_file_deletion", "file_number": 161}
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588104682, "job": 102, "event": "table_file_deletion", "file_number": 159}
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:27.964249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.104777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.104783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.104786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.104789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:56:28.104791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:56:28 compute-2 nova_compute[232428]: 2025-11-29 08:56:28.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:28 compute-2 nova_compute[232428]: 2025-11-29 08:56:28.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:28 compute-2 nova_compute[232428]: 2025-11-29 08:56:28.657 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:28.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:28 compute-2 ceph-mon[77138]: pgmap v3637: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:56:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2440216272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:56:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2440216272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:56:29 compute-2 nova_compute[232428]: 2025-11-29 08:56:29.233 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:29 compute-2 sudo[333179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:29 compute-2 sudo[333179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:29 compute-2 sudo[333179]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:29 compute-2 sudo[333204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:56:29 compute-2 sudo[333204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:29 compute-2 sudo[333204]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:29 compute-2 sudo[333229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:29 compute-2 sudo[333229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:29 compute-2 sudo[333229]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:29 compute-2 sudo[333254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:56:29 compute-2 sudo[333254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:30 compute-2 sudo[333254]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:30 compute-2 ceph-mon[77138]: pgmap v3638: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:56:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:31 compute-2 nova_compute[232428]: 2025-11-29 08:56:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:31 compute-2 nova_compute[232428]: 2025-11-29 08:56:31.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:56:31 compute-2 nova_compute[232428]: 2025-11-29 08:56:31.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:56:31 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:56:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:32 compute-2 ceph-mon[77138]: pgmap v3639: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:56:32 compute-2 nova_compute[232428]: 2025-11-29 08:56:32.704 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:32.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:33 compute-2 podman[333312]: 2025-11-29 08:56:33.681532158 +0000 UTC m=+0.077942605 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:56:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:33 compute-2 nova_compute[232428]: 2025-11-29 08:56:33.800 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:56:33 compute-2 nova_compute[232428]: 2025-11-29 08:56:33.800 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:34 compute-2 nova_compute[232428]: 2025-11-29 08:56:34.195 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406579.1940527, 2c409efc-d2fd-4ab1-813e-cb64784e0e69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:56:34 compute-2 nova_compute[232428]: 2025-11-29 08:56:34.196 232432 INFO nova.compute.manager [-] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] VM Stopped (Lifecycle Event)
Nov 29 08:56:34 compute-2 nova_compute[232428]: 2025-11-29 08:56:34.236 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:34 compute-2 sshd-session[333333]: Invalid user banx from 45.148.10.240 port 34180
Nov 29 08:56:34 compute-2 sshd-session[333333]: Connection closed by invalid user banx 45.148.10.240 port 34180 [preauth]
Nov 29 08:56:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:34 compute-2 nova_compute[232428]: 2025-11-29 08:56:34.790 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:34 compute-2 ceph-mon[77138]: pgmap v3640: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Nov 29 08:56:35 compute-2 nova_compute[232428]: 2025-11-29 08:56:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.373 232432 DEBUG nova.compute.manager [None req-bf611a0b-b0bb-4487-b691-7ca4e3191846 - - - - - -] [instance: 2c409efc-d2fd-4ab1-813e-cb64784e0e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.422 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.422 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.423 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.423 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.424 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:56:36 compute-2 sudo[333356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:36 compute-2 sudo[333356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:36 compute-2 sudo[333356]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:36 compute-2 sudo[333381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:56:36 compute-2 sudo[333381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:36 compute-2 sudo[333381]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:36 compute-2 ceph-mon[77138]: pgmap v3641: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 255 B/s wr, 1 op/s
Nov 29 08:56:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:56:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:56:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:56:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3654649478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:36 compute-2 nova_compute[232428]: 2025-11-29 08:56:36.871 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.085 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.087 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4151MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.087 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.088 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.190 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.190 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.207 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:56:37 compute-2 sudo[333429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:37 compute-2 sudo[333429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:37 compute-2 sudo[333429]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:56:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3491508050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.669 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.674 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:56:37 compute-2 sudo[333454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:37 compute-2 sudo[333454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:37 compute-2 sudo[333454]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:37 compute-2 nova_compute[232428]: 2025-11-29 08:56:37.705 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3654649478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3491508050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:39 compute-2 ceph-mon[77138]: pgmap v3642: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:39 compute-2 nova_compute[232428]: 2025-11-29 08:56:39.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:40 compute-2 ceph-mon[77138]: pgmap v3643: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:40.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:41 compute-2 nova_compute[232428]: 2025-11-29 08:56:41.233 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:56:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:42 compute-2 nova_compute[232428]: 2025-11-29 08:56:42.403 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:56:42 compute-2 nova_compute[232428]: 2025-11-29 08:56:42.404 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:56:42 compute-2 ceph-mon[77138]: pgmap v3644: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2150829522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:42 compute-2 nova_compute[232428]: 2025-11-29 08:56:42.706 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3402153616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:43.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:44 compute-2 nova_compute[232428]: 2025-11-29 08:56:44.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:44 compute-2 ceph-mon[77138]: pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:44.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:45 compute-2 podman[333485]: 2025-11-29 08:56:45.741284968 +0000 UTC m=+0.136020911 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:56:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:46 compute-2 nova_compute[232428]: 2025-11-29 08:56:46.405 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:46 compute-2 nova_compute[232428]: 2025-11-29 08:56:46.406 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:56:46 compute-2 nova_compute[232428]: 2025-11-29 08:56:46.406 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:56:46 compute-2 ceph-mon[77138]: pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:46.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:47 compute-2 nova_compute[232428]: 2025-11-29 08:56:47.709 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:56:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:56:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/560779530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:48.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:49 compute-2 ceph-mon[77138]: pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/812855005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:56:49 compute-2 nova_compute[232428]: 2025-11-29 08:56:49.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:50 compute-2 ceph-mon[77138]: pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:51 compute-2 podman[333516]: 2025-11-29 08:56:51.69548799 +0000 UTC m=+0.084340274 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:56:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:52 compute-2 ceph-mon[77138]: pgmap v3649: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:52 compute-2 nova_compute[232428]: 2025-11-29 08:56:52.712 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:56:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:52.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:56:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:54 compute-2 nova_compute[232428]: 2025-11-29 08:56:54.246 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:55 compute-2 ceph-mon[77138]: pgmap v3650: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:56:56 compute-2 ceph-mon[77138]: pgmap v3651: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:57 compute-2 nova_compute[232428]: 2025-11-29 08:56:57.715 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:56:57 compute-2 sudo[333539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:57 compute-2 sudo[333539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:57 compute-2 sudo[333539]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:57 compute-2 sudo[333564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:56:57 compute-2 sudo[333564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:56:57 compute-2 sudo[333564]: pam_unix(sudo:session): session closed for user root
Nov 29 08:56:58 compute-2 ceph-mon[77138]: pgmap v3652: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:56:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:56:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:56:59 compute-2 nova_compute[232428]: 2025-11-29 08:56:59.250 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:56:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:56:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:56:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:59.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:00 compute-2 ceph-mon[77138]: pgmap v3653: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:00.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:02 compute-2 nova_compute[232428]: 2025-11-29 08:57:02.718 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:02 compute-2 ceph-mon[77138]: pgmap v3654: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:03.360 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:03.360 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:57:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:03.361 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:57:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:04 compute-2 nova_compute[232428]: 2025-11-29 08:57:04.253 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:04 compute-2 podman[333593]: 2025-11-29 08:57:04.700108777 +0000 UTC m=+0.094349365 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 08:57:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:04.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:04 compute-2 ceph-mon[77138]: pgmap v3655: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:05.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:06.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:06 compute-2 ceph-mon[77138]: pgmap v3656: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.202 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.203 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.204 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.205 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.205 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.206 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.229 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.240 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.240 232432 WARNING nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.241 232432 WARNING nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.241 232432 WARNING nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.242 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Removable base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.242 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.243 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.243 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5a014232164664518828c9a902557a9bd93a955f
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.243 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.244 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.244 232432 DEBUG nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.245 232432 INFO nova.virt.libvirt.imagecache [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 29 08:57:07 compute-2 nova_compute[232428]: 2025-11-29 08:57:07.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:07.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:08 compute-2 ceph-mon[77138]: pgmap v3657: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:08.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 08:57:09 compute-2 nova_compute[232428]: 2025-11-29 08:57:09.257 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:10.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:11 compute-2 ceph-mon[77138]: pgmap v3658: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:12 compute-2 ceph-mon[77138]: pgmap v3659: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:12 compute-2 nova_compute[232428]: 2025-11-29 08:57:12.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:12.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:14 compute-2 nova_compute[232428]: 2025-11-29 08:57:14.261 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:14 compute-2 ceph-mon[77138]: pgmap v3660: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:16 compute-2 ceph-mon[77138]: pgmap v3661: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2027208412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:16 compute-2 podman[333619]: 2025-11-29 08:57:16.729185341 +0000 UTC m=+0.126026350 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:57:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:16.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:17 compute-2 nova_compute[232428]: 2025-11-29 08:57:17.727 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:17 compute-2 sudo[333645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:17 compute-2 sudo[333645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:17 compute-2 sudo[333645]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:18 compute-2 sudo[333671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:18 compute-2 sudo[333671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:18 compute-2 sudo[333671]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:18 compute-2 ceph-mon[77138]: pgmap v3662: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:57:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:19 compute-2 nova_compute[232428]: 2025-11-29 08:57:19.264 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:20 compute-2 ceph-mon[77138]: pgmap v3663: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 134 KiB/s wr, 11 op/s
Nov 29 08:57:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:21 compute-2 ovn_controller[134375]: 2025-11-29T08:57:21Z|00984|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 08:57:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:22 compute-2 ceph-mon[77138]: pgmap v3664: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:22 compute-2 podman[333698]: 2025-11-29 08:57:22.668033955 +0000 UTC m=+0.075493099 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:57:22 compute-2 nova_compute[232428]: 2025-11-29 08:57:22.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:23 compute-2 nova_compute[232428]: 2025-11-29 08:57:23.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3857322970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:57:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:23.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:24 compute-2 nova_compute[232428]: 2025-11-29 08:57:24.240 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:24 compute-2 nova_compute[232428]: 2025-11-29 08:57:24.265 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:24 compute-2 ceph-mon[77138]: pgmap v3665: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2076290668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:57:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:26 compute-2 ceph-mon[77138]: pgmap v3666: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:27 compute-2 nova_compute[232428]: 2025-11-29 08:57:27.733 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:27.752 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:57:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:27.753 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:57:27 compute-2 nova_compute[232428]: 2025-11-29 08:57:27.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:57:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465122833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:57:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:57:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465122833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:57:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:28 compute-2 ceph-mon[77138]: pgmap v3667: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2465122833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:57:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2465122833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:57:29 compute-2 nova_compute[232428]: 2025-11-29 08:57:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:29 compute-2 nova_compute[232428]: 2025-11-29 08:57:29.268 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:57:29.755 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:57:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:30 compute-2 ceph-mon[77138]: pgmap v3668: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:31 compute-2 nova_compute[232428]: 2025-11-29 08:57:31.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:32 compute-2 nova_compute[232428]: 2025-11-29 08:57:32.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:32 compute-2 nova_compute[232428]: 2025-11-29 08:57:32.735 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:32.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:32 compute-2 ceph-mon[77138]: pgmap v3669: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 81 op/s
Nov 29 08:57:33 compute-2 nova_compute[232428]: 2025-11-29 08:57:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:33 compute-2 nova_compute[232428]: 2025-11-29 08:57:33.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:57:33 compute-2 nova_compute[232428]: 2025-11-29 08:57:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:57:33 compute-2 nova_compute[232428]: 2025-11-29 08:57:33.225 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:57:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:34 compute-2 nova_compute[232428]: 2025-11-29 08:57:34.271 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:34 compute-2 ceph-mon[77138]: pgmap v3670: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:57:35 compute-2 podman[333724]: 2025-11-29 08:57:35.705240413 +0000 UTC m=+0.098663859 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:57:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/562004726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:36 compute-2 nova_compute[232428]: 2025-11-29 08:57:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:36 compute-2 sudo[333746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:36 compute-2 sudo[333746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:36 compute-2 sudo[333746]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:37 compute-2 sudo[333771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:57:37 compute-2 sudo[333771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:37 compute-2 sudo[333771]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:37 compute-2 sudo[333796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:37 compute-2 sudo[333796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:37 compute-2 sudo[333796]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:37 compute-2 ceph-mon[77138]: pgmap v3671: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:37 compute-2 sudo[333821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:57:37 compute-2 sudo[333821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.260 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.261 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.262 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.262 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.263 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:57:37 compute-2 sudo[333821]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.737 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:57:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2602468944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:37 compute-2 nova_compute[232428]: 2025-11-29 08:57:37.760 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:57:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:37 compute-2 sudo[333888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:37 compute-2 sudo[333888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:37 compute-2 sudo[333888]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.041 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.045 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4153MB free_disk=20.986629486083984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.045 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.046 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:57:38 compute-2 sudo[333914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:57:38 compute-2 sudo[333914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:38 compute-2 sudo[333914]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 sudo[333939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:38 compute-2 sudo[333942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:38 compute-2 sudo[333942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:38 compute-2 sudo[333939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:38 compute-2 sudo[333942]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 sudo[333939]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 sudo[333990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:38 compute-2 sudo[333990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:38 compute-2 sudo[333989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:57:38 compute-2 sudo[333990]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 sudo[333989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.294 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.295 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.313 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:57:38 compute-2 ceph-mon[77138]: pgmap v3672: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 29 08:57:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:57:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2602468944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:57:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:57:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 08:57:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:57:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/153582622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.801 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.810 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:57:38 compute-2 sudo[333989]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.830 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.832 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:57:38 compute-2 nova_compute[232428]: 2025-11-29 08:57:38.833 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:57:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:39 compute-2 nova_compute[232428]: 2025-11-29 08:57:39.275 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/153582622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:57:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3955622861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:39.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:40.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:40 compute-2 ceph-mon[77138]: pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 29 08:57:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1599731928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:41.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:42 compute-2 ceph-mon[77138]: pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 99 op/s
Nov 29 08:57:42 compute-2 nova_compute[232428]: 2025-11-29 08:57:42.740 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:42 compute-2 nova_compute[232428]: 2025-11-29 08:57:42.824 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:42.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:42 compute-2 nova_compute[232428]: 2025-11-29 08:57:42.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:42 compute-2 nova_compute[232428]: 2025-11-29 08:57:42.864 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:57:42 compute-2 nova_compute[232428]: 2025-11-29 08:57:42.864 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:57:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:43.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:44 compute-2 nova_compute[232428]: 2025-11-29 08:57:44.279 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:44 compute-2 ceph-mon[77138]: pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 244 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 29 08:57:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:44.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3905176243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:57:45 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:57:45 compute-2 sudo[334095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:45 compute-2 sudo[334095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:45 compute-2 sudo[334095]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:45 compute-2 sudo[334120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:57:45 compute-2 sudo[334120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:45 compute-2 sudo[334120]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:45.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:46 compute-2 ceph-mon[77138]: pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 29 08:57:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:47 compute-2 podman[334146]: 2025-11-29 08:57:47.727095194 +0000 UTC m=+0.121913432 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:57:47 compute-2 nova_compute[232428]: 2025-11-29 08:57:47.741 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:48 compute-2 ceph-mon[77138]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 08:57:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/736689315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:49 compute-2 nova_compute[232428]: 2025-11-29 08:57:49.282 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1213015958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:57:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:49.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:50 compute-2 ceph-mon[77138]: pgmap v3678: 305 pgs: 305 active+clean; 132 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 293 KiB/s wr, 14 op/s
Nov 29 08:57:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:50.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3221452569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:57:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:52 compute-2 nova_compute[232428]: 2025-11-29 08:57:52.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:52 compute-2 ceph-mon[77138]: pgmap v3679: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4014826015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:57:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:52.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:53 compute-2 podman[334177]: 2025-11-29 08:57:53.687565201 +0000 UTC m=+0.082802666 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:57:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:53.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:54 compute-2 nova_compute[232428]: 2025-11-29 08:57:54.286 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:54 compute-2 ceph-mon[77138]: pgmap v3680: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:57:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:57:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:57:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:57:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:57:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:56.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:57:56 compute-2 ceph-mon[77138]: pgmap v3681: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 08:57:57 compute-2 nova_compute[232428]: 2025-11-29 08:57:57.748 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:58 compute-2 sudo[334199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:58 compute-2 sudo[334199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:58 compute-2 sudo[334199]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:58 compute-2 sudo[334224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:57:58 compute-2 sudo[334224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:57:58 compute-2 sudo[334224]: pam_unix(sudo:session): session closed for user root
Nov 29 08:57:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:57:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:57:58 compute-2 ceph-mon[77138]: pgmap v3682: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 08:57:59 compute-2 nova_compute[232428]: 2025-11-29 08:57:59.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:57:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:57:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 08:57:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 08:58:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:00 compute-2 ceph-mon[77138]: pgmap v3683: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 374 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 29 08:58:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:02 compute-2 nova_compute[232428]: 2025-11-29 08:58:02.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:02 compute-2 ceph-mon[77138]: pgmap v3684: 305 pgs: 305 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Nov 29 08:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:03.362 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:03.362 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:58:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:03.363 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:58:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:04 compute-2 ceph-mon[77138]: pgmap v3685: 305 pgs: 305 active+clean; 126 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 29 08:58:04 compute-2 nova_compute[232428]: 2025-11-29 08:58:04.296 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2371015855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:06 compute-2 ceph-mon[77138]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 29 08:58:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:06.246 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:58:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:06.247 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:58:06 compute-2 nova_compute[232428]: 2025-11-29 08:58:06.247 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:06 compute-2 podman[334253]: 2025-11-29 08:58:06.714752218 +0000 UTC m=+0.111699585 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:58:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:07 compute-2 nova_compute[232428]: 2025-11-29 08:58:07.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:07.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:08 compute-2 ceph-mon[77138]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Nov 29 08:58:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:09 compute-2 nova_compute[232428]: 2025-11-29 08:58:09.300 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:09.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:10 compute-2 ceph-mon[77138]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Nov 29 08:58:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:58:12.250 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:58:12 compute-2 ceph-mon[77138]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 79 op/s
Nov 29 08:58:12 compute-2 nova_compute[232428]: 2025-11-29 08:58:12.759 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:13 compute-2 nova_compute[232428]: 2025-11-29 08:58:13.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:13 compute-2 nova_compute[232428]: 2025-11-29 08:58:13.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:58:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:13.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:14 compute-2 nova_compute[232428]: 2025-11-29 08:58:14.303 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:14 compute-2 ceph-mon[77138]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Nov 29 08:58:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:14.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:15.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:16 compute-2 ceph-mon[77138]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Nov 29 08:58:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:16.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:17 compute-2 nova_compute[232428]: 2025-11-29 08:58:17.231 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:17 compute-2 nova_compute[232428]: 2025-11-29 08:58:17.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:58:17 compute-2 nova_compute[232428]: 2025-11-29 08:58:17.259 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:58:17 compute-2 nova_compute[232428]: 2025-11-29 08:58:17.762 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:17.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:18 compute-2 ceph-mon[77138]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:18 compute-2 sudo[334281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:18 compute-2 sudo[334281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:18 compute-2 sudo[334281]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:18 compute-2 sudo[334316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:18 compute-2 sudo[334316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:18 compute-2 sudo[334316]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:18 compute-2 podman[334302]: 2025-11-29 08:58:18.752649926 +0000 UTC m=+0.146959542 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:58:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:19 compute-2 nova_compute[232428]: 2025-11-29 08:58:19.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:19.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:20 compute-2 ceph-mon[77138]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:20.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:22 compute-2 ceph-mon[77138]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:22 compute-2 nova_compute[232428]: 2025-11-29 08:58:22.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:24 compute-2 nova_compute[232428]: 2025-11-29 08:58:24.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:24 compute-2 ceph-mon[77138]: pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:24 compute-2 podman[334360]: 2025-11-29 08:58:24.710582664 +0000 UTC m=+0.099944471 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:58:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:25 compute-2 nova_compute[232428]: 2025-11-29 08:58:25.229 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:25.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:26 compute-2 ceph-mon[77138]: pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:27 compute-2 nova_compute[232428]: 2025-11-29 08:58:27.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:27.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:58:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/744981440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:58:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:58:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/744981440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:58:28 compute-2 ceph-mon[77138]: pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/744981440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:58:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/744981440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:58:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:29 compute-2 nova_compute[232428]: 2025-11-29 08:58:29.313 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:29.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:30 compute-2 nova_compute[232428]: 2025-11-29 08:58:30.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:30 compute-2 ceph-mon[77138]: pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:32 compute-2 nova_compute[232428]: 2025-11-29 08:58:32.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:32 compute-2 ceph-mon[77138]: pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.216 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.217 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:33 compute-2 nova_compute[232428]: 2025-11-29 08:58:33.578 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:33.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:34 compute-2 nova_compute[232428]: 2025-11-29 08:58:34.316 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:34 compute-2 ceph-mon[77138]: pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:58:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/851681182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:34.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:36 compute-2 nova_compute[232428]: 2025-11-29 08:58:36.222 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:36 compute-2 ceph-mon[77138]: pgmap v3701: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 618 KiB/s wr, 1 op/s
Nov 29 08:58:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:36.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:37 compute-2 podman[334385]: 2025-11-29 08:58:37.675192438 +0000 UTC m=+0.078464662 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 08:58:37 compute-2 nova_compute[232428]: 2025-11-29 08:58:37.773 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.237 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.237 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.237 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.238 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:58:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:58:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3841887080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.767 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:58:38 compute-2 sudo[334425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:38 compute-2 sudo[334425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:38 compute-2 sudo[334425]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:38 compute-2 sudo[334452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:38 compute-2 sudo[334452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:38 compute-2 sudo[334452]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.966 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.967 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4190MB free_disk=20.981201171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.968 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:58:38 compute-2 nova_compute[232428]: 2025-11-29 08:58:38.968 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.171 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.221 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.318 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:39 compute-2 ceph-mon[77138]: pgmap v3702: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 618 KiB/s wr, 1 op/s
Nov 29 08:58:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/266316634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:58:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3841887080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:58:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1791433764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.693 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.700 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.722 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.723 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:58:39 compute-2 nova_compute[232428]: 2025-11-29 08:58:39.723 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:58:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/798511832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:58:40 compute-2 ceph-mon[77138]: pgmap v3703: 305 pgs: 305 active+clean; 156 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 08:58:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1791433764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2615626142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3274336590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:41.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:42 compute-2 sshd-session[334500]: Connection closed by authenticating user root 45.148.10.240 port 44400 [preauth]
Nov 29 08:58:42 compute-2 ceph-mon[77138]: pgmap v3704: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:58:42 compute-2 nova_compute[232428]: 2025-11-29 08:58:42.725 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:42 compute-2 nova_compute[232428]: 2025-11-29 08:58:42.726 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:58:42 compute-2 nova_compute[232428]: 2025-11-29 08:58:42.726 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:58:42 compute-2 nova_compute[232428]: 2025-11-29 08:58:42.776 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:44 compute-2 nova_compute[232428]: 2025-11-29 08:58:44.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:44 compute-2 ceph-mon[77138]: pgmap v3705: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 29 08:58:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:45 compute-2 sudo[334504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:45 compute-2 sudo[334504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:45 compute-2 sudo[334504]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:45 compute-2 sudo[334529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:58:45 compute-2 sudo[334529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:45 compute-2 sudo[334529]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:45.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:45 compute-2 sudo[334555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:45 compute-2 sudo[334555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:45 compute-2 sudo[334555]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:46 compute-2 sudo[334580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:58:46 compute-2 sudo[334580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:46 compute-2 ceph-mon[77138]: pgmap v3706: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 29 08:58:46 compute-2 sudo[334580]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:58:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:58:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:58:47 compute-2 nova_compute[232428]: 2025-11-29 08:58:47.779 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:48 compute-2 ceph-mon[77138]: pgmap v3707: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 81 op/s
Nov 29 08:58:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/155518144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:49 compute-2 nova_compute[232428]: 2025-11-29 08:58:49.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:49 compute-2 podman[334637]: 2025-11-29 08:58:49.772795223 +0000 UTC m=+0.162005910 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:58:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/695085484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:58:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:50 compute-2 ceph-mon[77138]: pgmap v3708: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 99 op/s
Nov 29 08:58:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:51.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.259886) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732259977, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 258, "total_data_size": 3859627, "memory_usage": 3928112, "flush_reason": "Manual Compaction"}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732279763, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2535139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80601, "largest_seqno": 82269, "table_properties": {"data_size": 2528172, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14735, "raw_average_key_size": 19, "raw_value_size": 2514096, "raw_average_value_size": 3392, "num_data_blocks": 177, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406588, "oldest_key_time": 1764406588, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 19941 microseconds, and 10515 cpu microseconds.
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.279826) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2535139 bytes OK
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.279853) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.281898) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.281927) EVENT_LOG_v1 {"time_micros": 1764406732281917, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.281952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3852014, prev total WAL file size 3852014, number of live WAL files 2.
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.284137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323734' seq:0, type:0; will stop at (end)
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2475KB)], [162(12MB)]
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732284204, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 15240683, "oldest_snapshot_seqno": -1}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10796 keys, 15100803 bytes, temperature: kUnknown
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732423248, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 15100803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15029568, "index_size": 43154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27013, "raw_key_size": 285204, "raw_average_key_size": 26, "raw_value_size": 14838947, "raw_average_value_size": 1374, "num_data_blocks": 1649, "num_entries": 10796, "num_filter_entries": 10796, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.423716) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 15100803 bytes
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.425663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.4 rd, 108.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 11327, records dropped: 531 output_compression: NoCompression
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.425694) EVENT_LOG_v1 {"time_micros": 1764406732425680, "job": 104, "event": "compaction_finished", "compaction_time_micros": 139275, "compaction_time_cpu_micros": 64697, "output_level": 6, "num_output_files": 1, "total_output_size": 15100803, "num_input_records": 11327, "num_output_records": 10796, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732427057, "job": 104, "event": "table_file_deletion", "file_number": 164}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732431591, "job": 104, "event": "table_file_deletion", "file_number": 162}
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.284037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:58:52 compute-2 nova_compute[232428]: 2025-11-29 08:58:52.781 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:52.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:53 compute-2 ceph-mon[77138]: pgmap v3709: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 518 KiB/s wr, 96 op/s
Nov 29 08:58:53 compute-2 sudo[334666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:53 compute-2 sudo[334666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:53 compute-2 sudo[334666]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:53 compute-2 sudo[334691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:58:53 compute-2 sudo[334691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:53 compute-2 sudo[334691]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:54 compute-2 nova_compute[232428]: 2025-11-29 08:58:54.331 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:58:54 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:58:54 compute-2 ceph-mon[77138]: pgmap v3710: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:58:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:55 compute-2 podman[334717]: 2025-11-29 08:58:55.696866716 +0000 UTC m=+0.091755734 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 08:58:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:58:56 compute-2 ceph-mon[77138]: pgmap v3711: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 585 KiB/s wr, 67 op/s
Nov 29 08:58:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:58:57 compute-2 nova_compute[232428]: 2025-11-29 08:58:57.784 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:57.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:58 compute-2 ceph-mon[77138]: pgmap v3712: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 613 KiB/s rd, 573 KiB/s wr, 28 op/s
Nov 29 08:58:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:58:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:58:59 compute-2 sudo[334739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:59 compute-2 sudo[334739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:59 compute-2 sudo[334739]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:59 compute-2 sudo[334764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:58:59 compute-2 sudo[334764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:58:59 compute-2 sudo[334764]: pam_unix(sudo:session): session closed for user root
Nov 29 08:58:59 compute-2 nova_compute[232428]: 2025-11-29 08:58:59.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:58:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:58:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:58:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:59.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:01 compute-2 ceph-mon[77138]: pgmap v3713: 305 pgs: 305 active+clean; 182 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Nov 29 08:59:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:02 compute-2 ceph-mon[77138]: pgmap v3714: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:59:02 compute-2 nova_compute[232428]: 2025-11-29 08:59:02.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:03.365 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:03.366 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:03.366 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:04.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:04 compute-2 nova_compute[232428]: 2025-11-29 08:59:04.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:04 compute-2 ceph-mon[77138]: pgmap v3715: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 08:59:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:04.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:06.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:06 compute-2 nova_compute[232428]: 2025-11-29 08:59:06.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:06.131 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:59:06 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:06.133 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:59:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:06 compute-2 ceph-mon[77138]: pgmap v3716: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 08:59:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:06.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:07 compute-2 nova_compute[232428]: 2025-11-29 08:59:07.789 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:08.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:08 compute-2 podman[334794]: 2025-11-29 08:59:08.713129435 +0000 UTC m=+0.108072022 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:59:08 compute-2 ceph-mon[77138]: pgmap v3717: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Nov 29 08:59:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:08.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:09.136 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:09 compute-2 nova_compute[232428]: 2025-11-29 08:59:09.342 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:10.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:10 compute-2 ceph-mon[77138]: pgmap v3718: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Nov 29 08:59:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:12.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:12 compute-2 nova_compute[232428]: 2025-11-29 08:59:12.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:12 compute-2 ceph-mon[77138]: pgmap v3719: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 873 KiB/s wr, 38 op/s
Nov 29 08:59:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:12.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:14 compute-2 nova_compute[232428]: 2025-11-29 08:59:14.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:14 compute-2 ceph-mon[77138]: pgmap v3720: 305 pgs: 305 active+clean; 173 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 17 KiB/s wr, 9 op/s
Nov 29 08:59:14 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/173704970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:16 compute-2 ceph-mon[77138]: pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Nov 29 08:59:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:17 compute-2 nova_compute[232428]: 2025-11-29 08:59:17.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:18 compute-2 ceph-mon[77138]: pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 29 08:59:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:18.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:19 compute-2 sudo[334820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:19 compute-2 sudo[334820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:19 compute-2 sudo[334820]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:19 compute-2 sudo[334845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:19 compute-2 sudo[334845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:19 compute-2 sudo[334845]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:19 compute-2 nova_compute[232428]: 2025-11-29 08:59:19.349 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:20.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:20 compute-2 podman[334871]: 2025-11-29 08:59:20.705788367 +0000 UTC m=+0.110808137 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:59:20 compute-2 ceph-mon[77138]: pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 29 08:59:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:22 compute-2 nova_compute[232428]: 2025-11-29 08:59:22.799 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:22 compute-2 ceph-mon[77138]: pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:59:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:24.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:24 compute-2 nova_compute[232428]: 2025-11-29 08:59:24.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:25 compute-2 ceph-mon[77138]: pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 08:59:25 compute-2 nova_compute[232428]: 2025-11-29 08:59:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:26 compute-2 ceph-mon[77138]: pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Nov 29 08:59:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:26 compute-2 podman[334900]: 2025-11-29 08:59:26.64202603 +0000 UTC m=+0.047481138 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 08:59:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:27 compute-2 nova_compute[232428]: 2025-11-29 08:59:27.802 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:28.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:59:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321654141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:59:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:59:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321654141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:59:28 compute-2 ceph-mon[77138]: pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1321654141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:59:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1321654141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:59:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:29 compute-2 nova_compute[232428]: 2025-11-29 08:59:29.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:30.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:30 compute-2 ceph-mon[77138]: pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:31 compute-2 nova_compute[232428]: 2025-11-29 08:59:31.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:32.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:32 compute-2 ceph-mon[77138]: pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:32 compute-2 nova_compute[232428]: 2025-11-29 08:59:32.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:32.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:33 compute-2 nova_compute[232428]: 2025-11-29 08:59:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:34.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:34 compute-2 nova_compute[232428]: 2025-11-29 08:59:34.358 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:34 compute-2 ceph-mon[77138]: pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:35 compute-2 nova_compute[232428]: 2025-11-29 08:59:35.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:35 compute-2 nova_compute[232428]: 2025-11-29 08:59:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:35 compute-2 nova_compute[232428]: 2025-11-29 08:59:35.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:59:35 compute-2 nova_compute[232428]: 2025-11-29 08:59:35.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:59:35 compute-2 nova_compute[232428]: 2025-11-29 08:59:35.308 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:59:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:36 compute-2 ceph-mon[77138]: pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:36.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:37 compute-2 nova_compute[232428]: 2025-11-29 08:59:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:37 compute-2 nova_compute[232428]: 2025-11-29 08:59:37.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:38 compute-2 ceph-mon[77138]: pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:39 compute-2 nova_compute[232428]: 2025-11-29 08:59:39.361 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:39 compute-2 sudo[334925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:39 compute-2 sudo[334925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:39 compute-2 sudo[334925]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:39 compute-2 sudo[334951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:39 compute-2 sudo[334951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:39 compute-2 sudo[334951]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:39 compute-2 podman[334949]: 2025-11-29 08:59:39.532426363 +0000 UTC m=+0.101264399 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:59:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:40.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.329 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.330 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.331 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.331 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.331 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:40 compute-2 ceph-mon[77138]: pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/688819580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:40 compute-2 nova_compute[232428]: 2025-11-29 08:59:40.858 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.036 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.038 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4190MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.038 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.038 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.243 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.244 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.302 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.406 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.407 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.421 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.440 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.463 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2141574915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3340783428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:59:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2674992832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.936 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.943 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.968 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.969 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:59:41 compute-2 nova_compute[232428]: 2025-11-29 08:59:41.969 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:42 compute-2 nova_compute[232428]: 2025-11-29 08:59:42.808 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:42 compute-2 ceph-mon[77138]: pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2674992832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:42 compute-2 nova_compute[232428]: 2025-11-29 08:59:42.969 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:42 compute-2 nova_compute[232428]: 2025-11-29 08:59:42.970 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:42 compute-2 nova_compute[232428]: 2025-11-29 08:59:42.970 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:59:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:44 compute-2 nova_compute[232428]: 2025-11-29 08:59:44.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:59:44 compute-2 nova_compute[232428]: 2025-11-29 08:59:44.365 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:44 compute-2 ceph-mon[77138]: pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:46.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:46 compute-2 ceph-mon[77138]: pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.461 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.462 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.497 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.631 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.632 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.643 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.644 232432 INFO nova.compute.claims [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Claim successful on node compute-2.ctlplane.example.com
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.810 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:47 compute-2 nova_compute[232428]: 2025-11-29 08:59:47.975 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:59:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/84481575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:48 compute-2 nova_compute[232428]: 2025-11-29 08:59:48.458 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:48 compute-2 nova_compute[232428]: 2025-11-29 08:59:48.467 232432 DEBUG nova.compute.provider_tree [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:59:48 compute-2 nova_compute[232428]: 2025-11-29 08:59:48.594 232432 DEBUG nova.scheduler.client.report [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:59:48 compute-2 nova_compute[232428]: 2025-11-29 08:59:48.700 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:48 compute-2 nova_compute[232428]: 2025-11-29 08:59:48.701 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:59:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:49 compute-2 ceph-mon[77138]: pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/84481575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:49 compute-2 nova_compute[232428]: 2025-11-29 08:59:49.131 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:59:49 compute-2 nova_compute[232428]: 2025-11-29 08:59:49.132 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:59:49 compute-2 nova_compute[232428]: 2025-11-29 08:59:49.367 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:49 compute-2 nova_compute[232428]: 2025-11-29 08:59:49.422 232432 INFO nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:59:49 compute-2 nova_compute[232428]: 2025-11-29 08:59:49.735 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:59:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 08:59:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:50.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 08:59:50 compute-2 ceph-mon[77138]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.397 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.398 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.399 232432 INFO nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Creating image(s)
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.436 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.536 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.577 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.583 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.642 232432 DEBUG nova.policy [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.687 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.688 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.690 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.690 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.733 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:50 compute-2 nova_compute[232428]: 2025-11-29 08:59:50.738 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2471491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:51 compute-2 podman[335161]: 2025-11-29 08:59:51.720769452 +0000 UTC m=+0.123595855 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.084 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:52.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.194 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.493 232432 DEBUG nova.objects.instance [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid f0392096-3cf4-4c41-93ba-5a9f1298ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.523 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.524 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Ensure instance console log exists: /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.525 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.525 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.525 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3744677558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:59:52 compute-2 nova_compute[232428]: 2025-11-29 08:59:52.813 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:53.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:53 compute-2 nova_compute[232428]: 2025-11-29 08:59:53.369 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Successfully created port: c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:59:53 compute-2 ceph-mon[77138]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 08:59:54 compute-2 sudo[335263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:54 compute-2 sudo[335263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:54 compute-2 sudo[335263]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:54.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:54 compute-2 sudo[335288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:59:54 compute-2 sudo[335288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:54 compute-2 sudo[335288]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:54 compute-2 sudo[335313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:54 compute-2 sudo[335313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:54 compute-2 sudo[335313]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:54 compute-2 sudo[335338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:59:54 compute-2 sudo[335338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:54 compute-2 nova_compute[232428]: 2025-11-29 08:59:54.371 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:54 compute-2 ceph-mon[77138]: pgmap v3740: 305 pgs: 305 active+clean; 130 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 425 KiB/s wr, 0 op/s
Nov 29 08:59:54 compute-2 sudo[335338]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:55.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.439 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Successfully updated port: c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.478 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.478 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.479 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.549 232432 DEBUG nova.compute.manager [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.550 232432 DEBUG nova.compute.manager [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing instance network info cache due to event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.551 232432 DEBUG oslo_concurrency.lockutils [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:59:55 compute-2 nova_compute[232428]: 2025-11-29 08:59:55.625 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:59:55 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:59:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:56.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:59:56 compute-2 ceph-mon[77138]: pgmap v3741: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:59:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.126 232432 DEBUG nova.network.neutron [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.163 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.164 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance network_info: |[{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.164 232432 DEBUG oslo_concurrency.lockutils [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.164 232432 DEBUG nova.network.neutron [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.170 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Start _get_guest_xml network_info=[{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.232 232432 WARNING nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.237 232432 DEBUG nova.virt.libvirt.host [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.238 232432 DEBUG nova.virt.libvirt.host [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.242 232432 DEBUG nova.virt.libvirt.host [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.242 232432 DEBUG nova.virt.libvirt.host [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.244 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.245 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.246 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.246 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.247 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.247 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.248 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.248 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.249 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.250 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.250 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.250 232432 DEBUG nova.virt.hardware [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.256 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:57 compute-2 podman[335415]: 2025-11-29 08:59:57.684533981 +0000 UTC m=+0.074755477 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 08:59:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:59:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2047433979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.780 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.822 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.828 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:57 compute-2 nova_compute[232428]: 2025-11-29 08:59:57.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 08:59:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 08:59:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:59:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3228682079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.277 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.280 232432 DEBUG nova.virt.libvirt.vif [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:59:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-373777920',display_name='tempest-TestNetworkBasicOps-server-373777920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-373777920',id=210,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGAbE1Bc/5lERhFSOyy81e6ftfTbnPic5FHz7wIIOrErmrP6yq+Ns+MCuAgmZTbCd7/ex2hB13SDnk/c67Tl5CF65XYnsn0BAG8I/91mEvEEo+BDZi5523QkbWfL6YpdLw==',key_name='tempest-TestNetworkBasicOps-256322918',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-86ohm6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:59:50Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=f0392096-3cf4-4c41-93ba-5a9f1298ce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.281 232432 DEBUG nova.network.os_vif_util [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.283 232432 DEBUG nova.network.os_vif_util [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.285 232432 DEBUG nova.objects.instance [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0392096-3cf4-4c41-93ba-5a9f1298ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.317 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <uuid>f0392096-3cf4-4c41-93ba-5a9f1298ce82</uuid>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <name>instance-000000d2</name>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <metadata>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:name>tempest-TestNetworkBasicOps-server-373777920</nova:name>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 08:59:57</nova:creationTime>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <nova:port uuid="c75d8990-417e-4bdb-b3d5-4d7ba47f3643">
Nov 29 08:59:58 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </metadata>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <system>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="serial">f0392096-3cf4-4c41-93ba-5a9f1298ce82</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="uuid">f0392096-3cf4-4c41-93ba-5a9f1298ce82</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </system>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <os>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </os>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <features>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <apic/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </features>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </clock>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </cpu>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   <devices>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk">
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config">
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </source>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 08:59:58 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       </auth>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </disk>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:b9:0b:36"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <target dev="tapc75d8990-41"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </interface>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/console.log" append="off"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </serial>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <video>
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </video>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </rng>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 08:59:58 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 08:59:58 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 08:59:58 compute-2 nova_compute[232428]:   </devices>
Nov 29 08:59:58 compute-2 nova_compute[232428]: </domain>
Nov 29 08:59:58 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.320 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Preparing to wait for external event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.320 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.321 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.321 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.322 232432 DEBUG nova.virt.libvirt.vif [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:59:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-373777920',display_name='tempest-TestNetworkBasicOps-server-373777920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-373777920',id=210,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGAbE1Bc/5lERhFSOyy81e6ftfTbnPic5FHz7wIIOrErmrP6yq+Ns+MCuAgmZTbCd7/ex2hB13SDnk/c67Tl5CF65XYnsn0BAG8I/91mEvEEo+BDZi5523QkbWfL6YpdLw==',key_name='tempest-TestNetworkBasicOps-256322918',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-86ohm6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:59:50Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=f0392096-3cf4-4c41-93ba-5a9f1298ce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.323 232432 DEBUG nova.network.os_vif_util [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.324 232432 DEBUG nova.network.os_vif_util [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.326 232432 DEBUG os_vif [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.327 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.328 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.333 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.334 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc75d8990-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.335 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc75d8990-41, col_values=(('external_ids', {'iface-id': 'c75d8990-417e-4bdb-b3d5-4d7ba47f3643', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:0b:36', 'vm-uuid': 'f0392096-3cf4-4c41-93ba-5a9f1298ce82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.337 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:58 compute-2 NetworkManager[48993]: <info>  [1764406798.3389] manager: (tapc75d8990-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.340 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.349 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.350 232432 INFO os_vif [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41')
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.439 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.440 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.440 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:b9:0b:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.441 232432 INFO nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Using config drive
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.485 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:58 compute-2 ceph-mon[77138]: pgmap v3742: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 08:59:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2047433979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:59:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3228682079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.873 232432 INFO nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Creating config drive at /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config
Nov 29 08:59:58 compute-2 nova_compute[232428]: 2025-11-29 08:59:58.884 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeb0r0cea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 08:59:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 08:59:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.046 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeb0r0cea" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.097 232432 DEBUG nova.storage.rbd_utils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.103 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.154 232432 DEBUG nova.network.neutron [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated VIF entry in instance network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.156 232432 DEBUG nova.network.neutron [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.174 232432 DEBUG oslo_concurrency.lockutils [req-91771a9c-0f4a-4dae-9f7b-ef3e40250c8a req-b75f8b1b-e93e-41e0-b4d1-fb592ae970b4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.329 232432 DEBUG oslo_concurrency.processutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config f0392096-3cf4-4c41-93ba-5a9f1298ce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.330 232432 INFO nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Deleting local config drive /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82/disk.config because it was imported into RBD.
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.4161] manager: (tapc75d8990-41): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Nov 29 08:59:59 compute-2 kernel: tapc75d8990-41: entered promiscuous mode
Nov 29 08:59:59 compute-2 ovn_controller[134375]: 2025-11-29T08:59:59Z|00985|binding|INFO|Claiming lport c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for this chassis.
Nov 29 08:59:59 compute-2 ovn_controller[134375]: 2025-11-29T08:59:59Z|00986|binding|INFO|c75d8990-417e-4bdb-b3d5-4d7ba47f3643: Claiming fa:16:3e:b9:0b:36 10.100.0.13
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.427 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.441 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0b:36 10.100.0.13'], port_security=['fa:16:3e:b9:0b:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0392096-3cf4-4c41-93ba-5a9f1298ce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf214aa3-cb83-4459-afa4-8d60262c5413', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '621713c8-d5a8-4270-89b5-5edcd88d3e97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb2c6e8-c4d9-43b8-a7a3-4ba0a1e884fb, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c75d8990-417e-4bdb-b3d5-4d7ba47f3643) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.444 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 in datapath bf214aa3-cb83-4459-afa4-8d60262c5413 bound to our chassis
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.446 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf214aa3-cb83-4459-afa4-8d60262c5413
Nov 29 08:59:59 compute-2 systemd-udevd[335550]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.465 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[93dd8e7b-fdb7-426e-a421-6b42d1bce286]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.467 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf214aa3-c1 in ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.4685] device (tapc75d8990-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.4699] device (tapc75d8990-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.470 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf214aa3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.470 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[28c3d499-dd4d-4c07-89e0-29ff985a752c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.471 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[300a4209-0176-4a2a-b7c2-85b353b0d74b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 systemd-machined[194747]: New machine qemu-101-instance-000000d2.
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.486 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[38dfa389-0e33-43ed-afd7-cc093f14b8fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.509 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[908f827f-6f11-41dc-bdc2-2ce94d7f5436]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 systemd[1]: Started Virtual Machine qemu-101-instance-000000d2.
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 ovn_controller[134375]: 2025-11-29T08:59:59Z|00987|binding|INFO|Setting lport c75d8990-417e-4bdb-b3d5-4d7ba47f3643 ovn-installed in OVS
Nov 29 08:59:59 compute-2 ovn_controller[134375]: 2025-11-29T08:59:59Z|00988|binding|INFO|Setting lport c75d8990-417e-4bdb-b3d5-4d7ba47f3643 up in Southbound
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.557 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[feaf4d6e-d6e3-44de-83dd-e037f723c674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.5635] manager: (tapbf214aa3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/472)
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.562 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[26c48a88-308f-455b-8b99-6d5d460a5acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 systemd-udevd[335554]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:59:59 compute-2 sudo[335559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:59 compute-2 sudo[335559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.609 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[91207fc1-b813-4d0e-badc-6d4724ddc3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 sudo[335559]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.612 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9dde5474-b77e-4523-a639-01a332af69c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.6409] device (tapbf214aa3-c0): carrier: link connected
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.650 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[04e4cfd5-d1a2-4b69-ba0e-a854fcd2fcbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.676 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c20b0e77-dc4f-4266-99fe-764938aff637]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf214aa3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:5e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976590, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335629, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 sudo[335608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:59:59 compute-2 sudo[335608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:59:59 compute-2 sudo[335608]: pam_unix(sudo:session): session closed for user root
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.701 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b43fb850-ccdf-4510-8cd5-f441d472589d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:5e94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 976590, 'tstamp': 976590}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335634, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.718 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e79e3acd-3a11-4bc9-9b2a-8c45f0749145]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf214aa3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:5e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976590, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335636, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.759 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[df76db38-ab5e-4405-b308-5a8393a85b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.865 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9bdf5-9666-4ebe-840b-39be46270ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.867 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf214aa3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.868 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.869 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf214aa3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.871 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 NetworkManager[48993]: <info>  [1764406799.8724] manager: (tapbf214aa3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Nov 29 08:59:59 compute-2 kernel: tapbf214aa3-c0: entered promiscuous mode
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.876 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf214aa3-c0, col_values=(('external_ids', {'iface-id': 'c17826d8-60cc-4702-9dae-6f2f3963d0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 ovn_controller[134375]: 2025-11-29T08:59:59Z|00989|binding|INFO|Releasing lport c17826d8-60cc-4702-9dae-6f2f3963d0d8 from this chassis (sb_readonly=0)
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.879 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.880 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.881 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f47341-ab15-4381-9403-01ae90a62d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.882 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: global
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-bf214aa3-cb83-4459-afa4-8d60262c5413
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID bf214aa3-cb83-4459-afa4-8d60262c5413
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:59:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 08:59:59.883 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'env', 'PROCESS_TAG=haproxy-bf214aa3-cb83-4459-afa4-8d60262c5413', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf214aa3-cb83-4459-afa4-8d60262c5413.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:59:59 compute-2 nova_compute[232428]: 2025-11-29 08:59:59.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.072 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406800.0715845, f0392096-3cf4-4c41-93ba-5a9f1298ce82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.073 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] VM Started (Lifecycle Event)
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.088 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.093 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406800.0718746, f0392096-3cf4-4c41-93ba-5a9f1298ce82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.093 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] VM Paused (Lifecycle Event)
Nov 29 09:00:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:00.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.121 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.125 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.157 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.193 232432 DEBUG nova.compute.manager [req-f685b3bc-c6b5-4d0e-9801-f05a24843b52 req-9b77e33c-f691-4f94-b3c3-d4d061516356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.194 232432 DEBUG oslo_concurrency.lockutils [req-f685b3bc-c6b5-4d0e-9801-f05a24843b52 req-9b77e33c-f691-4f94-b3c3-d4d061516356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.194 232432 DEBUG oslo_concurrency.lockutils [req-f685b3bc-c6b5-4d0e-9801-f05a24843b52 req-9b77e33c-f691-4f94-b3c3-d4d061516356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.194 232432 DEBUG oslo_concurrency.lockutils [req-f685b3bc-c6b5-4d0e-9801-f05a24843b52 req-9b77e33c-f691-4f94-b3c3-d4d061516356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.194 232432 DEBUG nova.compute.manager [req-f685b3bc-c6b5-4d0e-9801-f05a24843b52 req-9b77e33c-f691-4f94-b3c3-d4d061516356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Processing event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.195 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.198 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764406800.1984446, f0392096-3cf4-4c41-93ba-5a9f1298ce82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.199 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] VM Resumed (Lifecycle Event)
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.200 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.203 232432 INFO nova.virt.libvirt.driver [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance spawned successfully.
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.203 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.228 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.232 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.241 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.241 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.242 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.242 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.243 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.243 232432 DEBUG nova.virt.libvirt.driver [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:00:00 compute-2 podman[335711]: 2025-11-29 09:00:00.276544954 +0000 UTC m=+0.059968096 container create a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.281 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.320 232432 INFO nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Took 9.92 seconds to spawn the instance on the hypervisor.
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.321 232432 DEBUG nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:00:00 compute-2 systemd[1]: Started libpod-conmon-a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4.scope.
Nov 29 09:00:00 compute-2 podman[335711]: 2025-11-29 09:00:00.245481688 +0000 UTC m=+0.028904850 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 09:00:00 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:00:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0375cab9e70bd36e26391f2445f6950126ae66252580d30e850dbccf7a1de4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 09:00:00 compute-2 podman[335711]: 2025-11-29 09:00:00.373027245 +0000 UTC m=+0.156450417 container init a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 09:00:00 compute-2 podman[335711]: 2025-11-29 09:00:00.380042833 +0000 UTC m=+0.163465975 container start a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.401 232432 INFO nova.compute.manager [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Took 12.82 seconds to build instance.
Nov 29 09:00:00 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [NOTICE]   (335730) : New worker (335732) forked
Nov 29 09:00:00 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [NOTICE]   (335730) : Loading success.
Nov 29 09:00:00 compute-2 nova_compute[232428]: 2025-11-29 09:00:00.443 232432 DEBUG oslo_concurrency.lockutils [None req-9b8e021f-e2cb-434f-953f-322339347cb7 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:00 compute-2 ceph-mon[77138]: pgmap v3743: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:00:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 09:00:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:01 compute-2 sudo[335741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:00:01 compute-2 sudo[335741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:01 compute-2 sudo[335741]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:01 compute-2 sudo[335766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:00:01 compute-2 sudo[335766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:01 compute-2 sudo[335766]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:02.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.355 232432 DEBUG nova.compute.manager [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.357 232432 DEBUG oslo_concurrency.lockutils [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.358 232432 DEBUG oslo_concurrency.lockutils [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.359 232432 DEBUG oslo_concurrency.lockutils [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.360 232432 DEBUG nova.compute.manager [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.360 232432 WARNING nova.compute.manager [req-1a0172ba-9cb2-4889-a40b-a177f48fd9bb req-cc0383d6-5e96-4719-87c0-a1d1ad0c0f3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state active and task_state None.
Nov 29 09:00:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:00:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:00:02 compute-2 ceph-mon[77138]: pgmap v3744: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.816 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:02 compute-2 nova_compute[232428]: 2025-11-29 09:00:02.821 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:02.820 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:00:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:02.822 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:00:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:03.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:03 compute-2 nova_compute[232428]: 2025-11-29 09:00:03.338 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:03.366 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:03.367 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:03.368 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:03 compute-2 NetworkManager[48993]: <info>  [1764406803.7669] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Nov 29 09:00:03 compute-2 NetworkManager[48993]: <info>  [1764406803.7678] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Nov 29 09:00:03 compute-2 nova_compute[232428]: 2025-11-29 09:00:03.766 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:03 compute-2 ovn_controller[134375]: 2025-11-29T09:00:03Z|00990|binding|INFO|Releasing lport c17826d8-60cc-4702-9dae-6f2f3963d0d8 from this chassis (sb_readonly=0)
Nov 29 09:00:03 compute-2 ovn_controller[134375]: 2025-11-29T09:00:03Z|00991|binding|INFO|Releasing lport c17826d8-60cc-4702-9dae-6f2f3963d0d8 from this chassis (sb_readonly=0)
Nov 29 09:00:03 compute-2 nova_compute[232428]: 2025-11-29 09:00:03.798 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:03 compute-2 nova_compute[232428]: 2025-11-29 09:00:03.805 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:04 compute-2 nova_compute[232428]: 2025-11-29 09:00:04.287 232432 DEBUG nova.compute.manager [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:04 compute-2 nova_compute[232428]: 2025-11-29 09:00:04.288 232432 DEBUG nova.compute.manager [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing instance network info cache due to event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:00:04 compute-2 nova_compute[232428]: 2025-11-29 09:00:04.288 232432 DEBUG oslo_concurrency.lockutils [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:00:04 compute-2 nova_compute[232428]: 2025-11-29 09:00:04.289 232432 DEBUG oslo_concurrency.lockutils [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:00:04 compute-2 nova_compute[232428]: 2025-11-29 09:00:04.289 232432 DEBUG nova.network.neutron [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:00:04 compute-2 ceph-mon[77138]: pgmap v3745: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 29 09:00:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.601472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805601526, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 957, "num_deletes": 251, "total_data_size": 1985735, "memory_usage": 2017392, "flush_reason": "Manual Compaction"}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805609599, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1299245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82274, "largest_seqno": 83226, "table_properties": {"data_size": 1294851, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9708, "raw_average_key_size": 19, "raw_value_size": 1286080, "raw_average_value_size": 2608, "num_data_blocks": 91, "num_entries": 493, "num_filter_entries": 493, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406732, "oldest_key_time": 1764406732, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8200 microseconds, and 3570 cpu microseconds.
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.609670) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1299245 bytes OK
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.609689) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610886) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610897) EVENT_LOG_v1 {"time_micros": 1764406805610893, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1980969, prev total WAL file size 1980969, number of live WAL files 2.
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.611585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1268KB)], [165(14MB)]
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805611644, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 16400048, "oldest_snapshot_seqno": -1}
Nov 29 09:00:05 compute-2 nova_compute[232428]: 2025-11-29 09:00:05.667 232432 DEBUG nova.network.neutron [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated VIF entry in instance network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:00:05 compute-2 nova_compute[232428]: 2025-11-29 09:00:05.668 232432 DEBUG nova.network.neutron [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10774 keys, 14492653 bytes, temperature: kUnknown
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805725159, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 14492653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14422031, "index_size": 42605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26949, "raw_key_size": 285450, "raw_average_key_size": 26, "raw_value_size": 14232212, "raw_average_value_size": 1320, "num_data_blocks": 1620, "num_entries": 10774, "num_filter_entries": 10774, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.725780) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 14492653 bytes
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.729279) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.0 rd, 127.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(23.8) write-amplify(11.2) OK, records in: 11289, records dropped: 515 output_compression: NoCompression
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.729303) EVENT_LOG_v1 {"time_micros": 1764406805729293, "job": 106, "event": "compaction_finished", "compaction_time_micros": 113883, "compaction_time_cpu_micros": 41957, "output_level": 6, "num_output_files": 1, "total_output_size": 14492653, "num_input_records": 11289, "num_output_records": 10774, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805730234, "job": 106, "event": "table_file_deletion", "file_number": 167}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805733816, "job": 106, "event": "table_file_deletion", "file_number": 165}
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.611525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.733996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.734000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.734002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.734005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:05.734007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:05 compute-2 nova_compute[232428]: 2025-11-29 09:00:05.775 232432 DEBUG oslo_concurrency.lockutils [req-3809ec39-48f1-4fae-99a2-7431fd68e3c5 req-a3874ba9-e10b-4526-98e9-a5bbca79953b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:00:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:06.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:06 compute-2 ceph-mon[77138]: pgmap v3746: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Nov 29 09:00:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:07 compute-2 nova_compute[232428]: 2025-11-29 09:00:07.819 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:08.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:08 compute-2 nova_compute[232428]: 2025-11-29 09:00:08.341 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:08 compute-2 ceph-mon[77138]: pgmap v3747: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:00:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:09 compute-2 podman[335797]: 2025-11-29 09:00:09.701909342 +0000 UTC m=+0.097260716 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 09:00:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:10.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:10 compute-2 ceph-mon[77138]: pgmap v3748: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:00:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/981739051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:11 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:00:11 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:00:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:12 compute-2 ceph-mon[77138]: pgmap v3749: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:00:12 compute-2 nova_compute[232428]: 2025-11-29 09:00:12.821 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:12.823 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:00:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:13 compute-2 ovn_controller[134375]: 2025-11-29T09:00:13Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:0b:36 10.100.0.13
Nov 29 09:00:13 compute-2 ovn_controller[134375]: 2025-11-29T09:00:13Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:0b:36 10.100.0.13
Nov 29 09:00:13 compute-2 nova_compute[232428]: 2025-11-29 09:00:13.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:14 compute-2 ceph-mon[77138]: pgmap v3750: 305 pgs: 305 active+clean; 183 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 556 KiB/s wr, 81 op/s
Nov 29 09:00:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2364951944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:00:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/65487676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:00:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:16 compute-2 ceph-mon[77138]: pgmap v3751: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 143 op/s
Nov 29 09:00:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.287552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817287583, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 357, "num_deletes": 250, "total_data_size": 272159, "memory_usage": 278792, "flush_reason": "Manual Compaction"}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817290970, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 178336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83232, "largest_seqno": 83583, "table_properties": {"data_size": 176179, "index_size": 320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5928, "raw_average_key_size": 20, "raw_value_size": 171912, "raw_average_value_size": 588, "num_data_blocks": 14, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406805, "oldest_key_time": 1764406805, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 3442 microseconds, and 1234 cpu microseconds.
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.290994) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 178336 bytes OK
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.291006) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293163) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293187) EVENT_LOG_v1 {"time_micros": 1764406817293179, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 269758, prev total WAL file size 269758, number of live WAL files 2.
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373638' seq:72057594037927935, type:22 .. '6D6772737461740033303139' seq:0, type:0; will stop at (end)
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(174KB)], [168(13MB)]
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817293786, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14670989, "oldest_snapshot_seqno": -1}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10558 keys, 10829774 bytes, temperature: kUnknown
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817405982, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10829774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10765360, "index_size": 36909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 281158, "raw_average_key_size": 26, "raw_value_size": 10584264, "raw_average_value_size": 1002, "num_data_blocks": 1382, "num_entries": 10558, "num_filter_entries": 10558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.406468) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10829774 bytes
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.408152) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.6 rd, 96.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(143.0) write-amplify(60.7) OK, records in: 11066, records dropped: 508 output_compression: NoCompression
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.408183) EVENT_LOG_v1 {"time_micros": 1764406817408170, "job": 108, "event": "compaction_finished", "compaction_time_micros": 112295, "compaction_time_cpu_micros": 53912, "output_level": 6, "num_output_files": 1, "total_output_size": 10829774, "num_input_records": 11066, "num_output_records": 10558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817408545, "job": 108, "event": "table_file_deletion", "file_number": 170}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817413277, "job": 108, "event": "table_file_deletion", "file_number": 168}
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.413419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.413427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.413430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.413435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:00:17.413439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:00:17 compute-2 nova_compute[232428]: 2025-11-29 09:00:17.826 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:18.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:18 compute-2 ceph-mon[77138]: pgmap v3752: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 09:00:18 compute-2 nova_compute[232428]: 2025-11-29 09:00:18.346 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:19.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:19 compute-2 sudo[335824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:00:19 compute-2 sudo[335824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:19 compute-2 sudo[335824]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:19 compute-2 sudo[335849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:00:19 compute-2 sudo[335849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:19 compute-2 sudo[335849]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:20.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:20 compute-2 ceph-mon[77138]: pgmap v3753: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 29 09:00:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:21.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:22.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:22 compute-2 ceph-mon[77138]: pgmap v3754: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 29 09:00:22 compute-2 podman[335876]: 2025-11-29 09:00:22.708507818 +0000 UTC m=+0.103485899 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:00:22 compute-2 nova_compute[232428]: 2025-11-29 09:00:22.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:23 compute-2 nova_compute[232428]: 2025-11-29 09:00:23.349 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:24.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:24 compute-2 ceph-mon[77138]: pgmap v3755: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Nov 29 09:00:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:25 compute-2 nova_compute[232428]: 2025-11-29 09:00:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:26 compute-2 ceph-mon[77138]: pgmap v3756: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 151 op/s
Nov 29 09:00:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:27 compute-2 nova_compute[232428]: 2025-11-29 09:00:27.833 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:00:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965781439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:00:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:00:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965781439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:00:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:28 compute-2 nova_compute[232428]: 2025-11-29 09:00:28.351 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:28 compute-2 podman[335906]: 2025-11-29 09:00:28.664740962 +0000 UTC m=+0.071156264 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 09:00:28 compute-2 ceph-mon[77138]: pgmap v3757: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 77 op/s
Nov 29 09:00:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/965781439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:00:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/965781439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:00:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:29.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:30 compute-2 ceph-mon[77138]: pgmap v3758: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 77 op/s
Nov 29 09:00:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:31 compute-2 nova_compute[232428]: 2025-11-29 09:00:31.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:32.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:32 compute-2 ceph-mon[77138]: pgmap v3759: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 72 op/s
Nov 29 09:00:32 compute-2 nova_compute[232428]: 2025-11-29 09:00:32.836 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:33 compute-2 nova_compute[232428]: 2025-11-29 09:00:33.353 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:34.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:34 compute-2 ceph-mon[77138]: pgmap v3760: 305 pgs: 305 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 500 KiB/s rd, 739 KiB/s wr, 37 op/s
Nov 29 09:00:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:35.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:35 compute-2 nova_compute[232428]: 2025-11-29 09:00:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:00:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.966 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.967 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.967 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 09:00:36 compute-2 nova_compute[232428]: 2025-11-29 09:00:36.967 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid f0392096-3cf4-4c41-93ba-5a9f1298ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:00:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:37 compute-2 ceph-mon[77138]: pgmap v3761: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:00:37 compute-2 nova_compute[232428]: 2025-11-29 09:00:37.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.154 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:38.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.183 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.183 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:38 compute-2 ceph-mon[77138]: pgmap v3762: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:00:38 compute-2 nova_compute[232428]: 2025-11-29 09:00:38.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:39.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:39 compute-2 nova_compute[232428]: 2025-11-29 09:00:39.465 232432 INFO nova.compute.manager [None req-b8cd99f7-c1cc-4901-9883-d284215ab75b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Get console output
Nov 29 09:00:39 compute-2 nova_compute[232428]: 2025-11-29 09:00:39.476 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 09:00:39 compute-2 sudo[335930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:00:39 compute-2 sudo[335930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:39 compute-2 sudo[335930]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:40 compute-2 podman[335955]: 2025-11-29 09:00:40.059853202 +0000 UTC m=+0.066370885 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:00:40 compute-2 sudo[335962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:00:40 compute-2 sudo[335962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:00:40 compute-2 sudo[335962]: pam_unix(sudo:session): session closed for user root
Nov 29 09:00:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.436 232432 DEBUG nova.compute.manager [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.436 232432 DEBUG nova.compute.manager [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing instance network info cache due to event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.438 232432 DEBUG oslo_concurrency.lockutils [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.438 232432 DEBUG oslo_concurrency.lockutils [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.439 232432 DEBUG nova.network.neutron [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:00:40 compute-2 ceph-mon[77138]: pgmap v3763: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.716 232432 DEBUG nova.compute.manager [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.716 232432 DEBUG oslo_concurrency.lockutils [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.717 232432 DEBUG oslo_concurrency.lockutils [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.718 232432 DEBUG oslo_concurrency.lockutils [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.718 232432 DEBUG nova.compute.manager [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:40 compute-2 nova_compute[232428]: 2025-11-29 09:00:40.719 232432 WARNING nova.compute.manager [req-8df58b1f-8b86-407a-af97-4ba79f254cd4 req-88e4cdbf-3b36-455b-ab12-04a947deeff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state active and task_state None.
Nov 29 09:00:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:41.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.264 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.265 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.265 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.265 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.266 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.443 232432 INFO nova.compute.manager [None req-0d5eaa53-c151-4b59-8699-5f23b5420983 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Get console output
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.451 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.565 232432 DEBUG nova.network.neutron [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated VIF entry in instance network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.565 232432 DEBUG nova.network.neutron [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.601 232432 DEBUG oslo_concurrency.lockutils [req-907949a6-8c61-43b9-8a00-44095a698f95 req-af2d9381-19bd-4131-939a-94aa36afdb1f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:00:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:00:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3935753706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.707 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.794 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.794 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.994 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.995 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3959MB free_disk=20.897228240966797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.995 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:41 compute-2 nova_compute[232428]: 2025-11-29 09:00:41.995 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.100 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance f0392096-3cf4-4c41-93ba-5a9f1298ce82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.101 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.101 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.179 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:00:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:42.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:00:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4247184490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.637 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.644 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.667 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:00:42 compute-2 ceph-mon[77138]: pgmap v3764: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:00:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3935753706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3896652145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4247184490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.712 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.712 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.842 232432 DEBUG nova.compute.manager [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.842 232432 DEBUG oslo_concurrency.lockutils [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.842 232432 DEBUG oslo_concurrency.lockutils [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.842 232432 DEBUG oslo_concurrency.lockutils [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.843 232432 DEBUG nova.compute.manager [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.843 232432 WARNING nova.compute.manager [req-3a9dd7c6-30ea-4443-bb81-c33f5ed18410 req-167f45fe-1da9-4bf2-bf86-a3c5986ddd1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state active and task_state None.
Nov 29 09:00:42 compute-2 nova_compute[232428]: 2025-11-29 09:00:42.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:43.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:43 compute-2 nova_compute[232428]: 2025-11-29 09:00:43.356 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4159564219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:43 compute-2 nova_compute[232428]: 2025-11-29 09:00:43.712 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:43 compute-2 nova_compute[232428]: 2025-11-29 09:00:43.713 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:00:43 compute-2 nova_compute[232428]: 2025-11-29 09:00:43.713 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:00:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.266 232432 INFO nova.compute.manager [None req-8b47de4a-e808-473e-8d18-1c4423834aed 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Get console output
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.275 275616 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 09:00:44 compute-2 ceph-mon[77138]: pgmap v3765: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.954 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.954 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing instance network info cache due to event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.954 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.955 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:00:44 compute-2 nova_compute[232428]: 2025-11-29 09:00:44.955 232432 DEBUG nova.network.neutron [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:00:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:45.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:46.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:46.384 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:00:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:46.384 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.385 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:46 compute-2 ceph-mon[77138]: pgmap v3766: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 163 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.816 232432 DEBUG nova.network.neutron [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated VIF entry in instance network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.817 232432 DEBUG nova.network.neutron [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.858 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.859 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.860 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.860 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.861 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.861 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.862 232432 WARNING nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state active and task_state None.
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.862 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.863 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.863 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.864 232432 DEBUG oslo_concurrency.lockutils [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.864 232432 DEBUG nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:46 compute-2 nova_compute[232428]: 2025-11-29 09:00:46.864 232432 WARNING nova.compute.manager [req-290db874-e2e0-4c78-ac07-086262ad986f req-1f43b7c4-adbc-4c42-8b57-8d2c8de046fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state active and task_state None.
Nov 29 09:00:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:00:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:47.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:00:47 compute-2 nova_compute[232428]: 2025-11-29 09:00:47.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:48.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:48 compute-2 nova_compute[232428]: 2025-11-29 09:00:48.359 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:48 compute-2 ceph-mon[77138]: pgmap v3767: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Nov 29 09:00:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2824405244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:50.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:50 compute-2 ceph-mon[77138]: pgmap v3768: 305 pgs: 305 active+clean; 258 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 KiB/s rd, 17 KiB/s wr, 5 op/s
Nov 29 09:00:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:51.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:51 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:51.386 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:00:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.362 232432 DEBUG nova.compute.manager [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.362 232432 DEBUG nova.compute.manager [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing instance network info cache due to event network-changed-c75d8990-417e-4bdb-b3d5-4d7ba47f3643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.362 232432 DEBUG oslo_concurrency.lockutils [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.363 232432 DEBUG oslo_concurrency.lockutils [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.363 232432 DEBUG nova.network.neutron [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Refreshing network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.523 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.523 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.524 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.524 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.524 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.525 232432 INFO nova.compute.manager [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Terminating instance
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.526 232432 DEBUG nova.compute.manager [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 09:00:52 compute-2 kernel: tapc75d8990-41 (unregistering): left promiscuous mode
Nov 29 09:00:52 compute-2 NetworkManager[48993]: <info>  [1764406852.5867] device (tapc75d8990-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 09:00:52 compute-2 ovn_controller[134375]: 2025-11-29T09:00:52Z|00992|binding|INFO|Releasing lport c75d8990-417e-4bdb-b3d5-4d7ba47f3643 from this chassis (sb_readonly=0)
Nov 29 09:00:52 compute-2 ovn_controller[134375]: 2025-11-29T09:00:52Z|00993|binding|INFO|Setting lport c75d8990-417e-4bdb-b3d5-4d7ba47f3643 down in Southbound
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.596 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 ovn_controller[134375]: 2025-11-29T09:00:52Z|00994|binding|INFO|Removing iface tapc75d8990-41 ovn-installed in OVS
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.609 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0b:36 10.100.0.13'], port_security=['fa:16:3e:b9:0b:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0392096-3cf4-4c41-93ba-5a9f1298ce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf214aa3-cb83-4459-afa4-8d60262c5413', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '8', 'neutron:security_group_ids': '621713c8-d5a8-4270-89b5-5edcd88d3e97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb2c6e8-c4d9-43b8-a7a3-4ba0a1e884fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=c75d8990-417e-4bdb-b3d5-4d7ba47f3643) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.610 143801 INFO neutron.agent.ovn.metadata.agent [-] Port c75d8990-417e-4bdb-b3d5-4d7ba47f3643 in datapath bf214aa3-cb83-4459-afa4-8d60262c5413 unbound from our chassis
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.612 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf214aa3-cb83-4459-afa4-8d60262c5413, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.613 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[83b869ca-2508-4d58-a15e-148b1d796f85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.614 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 namespace which is not needed anymore
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Nov 29 09:00:52 compute-2 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d2.scope: Consumed 15.680s CPU time.
Nov 29 09:00:52 compute-2 systemd-machined[194747]: Machine qemu-101-instance-000000d2 terminated.
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.760 232432 INFO nova.virt.libvirt.driver [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance destroyed successfully.
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.761 232432 DEBUG nova.objects.instance [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid f0392096-3cf4-4c41-93ba-5a9f1298ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.774 232432 DEBUG nova.virt.libvirt.vif [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:59:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-373777920',display_name='tempest-TestNetworkBasicOps-server-373777920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-373777920',id=210,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGAbE1Bc/5lERhFSOyy81e6ftfTbnPic5FHz7wIIOrErmrP6yq+Ns+MCuAgmZTbCd7/ex2hB13SDnk/c67Tl5CF65XYnsn0BAG8I/91mEvEEo+BDZi5523QkbWfL6YpdLw==',key_name='tempest-TestNetworkBasicOps-256322918',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:00:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-86ohm6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:00:00Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=f0392096-3cf4-4c41-93ba-5a9f1298ce82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.774 232432 DEBUG nova.network.os_vif_util [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.775 232432 DEBUG nova.network.os_vif_util [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.776 232432 DEBUG os_vif [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.779 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.780 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc75d8990-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.782 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 ceph-mon[77138]: pgmap v3769: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.786 232432 INFO os_vif [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0b:36,bridge_name='br-int',has_traffic_filtering=True,id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75d8990-41')
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [NOTICE]   (335730) : haproxy version is 2.8.14-c23fe91
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [NOTICE]   (335730) : path to executable is /usr/sbin/haproxy
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [WARNING]  (335730) : Exiting Master process...
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [WARNING]  (335730) : Exiting Master process...
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [ALERT]    (335730) : Current worker (335732) exited with code 143 (Terminated)
Nov 29 09:00:52 compute-2 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[335726]: [WARNING]  (335730) : All workers exited. Exiting... (0)
Nov 29 09:00:52 compute-2 systemd[1]: libpod-a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4.scope: Deactivated successfully.
Nov 29 09:00:52 compute-2 podman[336095]: 2025-11-29 09:00:52.838070317 +0000 UTC m=+0.060317258 container died a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 09:00:52 compute-2 podman[336069]: 2025-11-29 09:00:52.838229952 +0000 UTC m=+0.099238938 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.845 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-8d0375cab9e70bd36e26391f2445f6950126ae66252580d30e850dbccf7a1de4-merged.mount: Deactivated successfully.
Nov 29 09:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4-userdata-shm.mount: Deactivated successfully.
Nov 29 09:00:52 compute-2 podman[336095]: 2025-11-29 09:00:52.873705325 +0000 UTC m=+0.095952266 container cleanup a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 09:00:52 compute-2 systemd[1]: libpod-conmon-a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4.scope: Deactivated successfully.
Nov 29 09:00:52 compute-2 podman[336162]: 2025-11-29 09:00:52.940087769 +0000 UTC m=+0.042601415 container remove a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.946 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[44381079-e657-4214-b98d-bb26837019ef]: (4, ('Sat Nov 29 09:00:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 (a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4)\na4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4\nSat Nov 29 09:00:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 (a4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4)\na4bd425c9e2c456762a72d5d01f43288137d98a2e3d4a9235fad58eae31cbdc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.948 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37afc3f3-edae-472d-ac1a-3ecd4e56e196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.949 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf214aa3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.951 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 kernel: tapbf214aa3-c0: left promiscuous mode
Nov 29 09:00:52 compute-2 nova_compute[232428]: 2025-11-29 09:00:52.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.967 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9518f60c-77b6-4954-847e-7ad707f7b137]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.986 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c0462c23-6778-4e0e-8fac-5d4e971d011e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:52.988 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfb09b4-2189-4e35-bb53-f0097c384188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:53.006 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2d034f5a-954d-4ed8-a2d4-cfa8a40a4c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976581, 'reachable_time': 28066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336177, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:53.010 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 09:00:53 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:00:53.010 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[262ab61b-2547-4a04-bda4-623449885f0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:00:53 compute-2 systemd[1]: run-netns-ovnmeta\x2dbf214aa3\x2dcb83\x2d4459\x2dafa4\x2d8d60262c5413.mount: Deactivated successfully.
Nov 29 09:00:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.186 232432 DEBUG nova.compute.manager [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.186 232432 DEBUG oslo_concurrency.lockutils [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.186 232432 DEBUG oslo_concurrency.lockutils [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.187 232432 DEBUG oslo_concurrency.lockutils [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.187 232432 DEBUG nova.compute.manager [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.187 232432 DEBUG nova.compute.manager [req-757c9c31-67dc-4f76-b240-dc1a3c1d5790 req-beb94cb7-e2d1-4753-87b1-db84b99407db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-unplugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.208 232432 INFO nova.virt.libvirt.driver [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Deleting instance files /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82_del
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.209 232432 INFO nova.virt.libvirt.driver [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Deletion of /var/lib/nova/instances/f0392096-3cf4-4c41-93ba-5a9f1298ce82_del complete
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.268 232432 INFO nova.compute.manager [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.269 232432 DEBUG oslo.service.loopingcall [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.269 232432 DEBUG nova.compute.manager [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 09:00:53 compute-2 nova_compute[232428]: 2025-11-29 09:00:53.270 232432 DEBUG nova.network.neutron [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 09:00:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.224 232432 DEBUG nova.network.neutron [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.239 232432 INFO nova.compute.manager [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Took 0.97 seconds to deallocate network for instance.
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.294 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.294 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.365 232432 DEBUG oslo_concurrency.processutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.401 232432 DEBUG nova.network.neutron [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updated VIF entry in instance network info cache for port c75d8990-417e-4bdb-b3d5-4d7ba47f3643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.403 232432 DEBUG nova.network.neutron [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Updating instance_info_cache with network_info: [{"id": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "address": "fa:16:3e:b9:0b:36", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75d8990-41", "ovs_interfaceid": "c75d8990-417e-4bdb-b3d5-4d7ba47f3643", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.421 232432 DEBUG oslo_concurrency.lockutils [req-4b25095b-051f-43a6-9257-c46262b232e5 req-b66361b3-2157-4d8d-a73a-d87d78b9a12e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f0392096-3cf4-4c41-93ba-5a9f1298ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:00:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2387376699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/661294477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:00:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1397515001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.799 232432 DEBUG oslo_concurrency.processutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.807 232432 DEBUG nova.compute.provider_tree [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.833 232432 DEBUG nova.scheduler.client.report [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.865 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:54 compute-2 nova_compute[232428]: 2025-11-29 09:00:54.913 232432 INFO nova.scheduler.client.report [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance f0392096-3cf4-4c41-93ba-5a9f1298ce82
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.019 232432 DEBUG oslo_concurrency.lockutils [None req-8b196d45-6a8e-4ebf-8240-f2716ec21975 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.299 232432 DEBUG nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.300 232432 DEBUG oslo_concurrency.lockutils [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.301 232432 DEBUG oslo_concurrency.lockutils [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.301 232432 DEBUG oslo_concurrency.lockutils [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f0392096-3cf4-4c41-93ba-5a9f1298ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.301 232432 DEBUG nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] No waiting events found dispatching network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.302 232432 WARNING nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received unexpected event network-vif-plugged-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 for instance with vm_state deleted and task_state None.
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.302 232432 DEBUG nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Received event network-vif-deleted-c75d8990-417e-4bdb-b3d5-4d7ba47f3643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.302 232432 INFO nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Neutron deleted interface c75d8990-417e-4bdb-b3d5-4d7ba47f3643; detaching it from the instance and deleting it from the info cache
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.303 232432 DEBUG nova.network.neutron [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 29 09:00:55 compute-2 nova_compute[232428]: 2025-11-29 09:00:55.305 232432 DEBUG nova.compute.manager [req-647e6fc2-ecb6-485b-8a0d-84619d4f7886 req-6809c0a7-6963-477e-9dcb-2db4b3825b35 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Detach interface failed, port_id=c75d8990-417e-4bdb-b3d5-4d7ba47f3643, reason: Instance f0392096-3cf4-4c41-93ba-5a9f1298ce82 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 09:00:55 compute-2 ceph-mon[77138]: pgmap v3770: 305 pgs: 305 active+clean; 186 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 29 op/s
Nov 29 09:00:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1397515001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:00:55 compute-2 sshd-session[336201]: Invalid user ethereum from 45.148.10.240 port 43340
Nov 29 09:00:55 compute-2 sshd-session[336201]: Connection closed by invalid user ethereum 45.148.10.240 port 43340 [preauth]
Nov 29 09:00:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:56.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:00:56 compute-2 ceph-mon[77138]: pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Nov 29 09:00:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:57 compute-2 nova_compute[232428]: 2025-11-29 09:00:57.783 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:57 compute-2 nova_compute[232428]: 2025-11-29 09:00:57.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:00:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:00:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:58.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:00:58 compute-2 ceph-mon[77138]: pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Nov 29 09:00:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:00:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:00:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:00:59 compute-2 podman[336205]: 2025-11-29 09:00:59.653209373 +0000 UTC m=+0.059186541 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:01:00 compute-2 sudo[336226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:00 compute-2 sudo[336226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:00 compute-2 sudo[336226]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:00.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:00 compute-2 nova_compute[232428]: 2025-11-29 09:01:00.209 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:00 compute-2 sudo[336251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:00 compute-2 sudo[336251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:00 compute-2 sudo[336251]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:00 compute-2 nova_compute[232428]: 2025-11-29 09:01:00.279 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:00 compute-2 ceph-mon[77138]: pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Nov 29 09:01:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:01 compute-2 CROND[336278]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 09:01:01 compute-2 run-parts[336281]: (/etc/cron.hourly) starting 0anacron
Nov 29 09:01:01 compute-2 run-parts[336287]: (/etc/cron.hourly) finished 0anacron
Nov 29 09:01:01 compute-2 CROND[336277]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 09:01:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:02 compute-2 sudo[336289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:02 compute-2 sudo[336289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:02 compute-2 sudo[336289]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:02 compute-2 sudo[336314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:01:02 compute-2 sudo[336314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:02 compute-2 sudo[336314]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:02 compute-2 sudo[336339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:02 compute-2 sudo[336339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:02 compute-2 sudo[336339]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:02.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:02 compute-2 sudo[336364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:01:02 compute-2 sudo[336364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:02 compute-2 ceph-mon[77138]: pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 52 op/s
Nov 29 09:01:02 compute-2 nova_compute[232428]: 2025-11-29 09:01:02.785 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:02 compute-2 sudo[336364]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:02 compute-2 nova_compute[232428]: 2025-11-29 09:01:02.848 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:03.367 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:03.368 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:01:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:03.368 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:01:03 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:01:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:04.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:04 compute-2 ceph-mon[77138]: pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:01:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:05.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:06.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:06 compute-2 ceph-mon[77138]: pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:01:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:07 compute-2 nova_compute[232428]: 2025-11-29 09:01:07.758 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406852.7574096, f0392096-3cf4-4c41-93ba-5a9f1298ce82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:01:07 compute-2 nova_compute[232428]: 2025-11-29 09:01:07.758 232432 INFO nova.compute.manager [-] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] VM Stopped (Lifecycle Event)
Nov 29 09:01:07 compute-2 nova_compute[232428]: 2025-11-29 09:01:07.787 232432 DEBUG nova.compute.manager [None req-9b882870-54c3-4e62-84ad-b7fb349c7863 - - - - - -] [instance: f0392096-3cf4-4c41-93ba-5a9f1298ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:01:07 compute-2 nova_compute[232428]: 2025-11-29 09:01:07.788 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:07 compute-2 nova_compute[232428]: 2025-11-29 09:01:07.862 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:08 compute-2 ceph-mon[77138]: pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:09.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:09 compute-2 sudo[336423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:09 compute-2 sudo[336423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:09 compute-2 sudo[336423]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:09 compute-2 sudo[336448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:01:09 compute-2 sudo[336448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:09 compute-2 sudo[336448]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:01:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:01:10 compute-2 ceph-mon[77138]: pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:10 compute-2 podman[336475]: 2025-11-29 09:01:10.709600948 +0000 UTC m=+0.095089099 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 09:01:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:12 compute-2 ceph-mon[77138]: pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:12 compute-2 nova_compute[232428]: 2025-11-29 09:01:12.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:12 compute-2 nova_compute[232428]: 2025-11-29 09:01:12.864 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:14 compute-2 ceph-mon[77138]: pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:15.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:16.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:17 compute-2 ceph-mon[77138]: pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:17.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:17 compute-2 nova_compute[232428]: 2025-11-29 09:01:17.792 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:17 compute-2 nova_compute[232428]: 2025-11-29 09:01:17.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:18 compute-2 ceph-mon[77138]: pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:18.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:19.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1401654113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:20 compute-2 ceph-mon[77138]: pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:01:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:20.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:20 compute-2 sudo[336502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:20 compute-2 sudo[336502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:20 compute-2 sudo[336502]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:20 compute-2 sudo[336527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:20 compute-2 sudo[336527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:20 compute-2 sudo[336527]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:21.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:22.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:22 compute-2 nova_compute[232428]: 2025-11-29 09:01:22.795 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:22 compute-2 ceph-mon[77138]: pgmap v3784: 305 pgs: 305 active+clean; 149 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 09:01:22 compute-2 nova_compute[232428]: 2025-11-29 09:01:22.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:23 compute-2 podman[336553]: 2025-11-29 09:01:23.746510736 +0000 UTC m=+0.143309877 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 09:01:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2107379554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:01:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2199218333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:01:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:24.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:25.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:25 compute-2 ceph-mon[77138]: pgmap v3785: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:01:25 compute-2 nova_compute[232428]: 2025-11-29 09:01:25.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:26 compute-2 ceph-mon[77138]: pgmap v3786: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:01:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:26.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:27.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:27 compute-2 nova_compute[232428]: 2025-11-29 09:01:27.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:27 compute-2 nova_compute[232428]: 2025-11-29 09:01:27.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:01:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/588808987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:01:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:01:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/588808987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:01:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:28.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:28 compute-2 ceph-mon[77138]: pgmap v3787: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/588808987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:01:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/588808987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:01:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:29.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:30 compute-2 podman[336583]: 2025-11-29 09:01:30.710668178 +0000 UTC m=+0.104162970 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 09:01:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:31.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:31 compute-2 nova_compute[232428]: 2025-11-29 09:01:31.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:31 compute-2 ceph-mon[77138]: pgmap v3788: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Nov 29 09:01:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:01:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.3 total, 600.0 interval
                                           Cumulative writes: 16K writes, 84K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1540 writes, 7758 keys, 1540 commit groups, 1.0 writes per commit group, ingest: 15.53 MB, 0.03 MB/s
                                           Interval WAL: 1540 writes, 1540 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     42.5      2.44              0.48        54    0.045       0      0       0.0       0.0
                                             L6      1/0   10.33 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4     86.1     73.8      7.57              2.07        53    0.143    428K    28K       0.0       0.0
                                            Sum      1/0   10.33 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     65.1     66.1     10.01              2.54       107    0.094    428K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9     65.9     64.4      1.52              0.45        14    0.108     77K   3641       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     86.1     73.8      7.57              2.07        53    0.143    428K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     44.1      2.36              0.48        53    0.044       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.101, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.65 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 10.0 seconds
                                           Interval compaction: 0.10 GB write, 0.16 MB/s write, 0.10 GB read, 0.17 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 72.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000344 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3977,69.85 MB,22.9786%) FilterBlock(107,1.14 MB,0.376024%) IndexBlock(107,1.88 MB,0.618147%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 09:01:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:32.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:32 compute-2 ceph-mon[77138]: pgmap v3789: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 09:01:32 compute-2 nova_compute[232428]: 2025-11-29 09:01:32.798 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:32 compute-2 nova_compute[232428]: 2025-11-29 09:01:32.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:33.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:34.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:34 compute-2 ceph-mon[77138]: pgmap v3790: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 543 KiB/s wr, 97 op/s
Nov 29 09:01:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:35.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:36 compute-2 nova_compute[232428]: 2025-11-29 09:01:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:36 compute-2 nova_compute[232428]: 2025-11-29 09:01:36.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:01:36 compute-2 nova_compute[232428]: 2025-11-29 09:01:36.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:01:36 compute-2 nova_compute[232428]: 2025-11-29 09:01:36.220 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:01:36 compute-2 nova_compute[232428]: 2025-11-29 09:01:36.221 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:36.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:36 compute-2 ceph-mon[77138]: pgmap v3791: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:01:37 compute-2 ovn_controller[134375]: 2025-11-29T09:01:37Z|00995|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 29 09:01:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:37.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:37 compute-2 nova_compute[232428]: 2025-11-29 09:01:37.800 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:37 compute-2 nova_compute[232428]: 2025-11-29 09:01:37.871 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:38 compute-2 nova_compute[232428]: 2025-11-29 09:01:38.211 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:38 compute-2 ceph-mon[77138]: pgmap v3792: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 09:01:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:39 compute-2 nova_compute[232428]: 2025-11-29 09:01:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:40 compute-2 sudo[336607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:40 compute-2 sudo[336607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:40 compute-2 sudo[336607]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:40 compute-2 sudo[336632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:01:40 compute-2 sudo[336632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:01:40 compute-2 sudo[336632]: pam_unix(sudo:session): session closed for user root
Nov 29 09:01:40 compute-2 ceph-mon[77138]: pgmap v3793: 305 pgs: 305 active+clean; 170 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 497 KiB/s wr, 85 op/s
Nov 29 09:01:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:41 compute-2 podman[336657]: 2025-11-29 09:01:41.688053996 +0000 UTC m=+0.084777548 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.242 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.243 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.243 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:01:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:42.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:01:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1469269706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.796 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:01:42 compute-2 ceph-mon[77138]: pgmap v3794: 305 pgs: 305 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.803 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:42 compute-2 nova_compute[232428]: 2025-11-29 09:01:42.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.081 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.083 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4190MB free_disk=20.943435668945312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.084 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.084 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:01:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:43.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.288 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.289 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.314 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:01:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:01:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/985941418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.811 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.821 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:01:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1469269706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3246123097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2035084151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.889 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.910 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:01:43 compute-2 nova_compute[232428]: 2025-11-29 09:01:43.910 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:01:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:44.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:44 compute-2 ceph-mon[77138]: pgmap v3795: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 09:01:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/985941418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:44 compute-2 nova_compute[232428]: 2025-11-29 09:01:44.911 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:44 compute-2 nova_compute[232428]: 2025-11-29 09:01:44.911 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:44 compute-2 nova_compute[232428]: 2025-11-29 09:01:44.912 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:01:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:45.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:46.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:47 compute-2 ceph-mon[77138]: pgmap v3796: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 09:01:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:47.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:47 compute-2 nova_compute[232428]: 2025-11-29 09:01:47.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:01:47 compute-2 nova_compute[232428]: 2025-11-29 09:01:47.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:47 compute-2 nova_compute[232428]: 2025-11-29 09:01:47.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:48 compute-2 ceph-mon[77138]: pgmap v3797: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 09:01:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:48.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:48.908 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:01:48 compute-2 nova_compute[232428]: 2025-11-29 09:01:48.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:48.912 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:01:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:01:48.914 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:01:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:49.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:50 compute-2 ceph-mon[77138]: pgmap v3798: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 09:01:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:52.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:52 compute-2 nova_compute[232428]: 2025-11-29 09:01:52.809 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:52 compute-2 nova_compute[232428]: 2025-11-29 09:01:52.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:53.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:53 compute-2 ceph-mon[77138]: pgmap v3799: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 1.7 MiB/s wr, 42 op/s
Nov 29 09:01:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:54.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/226939711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:54 compute-2 ceph-mon[77138]: pgmap v3800: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 96 KiB/s rd, 86 KiB/s wr, 29 op/s
Nov 29 09:01:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1600940834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:54 compute-2 podman[336730]: 2025-11-29 09:01:54.715675036 +0000 UTC m=+0.117335580 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:01:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:55.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:01:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1524937875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:01:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:01:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:56 compute-2 ceph-mon[77138]: pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 09:01:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:01:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:57.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:01:57 compute-2 nova_compute[232428]: 2025-11-29 09:01:57.812 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:57 compute-2 nova_compute[232428]: 2025-11-29 09:01:57.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:01:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:01:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:58.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:01:58 compute-2 ceph-mon[77138]: pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 09:01:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:01:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:01:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:59.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:00 compute-2 ceph-mon[77138]: pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 09:02:00 compute-2 sudo[336760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:00 compute-2 sudo[336760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:00 compute-2 sudo[336760]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:00 compute-2 sudo[336785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:00 compute-2 sudo[336785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:00 compute-2 sudo[336785]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:00 compute-2 podman[336809]: 2025-11-29 09:02:00.896505437 +0000 UTC m=+0.091694104 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 09:02:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:02 compute-2 ceph-mon[77138]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:02:02 compute-2 nova_compute[232428]: 2025-11-29 09:02:02.815 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:02 compute-2 nova_compute[232428]: 2025-11-29 09:02:02.882 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:02:03.369 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:02:03.369 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:02:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:02:03.370 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:02:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:04 compute-2 ceph-mon[77138]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:02:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:05.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:06.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:06 compute-2 ceph-mon[77138]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 13 op/s
Nov 29 09:02:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:07.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:07 compute-2 nova_compute[232428]: 2025-11-29 09:02:07.817 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:07 compute-2 nova_compute[232428]: 2025-11-29 09:02:07.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:08.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:08 compute-2 ceph-mon[77138]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:09 compute-2 sudo[336834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:09 compute-2 sudo[336834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:09 compute-2 sudo[336834]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:09 compute-2 sudo[336859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:02:09 compute-2 sudo[336859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:09 compute-2 sudo[336859]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:09 compute-2 sudo[336884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:09 compute-2 sudo[336884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:09 compute-2 sudo[336884]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:09 compute-2 sudo[336909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:02:09 compute-2 sudo[336909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:10.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:10 compute-2 sudo[336909]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:11 compute-2 ceph-mon[77138]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:11.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:12.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:12 compute-2 ceph-mon[77138]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:02:12 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:02:12 compute-2 podman[336969]: 2025-11-29 09:02:12.672880463 +0000 UTC m=+0.075010104 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 09:02:12 compute-2 nova_compute[232428]: 2025-11-29 09:02:12.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:12 compute-2 nova_compute[232428]: 2025-11-29 09:02:12.885 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:13.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:13 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:02:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:14 compute-2 ceph-mon[77138]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 09:02:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:15.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 09:02:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:16 compute-2 ceph-mon[77138]: pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:17.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:17 compute-2 nova_compute[232428]: 2025-11-29 09:02:17.822 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:17 compute-2 nova_compute[232428]: 2025-11-29 09:02:17.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:18.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:18 compute-2 ceph-mon[77138]: pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:02:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 302K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 28K syncs, 2.68 writes per sync, written: 0.30 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3139 writes, 12K keys, 3139 commit groups, 1.0 writes per commit group, ingest: 12.34 MB, 0.02 MB/s
                                           Interval WAL: 3139 writes, 1258 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:02:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:19 compute-2 sudo[336992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:19 compute-2 sudo[336992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:19 compute-2 sudo[336992]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:19 compute-2 sudo[337017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:02:19 compute-2 sudo[337017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:19 compute-2 sudo[337017]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:02:20 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:02:20 compute-2 ceph-mon[77138]: pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2248713597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:20.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:20 compute-2 sudo[337043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:20 compute-2 sudo[337043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:20 compute-2 sudo[337043]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:20 compute-2 sudo[337068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:20 compute-2 sudo[337068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:21 compute-2 sudo[337068]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:22.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:22 compute-2 ceph-mon[77138]: pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:02:22 compute-2 nova_compute[232428]: 2025-11-29 09:02:22.825 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:22 compute-2 nova_compute[232428]: 2025-11-29 09:02:22.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:23.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:24 compute-2 ceph-mon[77138]: pgmap v3815: 305 pgs: 305 active+clean; 129 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 116 KiB/s wr, 11 op/s
Nov 29 09:02:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:25.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:25 compute-2 nova_compute[232428]: 2025-11-29 09:02:25.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:25 compute-2 podman[337095]: 2025-11-29 09:02:25.70555498 +0000 UTC m=+0.109739144 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 09:02:26 compute-2 nova_compute[232428]: 2025-11-29 09:02:26.213 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:26 compute-2 ceph-mon[77138]: pgmap v3816: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:02:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:27.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:27 compute-2 nova_compute[232428]: 2025-11-29 09:02:27.827 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:27 compute-2 nova_compute[232428]: 2025-11-29 09:02:27.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:02:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/582967868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:02:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:02:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/582967868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:02:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:28 compute-2 ceph-mon[77138]: pgmap v3817: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/582967868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/582967868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1314036464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:02:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2061222742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:02:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:30 compute-2 ceph-mon[77138]: pgmap v3818: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:02:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:31.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:31 compute-2 podman[337125]: 2025-11-29 09:02:31.685264615 +0000 UTC m=+0.071620829 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 09:02:32 compute-2 nova_compute[232428]: 2025-11-29 09:02:32.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:32 compute-2 ceph-mon[77138]: pgmap v3819: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:02:32 compute-2 nova_compute[232428]: 2025-11-29 09:02:32.830 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:32 compute-2 nova_compute[232428]: 2025-11-29 09:02:32.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:34.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:35 compute-2 ceph-mon[77138]: pgmap v3820: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 29 09:02:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:35.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:36 compute-2 nova_compute[232428]: 2025-11-29 09:02:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:36.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:37 compute-2 ceph-mon[77138]: pgmap v3821: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 29 09:02:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:37.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.224 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.833 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:37 compute-2 nova_compute[232428]: 2025-11-29 09:02:37.894 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:38 compute-2 ceph-mon[77138]: pgmap v3822: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:02:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:38.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:39.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:39 compute-2 nova_compute[232428]: 2025-11-29 09:02:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:40 compute-2 nova_compute[232428]: 2025-11-29 09:02:40.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:40.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:40 compute-2 ceph-mon[77138]: pgmap v3823: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:02:41 compute-2 sudo[337149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:41 compute-2 sudo[337149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:41 compute-2 sudo[337149]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:41.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:41 compute-2 sudo[337174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:02:41 compute-2 sudo[337174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:02:41 compute-2 sudo[337174]: pam_unix(sudo:session): session closed for user root
Nov 29 09:02:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.229 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:02:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:02:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4045086690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.694 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:02:42 compute-2 ceph-mon[77138]: pgmap v3824: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:02:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4045086690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.836 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.869 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.870 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4175MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.870 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.870 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.973 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.973 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:02:42 compute-2 nova_compute[232428]: 2025-11-29 09:02:42.991 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:02:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:02:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2720590389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:43 compute-2 nova_compute[232428]: 2025-11-29 09:02:43.452 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:02:43 compute-2 nova_compute[232428]: 2025-11-29 09:02:43.459 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:02:43 compute-2 nova_compute[232428]: 2025-11-29 09:02:43.475 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:02:43 compute-2 nova_compute[232428]: 2025-11-29 09:02:43.478 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:02:43 compute-2 nova_compute[232428]: 2025-11-29 09:02:43.478 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:02:43 compute-2 podman[337244]: 2025-11-29 09:02:43.686990971 +0000 UTC m=+0.073522148 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 09:02:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1544152170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2720590389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:44.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:45 compute-2 ceph-mon[77138]: pgmap v3825: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 29 09:02:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3419853262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:45 compute-2 nova_compute[232428]: 2025-11-29 09:02:45.479 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:45 compute-2 nova_compute[232428]: 2025-11-29 09:02:45.479 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:02:45 compute-2 nova_compute[232428]: 2025-11-29 09:02:45.479 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:02:46 compute-2 ceph-mon[77138]: pgmap v3826: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 66 op/s
Nov 29 09:02:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 09:02:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 09:02:47 compute-2 nova_compute[232428]: 2025-11-29 09:02:47.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:47 compute-2 nova_compute[232428]: 2025-11-29 09:02:47.896 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:48 compute-2 ceph-mon[77138]: pgmap v3827: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 29 09:02:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:50 compute-2 ceph-mon[77138]: pgmap v3828: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 29 09:02:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:52 compute-2 ceph-mon[77138]: pgmap v3829: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 5 op/s
Nov 29 09:02:52 compute-2 nova_compute[232428]: 2025-11-29 09:02:52.839 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:52 compute-2 nova_compute[232428]: 2025-11-29 09:02:52.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:54 compute-2 ceph-mon[77138]: pgmap v3830: 305 pgs: 305 active+clean; 158 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 09:02:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1041010412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:02:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:55.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:02:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:02:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1928395851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:02:56 compute-2 podman[337272]: 2025-11-29 09:02:56.762761218 +0000 UTC m=+0.157607162 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:02:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:57 compute-2 ceph-mon[77138]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 852 B/s wr, 80 op/s
Nov 29 09:02:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/25869047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:02:57 compute-2 nova_compute[232428]: 2025-11-29 09:02:57.843 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:57 compute-2 nova_compute[232428]: 2025-11-29 09:02:57.899 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:58 compute-2 ceph-mon[77138]: pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 852 B/s wr, 77 op/s
Nov 29 09:02:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:02:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:02:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:02:59.161 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:02:59 compute-2 nova_compute[232428]: 2025-11-29 09:02:59.161 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:02:59 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:02:59.163 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:02:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:02:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:02:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:00 compute-2 ceph-mon[77138]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 94 KiB/s rd, 1.2 KiB/s wr, 149 op/s
Nov 29 09:03:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:01 compute-2 sudo[337300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:01 compute-2 sudo[337300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:01 compute-2 sudo[337300]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:01 compute-2 sudo[337325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:01 compute-2 sudo[337325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:01 compute-2 sudo[337325]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:02 compute-2 podman[337351]: 2025-11-29 09:03:02.697549136 +0000 UTC m=+0.090310379 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 09:03:02 compute-2 nova_compute[232428]: 2025-11-29 09:03:02.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:02 compute-2 nova_compute[232428]: 2025-11-29 09:03:02.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:02 compute-2 ceph-mon[77138]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.2 KiB/s wr, 208 op/s
Nov 29 09:03:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:03.369 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:03.369 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:03.370 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:05 compute-2 ceph-mon[77138]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.2 KiB/s wr, 207 op/s
Nov 29 09:03:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:05.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:06 compute-2 ceph-mon[77138]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 1.2 KiB/s wr, 193 op/s
Nov 29 09:03:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:07.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:07 compute-2 nova_compute[232428]: 2025-11-29 09:03:07.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:07 compute-2 nova_compute[232428]: 2025-11-29 09:03:07.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:08 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:08.164 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:08 compute-2 ceph-mon[77138]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 341 B/s wr, 130 op/s
Nov 29 09:03:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:09.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:10 compute-2 ceph-mon[77138]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 341 B/s wr, 131 op/s
Nov 29 09:03:10 compute-2 sshd-session[337377]: Invalid user eth from 45.148.10.240 port 55872
Nov 29 09:03:11 compute-2 sshd-session[337377]: Connection closed by invalid user eth 45.148.10.240 port 55872 [preauth]
Nov 29 09:03:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:12.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:12 compute-2 nova_compute[232428]: 2025-11-29 09:03:12.851 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:12 compute-2 nova_compute[232428]: 2025-11-29 09:03:12.903 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:13 compute-2 ceph-mon[77138]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 09:03:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:13.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:14 compute-2 ceph-mon[77138]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 09:03:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:14 compute-2 podman[337381]: 2025-11-29 09:03:14.713737792 +0000 UTC m=+0.108160605 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:03:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:16.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:17 compute-2 ceph-mon[77138]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 09:03:17 compute-2 nova_compute[232428]: 2025-11-29 09:03:17.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:17 compute-2 nova_compute[232428]: 2025-11-29 09:03:17.906 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:18 compute-2 ceph-mon[77138]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 09:03:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:19 compute-2 nova_compute[232428]: 2025-11-29 09:03:19.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:19 compute-2 nova_compute[232428]: 2025-11-29 09:03:19.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 09:03:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:19.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:19 compute-2 sudo[337404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:19 compute-2 sudo[337404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:19 compute-2 sudo[337404]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:19 compute-2 sudo[337429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:03:19 compute-2 sudo[337429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:19 compute-2 sudo[337429]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:20 compute-2 sudo[337455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:20 compute-2 sudo[337455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:20 compute-2 sudo[337455]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:20 compute-2 sudo[337480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:03:20 compute-2 sudo[337480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:20 compute-2 sudo[337480]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:20 compute-2 ceph-mon[77138]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 09:03:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:21.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:21 compute-2 sudo[337537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:21 compute-2 sudo[337537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:21 compute-2 sudo[337537]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:21 compute-2 sudo[337562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:21 compute-2 sudo[337562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:21 compute-2 sudo[337562]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:03:22 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:03:22 compute-2 ceph-mon[77138]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:22 compute-2 nova_compute[232428]: 2025-11-29 09:03:22.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:22 compute-2 nova_compute[232428]: 2025-11-29 09:03:22.907 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:23.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:03:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:03:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:25 compute-2 ceph-mon[77138]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:27 compute-2 ceph-mon[77138]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:27 compute-2 nova_compute[232428]: 2025-11-29 09:03:27.508 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:27 compute-2 podman[337590]: 2025-11-29 09:03:27.693263816 +0000 UTC m=+0.098988060 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 29 09:03:27 compute-2 nova_compute[232428]: 2025-11-29 09:03:27.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:27 compute-2 nova_compute[232428]: 2025-11-29 09:03:27.909 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:28 compute-2 nova_compute[232428]: 2025-11-29 09:03:28.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:28 compute-2 nova_compute[232428]: 2025-11-29 09:03:28.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 09:03:28 compute-2 nova_compute[232428]: 2025-11-29 09:03:28.220 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 09:03:28 compute-2 ceph-mon[77138]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:28.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:29 compute-2 sudo[337619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:29 compute-2 sudo[337619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:29 compute-2 sudo[337619]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:29 compute-2 sudo[337644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:03:29 compute-2 sudo[337644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:29 compute-2 sudo[337644]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:03:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:03:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 09:03:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 09:03:30 compute-2 ceph-mon[77138]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:32 compute-2 nova_compute[232428]: 2025-11-29 09:03:32.220 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:32.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:32 compute-2 ceph-mon[77138]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:32 compute-2 nova_compute[232428]: 2025-11-29 09:03:32.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:32 compute-2 nova_compute[232428]: 2025-11-29 09:03:32.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:33 compute-2 podman[337671]: 2025-11-29 09:03:33.68555199 +0000 UTC m=+0.083271691 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:03:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:34.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:34 compute-2 ceph-mon[77138]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:36.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:37 compute-2 nova_compute[232428]: 2025-11-29 09:03:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:37 compute-2 ceph-mon[77138]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:37 compute-2 nova_compute[232428]: 2025-11-29 09:03:37.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:37 compute-2 nova_compute[232428]: 2025-11-29 09:03:37.914 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:38.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:38 compute-2 ceph-mon[77138]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:38 compute-2 nova_compute[232428]: 2025-11-29 09:03:38.716 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:38 compute-2 nova_compute[232428]: 2025-11-29 09:03:38.717 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:38 compute-2 nova_compute[232428]: 2025-11-29 09:03:38.927 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.021 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.021 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.031 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.032 232432 INFO nova.compute.claims [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Claim successful on node compute-2.ctlplane.example.com
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:03:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.250 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.250 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.302 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:03:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4038179995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.764 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:39 compute-2 nova_compute[232428]: 2025-11-29 09:03:39.774 232432 DEBUG nova.compute.provider_tree [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.010 232432 DEBUG nova.scheduler.client.report [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.468 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.469 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 09:03:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.517 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.518 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.540 232432 INFO nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.568 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.679 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.680 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.681 232432 INFO nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Creating image(s)
Nov 29 09:03:40 compute-2 ceph-mon[77138]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4038179995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.720 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.754 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.785 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.789 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.885 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.886 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.886 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.887 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.925 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:40 compute-2 nova_compute[232428]: 2025-11-29 09:03:40.930 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.163 232432 DEBUG nova.policy [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd65747cb23f34d0b82fe6b4b04e5930d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '79610f2eca54482d94e23676a7ebcbd7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.355 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.460 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] resizing rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.597 232432 DEBUG nova.objects.instance [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.616 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.617 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Ensure instance console log exists: /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.617 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.618 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:41 compute-2 nova_compute[232428]: 2025-11-29 09:03:41.618 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:41 compute-2 sudo[337882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:41 compute-2 sudo[337882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:41 compute-2 sudo[337882]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:41 compute-2 sudo[337907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:03:41 compute-2 sudo[337907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:03:41 compute-2 sudo[337907]: pam_unix(sudo:session): session closed for user root
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.495 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.496 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.496 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.497 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.497 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:42 compute-2 ceph-mon[77138]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.866 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.899 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Successfully created port: 2810d337-b37c-4d01-b817-9dcca807fac3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 09:03:42 compute-2 nova_compute[232428]: 2025-11-29 09:03:42.916 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:03:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3656122871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.007 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.295 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.296 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4129MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.296 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.296 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.366 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.367 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.367 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.415 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3656122871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:03:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2136004400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.941 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.951 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.972 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.996 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:03:43 compute-2 nova_compute[232428]: 2025-11-29 09:03:43.997 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.186 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Successfully updated port: 2810d337-b37c-4d01-b817-9dcca807fac3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.214 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.215 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquired lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.215 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.344 232432 DEBUG nova.compute.manager [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-changed-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.345 232432 DEBUG nova.compute.manager [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Refreshing instance network info cache due to event network-changed-2810d337-b37c-4d01-b817-9dcca807fac3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.346 232432 DEBUG oslo_concurrency.lockutils [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:03:44 compute-2 nova_compute[232428]: 2025-11-29 09:03:44.452 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 09:03:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:44 compute-2 ceph-mon[77138]: pgmap v3855: 305 pgs: 305 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 819 KiB/s wr, 0 op/s
Nov 29 09:03:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2136004400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:45 compute-2 podman[337978]: 2025-11-29 09:03:45.723953837 +0000 UTC m=+0.120335034 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 09:03:45 compute-2 nova_compute[232428]: 2025-11-29 09:03:45.997 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:45 compute-2 nova_compute[232428]: 2025-11-29 09:03:45.998 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:45 compute-2 nova_compute[232428]: 2025-11-29 09:03:45.998 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:03:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1274209132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.173 232432 DEBUG nova.network.neutron [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updating instance_info_cache with network_info: [{"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.199 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Releasing lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.199 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Instance network_info: |[{"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.199 232432 DEBUG oslo_concurrency.lockutils [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.200 232432 DEBUG nova.network.neutron [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Refreshing network info cache for port 2810d337-b37c-4d01-b817-9dcca807fac3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.204 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Start _get_guest_xml network_info=[{"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.209 232432 WARNING nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.214 232432 DEBUG nova.virt.libvirt.host [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.214 232432 DEBUG nova.virt.libvirt.host [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.218 232432 DEBUG nova.virt.libvirt.host [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.219 232432 DEBUG nova.virt.libvirt.host [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.220 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.221 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.221 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.221 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.222 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.222 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.222 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.223 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.223 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.223 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.224 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.224 232432 DEBUG nova.virt.hardware [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.228 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:03:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2237881868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.692 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.728 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:46 compute-2 nova_compute[232428]: 2025-11-29 09:03:46.731 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:03:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1299967345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:03:47 compute-2 ceph-mon[77138]: pgmap v3856: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:03:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1394626300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2237881868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.634 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.902s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.637 232432 DEBUG nova.virt.libvirt.vif [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-204401341',display_name='tempest-TestServerBasicOps-server-204401341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-204401341',id=214,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx6n+PI3YQV/iRM6yYEVLXRsiNP74Tr/idf+zYE3VawYGc2tNTw0Y/1aQwtBHfFztylc8tyahBU/qh1ONVHir1BOSsGLoylp2T/q0uRJh3JFrnapjCar9s75C1s306pwg==',key_name='tempest-TestServerBasicOps-751477061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79610f2eca54482d94e23676a7ebcbd7',ramdisk_id='',reservation_id='r-dwfkteve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-927972909',owner_user_name='tempest-TestServerBasicOps-927972909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:03:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65747cb23f34d0b82fe6b4b04e5930d',uuid=86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.638 232432 DEBUG nova.network.os_vif_util [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converting VIF {"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.639 232432 DEBUG nova.network.os_vif_util [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.641 232432 DEBUG nova.objects.instance [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.774 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <uuid>86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d</uuid>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <name>instance-000000d6</name>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <metadata>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:name>tempest-TestServerBasicOps-server-204401341</nova:name>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 09:03:46</nova:creationTime>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:user uuid="d65747cb23f34d0b82fe6b4b04e5930d">tempest-TestServerBasicOps-927972909-project-member</nova:user>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:project uuid="79610f2eca54482d94e23676a7ebcbd7">tempest-TestServerBasicOps-927972909</nova:project>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <nova:port uuid="2810d337-b37c-4d01-b817-9dcca807fac3">
Nov 29 09:03:47 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </metadata>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <system>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="serial">86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="uuid">86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </system>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <os>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </os>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <features>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <apic/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </features>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </clock>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </cpu>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   <devices>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk">
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </source>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config">
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </source>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:03:47 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:b5:1a:a1"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <target dev="tap2810d337-b3"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </interface>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/console.log" append="off"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </serial>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <video>
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </video>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </rng>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 09:03:47 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 09:03:47 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 09:03:47 compute-2 nova_compute[232428]:   </devices>
Nov 29 09:03:47 compute-2 nova_compute[232428]: </domain>
Nov 29 09:03:47 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.776 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Preparing to wait for external event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.777 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.778 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.778 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.780 232432 DEBUG nova.virt.libvirt.vif [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-204401341',display_name='tempest-TestServerBasicOps-server-204401341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-204401341',id=214,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx6n+PI3YQV/iRM6yYEVLXRsiNP74Tr/idf+zYE3VawYGc2tNTw0Y/1aQwtBHfFztylc8tyahBU/qh1ONVHir1BOSsGLoylp2T/q0uRJh3JFrnapjCar9s75C1s306pwg==',key_name='tempest-TestServerBasicOps-751477061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79610f2eca54482d94e23676a7ebcbd7',ramdisk_id='',reservation_id='r-dwfkteve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-927972909',owner_user_name='tempest-TestServerBasicOps-927972909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:03:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65747cb23f34d0b82fe6b4b04e5930d',uuid=86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.780 232432 DEBUG nova.network.os_vif_util [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converting VIF {"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.782 232432 DEBUG nova.network.os_vif_util [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.782 232432 DEBUG os_vif [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.784097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027784212, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2261, "num_deletes": 251, "total_data_size": 5555916, "memory_usage": 5638472, "flush_reason": "Manual Compaction"}
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.784 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.785 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.786 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.790 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.791 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2810d337-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.791 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2810d337-b3, col_values=(('external_ids', {'iface-id': '2810d337-b37c-4d01-b817-9dcca807fac3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:1a:a1', 'vm-uuid': '86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:47 compute-2 NetworkManager[48993]: <info>  [1764407027.7949] manager: (tap2810d337-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.795 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.805 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.807 232432 INFO os_vif [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3')
Nov 29 09:03:47 compute-2 nova_compute[232428]: 2025-11-29 09:03:47.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027958671, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 3632560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83588, "largest_seqno": 85844, "table_properties": {"data_size": 3623349, "index_size": 5768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18720, "raw_average_key_size": 20, "raw_value_size": 3605074, "raw_average_value_size": 3905, "num_data_blocks": 253, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406818, "oldest_key_time": 1764406818, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 174674 microseconds, and 14211 cpu microseconds.
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.958753) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 3632560 bytes OK
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.958815) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.964738) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.964865) EVENT_LOG_v1 {"time_micros": 1764407027964850, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.964897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 5546054, prev total WAL file size 5546054, number of live WAL files 2.
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.968241) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(3547KB)], [171(10MB)]
Nov 29 09:03:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027968384, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 14462334, "oldest_snapshot_seqno": -1}
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.016 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.016 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.016 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] No VIF found with MAC fa:16:3e:b5:1a:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.017 232432 INFO nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Using config drive
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.052 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.196 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10960 keys, 12466234 bytes, temperature: kUnknown
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407028202448, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 12466234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12397855, "index_size": 39850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290211, "raw_average_key_size": 26, "raw_value_size": 12208243, "raw_average_value_size": 1113, "num_data_blocks": 1503, "num_entries": 10960, "num_filter_entries": 10960, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.207 232432 DEBUG nova.network.neutron [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updated VIF entry in instance network info cache for port 2810d337-b37c-4d01-b817-9dcca807fac3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.208 232432 DEBUG nova.network.neutron [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updating instance_info_cache with network_info: [{"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.203015) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12466234 bytes
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.212965) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.8 rd, 53.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 11481, records dropped: 521 output_compression: NoCompression
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.213002) EVENT_LOG_v1 {"time_micros": 1764407028212986, "job": 110, "event": "compaction_finished", "compaction_time_micros": 234172, "compaction_time_cpu_micros": 55312, "output_level": 6, "num_output_files": 1, "total_output_size": 12466234, "num_input_records": 11481, "num_output_records": 10960, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407028214465, "job": 110, "event": "table_file_deletion", "file_number": 173}
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407028218064, "job": 110, "event": "table_file_deletion", "file_number": 171}
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:47.968110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.218221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.218233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.218237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.218240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:03:48.218243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.276 232432 DEBUG oslo_concurrency.lockutils [req-1724af5f-c70c-4a58-af1c-1ecd520f41d1 req-edd80d5e-7f9a-4cdb-878d-b14648059de8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.468 232432 INFO nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Creating config drive at /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.484 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabtn2rar execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:48 compute-2 nova_compute[232428]: 2025-11-29 09:03:48.645 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabtn2rar" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1299967345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:03:49 compute-2 ceph-mon[77138]: pgmap v3857: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:03:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:49 compute-2 nova_compute[232428]: 2025-11-29 09:03:49.307 232432 DEBUG nova.storage.rbd_utils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] rbd image 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:03:49 compute-2 nova_compute[232428]: 2025-11-29 09:03:49.312 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.145 232432 DEBUG oslo_concurrency.processutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.147 232432 INFO nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Deleting local config drive /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d/disk.config because it was imported into RBD.
Nov 29 09:03:50 compute-2 kernel: tap2810d337-b3: entered promiscuous mode
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.2302] manager: (tap2810d337-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Nov 29 09:03:50 compute-2 ovn_controller[134375]: 2025-11-29T09:03:50Z|00996|binding|INFO|Claiming lport 2810d337-b37c-4d01-b817-9dcca807fac3 for this chassis.
Nov 29 09:03:50 compute-2 ovn_controller[134375]: 2025-11-29T09:03:50Z|00997|binding|INFO|2810d337-b37c-4d01-b817-9dcca807fac3: Claiming fa:16:3e:b5:1a:a1 10.100.0.3
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.231 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.242 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.246 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.263 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1a:a1 10.100.0.3'], port_security=['fa:16:3e:b5:1a:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79610f2eca54482d94e23676a7ebcbd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ce325c0-90f9-4c25-9ef7-257c16e3c600 c3a4a111-3a79-4575-a4fb-882b2fbe02eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23e6d8e-8712-4cef-b29c-c0c82bca36b8, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=2810d337-b37c-4d01-b817-9dcca807fac3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.264 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 2810d337-b37c-4d01-b817-9dcca807fac3 in datapath 92edf92c-e5aa-4ef8-81b8-f370f11ed058 bound to our chassis
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.266 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92edf92c-e5aa-4ef8-81b8-f370f11ed058
Nov 29 09:03:50 compute-2 systemd-machined[194747]: New machine qemu-102-instance-000000d6.
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.283 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[1f32516c-f215-4acd-be97-828775cc300b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.285 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92edf92c-e1 in ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.288 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92edf92c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.288 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2e41cfc9-88ed-42a7-a7b3-3aa7129c933b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.289 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1f92f4-6824-41a5-8579-fbd800779196]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.306 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e96f04-8517-48da-b442-d49f10b96f14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 systemd[1]: Started Virtual Machine qemu-102-instance-000000d6.
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 ovn_controller[134375]: 2025-11-29T09:03:50Z|00998|binding|INFO|Setting lport 2810d337-b37c-4d01-b817-9dcca807fac3 ovn-installed in OVS
Nov 29 09:03:50 compute-2 ovn_controller[134375]: 2025-11-29T09:03:50Z|00999|binding|INFO|Setting lport 2810d337-b37c-4d01-b817-9dcca807fac3 up in Southbound
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.326 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[85b7f6eb-7624-45a0-87d2-b50dabd33832]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 systemd-udevd[338140]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.3490] device (tap2810d337-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.3502] device (tap2810d337-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.366 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1122c9-bed7-4748-b010-bf6ac3181412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.3731] manager: (tap92edf92c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/478)
Nov 29 09:03:50 compute-2 systemd-udevd[338144]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.374 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8c3ac9-526f-485b-9d3f-33156787c3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.416 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[787f3427-5c8a-4700-b786-c8f05e90ed5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.419 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[efd011a2-5b93-451e-820d-b307f3a64f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.4449] device (tap92edf92c-e0): carrier: link connected
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.452 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f9be12c8-aa93-467f-88fa-7cc6279630ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.472 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[abea0aff-ee40-43b2-a96d-849e905bf7b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92edf92c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:4a:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999671, 'reachable_time': 29376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338170, 'error': None, 'target': 'ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.490 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4f200bb2-e196-406e-b3ea-166d3c8defbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:4a8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999671, 'tstamp': 999671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338171, 'error': None, 'target': 'ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.510 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[acf84de4-6caa-4600-b1d6-97bc458404e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92edf92c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:4a:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999671, 'reachable_time': 29376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338172, 'error': None, 'target': 'ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.548 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[13d4829d-8d7b-4ff0-b785-a2c1ee660142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.627 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[33392440-fbac-49d2-8b12-8606fae75c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.629 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92edf92c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.629 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.630 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92edf92c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:50 compute-2 kernel: tap92edf92c-e0: entered promiscuous mode
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.631 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 NetworkManager[48993]: <info>  [1764407030.6334] manager: (tap92edf92c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.635 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92edf92c-e0, col_values=(('external_ids', {'iface-id': '4563b2cf-66ae-409b-bddc-bd9894feadf4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 ovn_controller[134375]: 2025-11-29T09:03:50Z|01000|binding|INFO|Releasing lport 4563b2cf-66ae-409b-bddc-bd9894feadf4 from this chassis (sb_readonly=0)
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.639 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92edf92c-e5aa-4ef8-81b8-f370f11ed058.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92edf92c-e5aa-4ef8-81b8-f370f11ed058.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 09:03:50 compute-2 nova_compute[232428]: 2025-11-29 09:03:50.650 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.649 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bac668f0-2b86-4d74-81ab-7d1bdb8d9283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.651 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: global
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-92edf92c-e5aa-4ef8-81b8-f370f11ed058
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/92edf92c-e5aa-4ef8-81b8-f370f11ed058.pid.haproxy
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 92edf92c-e5aa-4ef8-81b8-f370f11ed058
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 09:03:50 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:03:50.652 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'env', 'PROCESS_TAG=haproxy-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92edf92c-e5aa-4ef8-81b8-f370f11ed058.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 09:03:51 compute-2 podman[338204]: 2025-11-29 09:03:51.043119248 +0000 UTC m=+0.030868321 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 09:03:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:51 compute-2 ceph-mon[77138]: pgmap v3858: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 09:03:51 compute-2 nova_compute[232428]: 2025-11-29 09:03:51.501 232432 DEBUG nova.compute.manager [req-569ce817-9549-459d-952b-9ed290745606 req-ff20b740-49f8-4a45-87c5-8c3e274df93d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:03:51 compute-2 nova_compute[232428]: 2025-11-29 09:03:51.502 232432 DEBUG oslo_concurrency.lockutils [req-569ce817-9549-459d-952b-9ed290745606 req-ff20b740-49f8-4a45-87c5-8c3e274df93d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:51 compute-2 nova_compute[232428]: 2025-11-29 09:03:51.503 232432 DEBUG oslo_concurrency.lockutils [req-569ce817-9549-459d-952b-9ed290745606 req-ff20b740-49f8-4a45-87c5-8c3e274df93d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:51 compute-2 nova_compute[232428]: 2025-11-29 09:03:51.503 232432 DEBUG oslo_concurrency.lockutils [req-569ce817-9549-459d-952b-9ed290745606 req-ff20b740-49f8-4a45-87c5-8c3e274df93d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:51 compute-2 nova_compute[232428]: 2025-11-29 09:03:51.504 232432 DEBUG nova.compute.manager [req-569ce817-9549-459d-952b-9ed290745606 req-ff20b740-49f8-4a45-87c5-8c3e274df93d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Processing event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 09:03:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:52 compute-2 podman[338204]: 2025-11-29 09:03:52.06021634 +0000 UTC m=+1.047965403 container create 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:03:52 compute-2 systemd[1]: Started libpod-conmon-93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149.scope.
Nov 29 09:03:52 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:03:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb0f401267ee0bb323a5c92c18e8c9126f90c2440176cfd3c53ea4faa916357/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 09:03:52 compute-2 ceph-mon[77138]: pgmap v3859: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 09:03:52 compute-2 podman[338204]: 2025-11-29 09:03:52.252280044 +0000 UTC m=+1.240029087 container init 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 09:03:52 compute-2 podman[338204]: 2025-11-29 09:03:52.26534949 +0000 UTC m=+1.253098523 container start 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 09:03:52 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [NOTICE]   (338264) : New worker (338267) forked
Nov 29 09:03:52 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [NOTICE]   (338264) : Loading success.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.329 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.330 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407032.3286357, 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.330 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] VM Started (Lifecycle Event)
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.335 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.338 232432 INFO nova.virt.libvirt.driver [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Instance spawned successfully.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.339 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.376 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.380 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.380 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.381 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.381 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.382 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.382 232432 DEBUG nova.virt.libvirt.driver [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.386 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.437 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.437 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407032.3298433, 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.438 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] VM Paused (Lifecycle Event)
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.475 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.479 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407032.3338325, 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.479 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] VM Resumed (Lifecycle Event)
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.507 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.509 232432 INFO nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Took 11.83 seconds to spawn the instance on the hypervisor.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.510 232432 DEBUG nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.515 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:03:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.545 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.578 232432 INFO nova.compute.manager [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Took 13.59 seconds to build instance.
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.598 232432 DEBUG oslo_concurrency.lockutils [None req-70aa794b-3677-4db5-8a3b-0c57a7f8a287 d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:52 compute-2 nova_compute[232428]: 2025-11-29 09:03:52.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.596 232432 DEBUG nova.compute.manager [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.596 232432 DEBUG oslo_concurrency.lockutils [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.597 232432 DEBUG oslo_concurrency.lockutils [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.597 232432 DEBUG oslo_concurrency.lockutils [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.597 232432 DEBUG nova.compute.manager [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] No waiting events found dispatching network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:03:53 compute-2 nova_compute[232428]: 2025-11-29 09:03:53.598 232432 WARNING nova.compute.manager [req-b9af16c2-b5b2-4f40-a690-aad84d52822a req-68c030ba-b7a0-4c6f-bb08-5ebe6d748719 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received unexpected event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 for instance with vm_state active and task_state None.
Nov 29 09:03:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:54 compute-2 ceph-mon[77138]: pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 09:03:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:55.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.271 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:56 compute-2 NetworkManager[48993]: <info>  [1764407036.2730] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Nov 29 09:03:56 compute-2 NetworkManager[48993]: <info>  [1764407036.2741] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.386 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:56 compute-2 ovn_controller[134375]: 2025-11-29T09:03:56Z|01001|binding|INFO|Releasing lport 4563b2cf-66ae-409b-bddc-bd9894feadf4 from this chassis (sb_readonly=0)
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.394 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:03:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:56.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:03:56 compute-2 ceph-mon[77138]: pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1010 KiB/s wr, 81 op/s
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.800 232432 DEBUG nova.compute.manager [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-changed-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.800 232432 DEBUG nova.compute.manager [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Refreshing instance network info cache due to event network-changed-2810d337-b37c-4d01-b817-9dcca807fac3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.801 232432 DEBUG oslo_concurrency.lockutils [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.801 232432 DEBUG oslo_concurrency.lockutils [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:03:56 compute-2 nova_compute[232428]: 2025-11-29 09:03:56.801 232432 DEBUG nova.network.neutron [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Refreshing network info cache for port 2810d337-b37c-4d01-b817-9dcca807fac3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:03:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:03:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1883244047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/620365454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:03:57 compute-2 nova_compute[232428]: 2025-11-29 09:03:57.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:57 compute-2 nova_compute[232428]: 2025-11-29 09:03:57.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:03:58 compute-2 nova_compute[232428]: 2025-11-29 09:03:58.520 232432 DEBUG nova.network.neutron [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updated VIF entry in instance network info cache for port 2810d337-b37c-4d01-b817-9dcca807fac3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:03:58 compute-2 nova_compute[232428]: 2025-11-29 09:03:58.520 232432 DEBUG nova.network.neutron [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updating instance_info_cache with network_info: [{"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:03:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:03:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:03:58 compute-2 nova_compute[232428]: 2025-11-29 09:03:58.542 232432 DEBUG oslo_concurrency.lockutils [req-808a32ea-39da-4cf9-8a41-a6078f54e97d req-095d24a0-95e8-4aea-97a2-2240d94300e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:03:58 compute-2 podman[338280]: 2025-11-29 09:03:58.69217655 +0000 UTC m=+0.099654390 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 09:03:58 compute-2 ceph-mon[77138]: pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 55 op/s
Nov 29 09:03:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:03:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:03:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:00 compute-2 ceph-mon[77138]: pgmap v3863: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 09:04:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:01 compute-2 sudo[338310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:01 compute-2 sudo[338310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:01 compute-2 sudo[338310]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:01 compute-2 sudo[338335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:01 compute-2 sudo[338335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:01 compute-2 sudo[338335]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:02 compute-2 nova_compute[232428]: 2025-11-29 09:04:02.799 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:02 compute-2 nova_compute[232428]: 2025-11-29 09:04:02.926 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:03 compute-2 ceph-mon[77138]: pgmap v3864: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 09:04:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:03.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:03.370 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:03.371 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:03.372 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:04 compute-2 ceph-mon[77138]: pgmap v3865: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Nov 29 09:04:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:04 compute-2 podman[338362]: 2025-11-29 09:04:04.662243715 +0000 UTC m=+0.058966685 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 09:04:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:05.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:06.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:06 compute-2 ceph-mon[77138]: pgmap v3866: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 432 KiB/s wr, 63 op/s
Nov 29 09:04:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:04:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:07.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:04:07 compute-2 nova_compute[232428]: 2025-11-29 09:04:07.803 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:07 compute-2 nova_compute[232428]: 2025-11-29 09:04:07.929 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:09 compute-2 ceph-mon[77138]: pgmap v3867: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 432 KiB/s wr, 24 op/s
Nov 29 09:04:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:09.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:10 compute-2 ovn_controller[134375]: 2025-11-29T09:04:10Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:1a:a1 10.100.0.3
Nov 29 09:04:10 compute-2 ovn_controller[134375]: 2025-11-29T09:04:10Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:1a:a1 10.100.0.3
Nov 29 09:04:10 compute-2 ceph-mon[77138]: pgmap v3868: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 783 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Nov 29 09:04:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:11.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:12 compute-2 nova_compute[232428]: 2025-11-29 09:04:12.806 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:12 compute-2 nova_compute[232428]: 2025-11-29 09:04:12.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:13 compute-2 ceph-mon[77138]: pgmap v3869: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 09:04:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:13.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:14 compute-2 ceph-mon[77138]: pgmap v3870: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 09:04:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:16 compute-2 podman[338387]: 2025-11-29 09:04:16.703279013 +0000 UTC m=+0.100402133 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 09:04:16 compute-2 ceph-mon[77138]: pgmap v3871: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 09:04:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:17.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:17 compute-2 nova_compute[232428]: 2025-11-29 09:04:17.817 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:17 compute-2 nova_compute[232428]: 2025-11-29 09:04:17.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:18.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:19 compute-2 ceph-mon[77138]: pgmap v3872: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Nov 29 09:04:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:19.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:21 compute-2 ceph-mon[77138]: pgmap v3873: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Nov 29 09:04:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:22 compute-2 sudo[338411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:22 compute-2 sudo[338411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:22 compute-2 sudo[338411]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:22 compute-2 ceph-mon[77138]: pgmap v3874: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 180 KiB/s rd, 637 KiB/s wr, 33 op/s
Nov 29 09:04:22 compute-2 sudo[338436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:22 compute-2 sudo[338436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:22 compute-2 sudo[338436]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:22 compute-2 nova_compute[232428]: 2025-11-29 09:04:22.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:22 compute-2 nova_compute[232428]: 2025-11-29 09:04:22.937 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:24 compute-2 sshd-session[338462]: Invalid user support from 78.128.112.74 port 41510
Nov 29 09:04:24 compute-2 sshd-session[338462]: Connection closed by invalid user support 78.128.112.74 port 41510 [preauth]
Nov 29 09:04:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:24 compute-2 ceph-mon[77138]: pgmap v3875: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 39 KiB/s wr, 5 op/s
Nov 29 09:04:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:26 compute-2 ovn_controller[134375]: 2025-11-29T09:04:26Z|01002|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Nov 29 09:04:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:26 compute-2 ceph-mon[77138]: pgmap v3876: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Nov 29 09:04:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:27 compute-2 nova_compute[232428]: 2025-11-29 09:04:27.823 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:27 compute-2 nova_compute[232428]: 2025-11-29 09:04:27.938 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:04:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593430029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:04:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:04:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593430029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:04:28 compute-2 nova_compute[232428]: 2025-11-29 09:04:28.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:28.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:28 compute-2 ceph-mon[77138]: pgmap v3877: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 09:04:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/593430029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:04:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/593430029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:04:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:29 compute-2 sudo[338466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:29 compute-2 sudo[338466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:29 compute-2 sudo[338466]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:29 compute-2 sudo[338492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:04:29 compute-2 sudo[338492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:29 compute-2 sudo[338492]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:29 compute-2 podman[338490]: 2025-11-29 09:04:29.514268375 +0000 UTC m=+0.115215955 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 09:04:29 compute-2 sudo[338534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:29 compute-2 sudo[338534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:29 compute-2 sudo[338534]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:29 compute-2 sudo[338565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:04:29 compute-2 sudo[338565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:29.688 143912 DEBUG eventlet.wsgi.server [-] (143912) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:29.690 143912 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: Accept: */*
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: Connection: close
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: Content-Type: text/plain
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: Host: 169.254.169.254
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: User-Agent: curl/7.84.0
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: X-Forwarded-For: 10.100.0.3
Nov 29 09:04:29 compute-2 ovn_metadata_agent[143796]: X-Ovn-Network-Id: 92edf92c-e5aa-4ef8-81b8-f370f11ed058 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 29 09:04:30 compute-2 sudo[338565]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:30.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:30 compute-2 ceph-mon[77138]: pgmap v3878: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:04:30 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:04:31 compute-2 haproxy-metadata-proxy-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338267]: 10.100.0.3:51772 [29/Nov/2025:09:04:29.687] listener listener/metadata 0/0/0/1420/1420 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.107 143912 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.108 143912 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4177949
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.222 143912 DEBUG eventlet.wsgi.server [-] (143912) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.223 143912 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: Accept: */*
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: Connection: close
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: Content-Length: 100
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: Content-Type: application/x-www-form-urlencoded
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: Host: 169.254.169.254
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: User-Agent: curl/7.84.0
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: X-Forwarded-For: 10.100.0.3
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: X-Ovn-Network-Id: 92edf92c-e5aa-4ef8-81b8-f370f11ed058
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 29 09:04:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.547 143912 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 29 09:04:31 compute-2 haproxy-metadata-proxy-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338267]: 10.100.0.3:51780 [29/Nov/2025:09:04:31.221] listener listener/metadata 0/0/0/326/326 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 29 09:04:31 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:31.548 143912 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3252926
Nov 29 09:04:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:32 compute-2 nova_compute[232428]: 2025-11-29 09:04:32.825 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:32 compute-2 ceph-mon[77138]: pgmap v3879: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 09:04:32 compute-2 nova_compute[232428]: 2025-11-29 09:04:32.942 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.287 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.288 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.288 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.288 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.288 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.290 232432 INFO nova.compute.manager [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Terminating instance
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.291 232432 DEBUG nova.compute.manager [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 09:04:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:33 compute-2 kernel: tap2810d337-b3 (unregistering): left promiscuous mode
Nov 29 09:04:33 compute-2 NetworkManager[48993]: <info>  [1764407073.3448] device (tap2810d337-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 09:04:33 compute-2 ovn_controller[134375]: 2025-11-29T09:04:33Z|01003|binding|INFO|Releasing lport 2810d337-b37c-4d01-b817-9dcca807fac3 from this chassis (sb_readonly=0)
Nov 29 09:04:33 compute-2 ovn_controller[134375]: 2025-11-29T09:04:33Z|01004|binding|INFO|Setting lport 2810d337-b37c-4d01-b817-9dcca807fac3 down in Southbound
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.353 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 ovn_controller[134375]: 2025-11-29T09:04:33Z|01005|binding|INFO|Removing iface tap2810d337-b3 ovn-installed in OVS
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:33.360 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1a:a1 10.100.0.3'], port_security=['fa:16:3e:b5:1a:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79610f2eca54482d94e23676a7ebcbd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ce325c0-90f9-4c25-9ef7-257c16e3c600 c3a4a111-3a79-4575-a4fb-882b2fbe02eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23e6d8e-8712-4cef-b29c-c0c82bca36b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=2810d337-b37c-4d01-b817-9dcca807fac3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:04:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:33.361 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 2810d337-b37c-4d01-b817-9dcca807fac3 in datapath 92edf92c-e5aa-4ef8-81b8-f370f11ed058 unbound from our chassis
Nov 29 09:04:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:33.362 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92edf92c-e5aa-4ef8-81b8-f370f11ed058, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 09:04:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:33.363 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d05e5b2f-fa06-4ca0-998f-549ec89411ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:33 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:33.363 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058 namespace which is not needed anymore
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.377 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Nov 29 09:04:33 compute-2 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d6.scope: Consumed 16.682s CPU time.
Nov 29 09:04:33 compute-2 systemd-machined[194747]: Machine qemu-102-instance-000000d6 terminated.
Nov 29 09:04:33 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [NOTICE]   (338264) : haproxy version is 2.8.14-c23fe91
Nov 29 09:04:33 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [NOTICE]   (338264) : path to executable is /usr/sbin/haproxy
Nov 29 09:04:33 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [WARNING]  (338264) : Exiting Master process...
Nov 29 09:04:33 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [ALERT]    (338264) : Current worker (338267) exited with code 143 (Terminated)
Nov 29 09:04:33 compute-2 neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058[338255]: [WARNING]  (338264) : All workers exited. Exiting... (0)
Nov 29 09:04:33 compute-2 systemd[1]: libpod-93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149.scope: Deactivated successfully.
Nov 29 09:04:33 compute-2 podman[338648]: 2025-11-29 09:04:33.494241785 +0000 UTC m=+0.044746122 container died 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:04:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149-userdata-shm.mount: Deactivated successfully.
Nov 29 09:04:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-3eb0f401267ee0bb323a5c92c18e8c9126f90c2440176cfd3c53ea4faa916357-merged.mount: Deactivated successfully.
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.534 232432 INFO nova.virt.libvirt.driver [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Instance destroyed successfully.
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.535 232432 DEBUG nova.objects.instance [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lazy-loading 'resources' on Instance uuid 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:04:33 compute-2 podman[338648]: 2025-11-29 09:04:33.538655846 +0000 UTC m=+0.089160193 container cleanup 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.550 232432 DEBUG nova.virt.libvirt.vif [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-204401341',display_name='tempest-TestServerBasicOps-server-204401341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-204401341',id=214,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx6n+PI3YQV/iRM6yYEVLXRsiNP74Tr/idf+zYE3VawYGc2tNTw0Y/1aQwtBHfFztylc8tyahBU/qh1ONVHir1BOSsGLoylp2T/q0uRJh3JFrnapjCar9s75C1s306pwg==',key_name='tempest-TestServerBasicOps-751477061',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:03:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79610f2eca54482d94e23676a7ebcbd7',ramdisk_id='',reservation_id='r-dwfkteve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-927972909',owner_user_name='tempest-TestServerBasicOps-927972909-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:04:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65747cb23f34d0b82fe6b4b04e5930d',uuid=86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": 
"fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.550 232432 DEBUG nova.network.os_vif_util [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converting VIF {"id": "2810d337-b37c-4d01-b817-9dcca807fac3", "address": "fa:16:3e:b5:1a:a1", "network": {"id": "92edf92c-e5aa-4ef8-81b8-f370f11ed058", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2057550370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79610f2eca54482d94e23676a7ebcbd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2810d337-b3", "ovs_interfaceid": "2810d337-b37c-4d01-b817-9dcca807fac3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.551 232432 DEBUG nova.network.os_vif_util [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.551 232432 DEBUG os_vif [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.553 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2810d337-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.555 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.556 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:33 compute-2 nova_compute[232428]: 2025-11-29 09:04:33.559 232432 INFO os_vif [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:1a:a1,bridge_name='br-int',has_traffic_filtering=True,id=2810d337-b37c-4d01-b817-9dcca807fac3,network=Network(92edf92c-e5aa-4ef8-81b8-f370f11ed058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2810d337-b3')
Nov 29 09:04:33 compute-2 systemd[1]: libpod-conmon-93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149.scope: Deactivated successfully.
Nov 29 09:04:34 compute-2 nova_compute[232428]: 2025-11-29 09:04:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:35.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:35 compute-2 ceph-mon[77138]: pgmap v3880: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 09:04:35 compute-2 podman[338687]: 2025-11-29 09:04:35.578279771 +0000 UTC m=+2.010031375 container remove 93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.588 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[429bdcfc-6468-4418-a9d4-91296fcb5894]: (4, ('Sat Nov 29 09:04:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058 (93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149)\n93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149\nSat Nov 29 09:04:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058 (93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149)\n93acd4ae16aedb3b1842a76c5d27a18b90620dbfa7eae1ba96b8316751d14149\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.590 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a4d9b6-7d33-4028-b975-82419224160c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.592 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92edf92c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:04:35 compute-2 kernel: tap92edf92c-e0: left promiscuous mode
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.619 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.620 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.623 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[88b82cd4-49f0-4a86-bc08-0f5a3b76586f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.640 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5b73a8ae-60c8-45ee-b3e6-20252c578f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.642 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8fd51a-d130-432c-8fb6-5e72a1caab1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.672 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a79250-36f0-44ba-a980-f678597746fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999662, 'reachable_time': 30405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338734, 'error': None, 'target': 'ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 systemd[1]: run-netns-ovnmeta\x2d92edf92c\x2de5aa\x2d4ef8\x2d81b8\x2df370f11ed058.mount: Deactivated successfully.
Nov 29 09:04:35 compute-2 podman[338719]: 2025-11-29 09:04:35.677827227 +0000 UTC m=+0.076640975 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.676 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92edf92c-e5aa-4ef8-81b8-f370f11ed058 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 09:04:35 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:35.677 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[cac0b8fe-358c-401d-9314-30da8f490505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.820 232432 DEBUG nova.compute.manager [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-unplugged-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.820 232432 DEBUG oslo_concurrency.lockutils [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.821 232432 DEBUG oslo_concurrency.lockutils [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.821 232432 DEBUG oslo_concurrency.lockutils [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.821 232432 DEBUG nova.compute.manager [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] No waiting events found dispatching network-vif-unplugged-2810d337-b37c-4d01-b817-9dcca807fac3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:04:35 compute-2 nova_compute[232428]: 2025-11-29 09:04:35.822 232432 DEBUG nova.compute.manager [req-cfcf99c5-8a5f-446f-a9ff-b185bffe8b2d req-34a4c8eb-d696-4fa5-9ea3-8702bda4b533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-unplugged-2810d337-b37c-4d01-b817-9dcca807fac3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 09:04:36 compute-2 nova_compute[232428]: 2025-11-29 09:04:36.278 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:36.279 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:04:36 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:36.281 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:04:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:36 compute-2 ceph-mon[77138]: pgmap v3881: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 29 09:04:37 compute-2 nova_compute[232428]: 2025-11-29 09:04:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:37 compute-2 sudo[338742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:37 compute-2 sudo[338742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:37 compute-2 sudo[338742]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:37 compute-2 sudo[338767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:04:37 compute-2 sudo[338767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:37 compute-2 sudo[338767]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:37 compute-2 nova_compute[232428]: 2025-11-29 09:04:37.946 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.312 232432 DEBUG nova.compute.manager [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.313 232432 DEBUG oslo_concurrency.lockutils [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.314 232432 DEBUG oslo_concurrency.lockutils [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.314 232432 DEBUG oslo_concurrency.lockutils [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.315 232432 DEBUG nova.compute.manager [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] No waiting events found dispatching network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.315 232432 WARNING nova.compute.manager [req-fb8e368d-566c-4f4d-b912-032413861ef9 req-00f32eaa-e5ae-4793-bfd9-78b3bcedd68c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received unexpected event network-vif-plugged-2810d337-b37c-4d01-b817-9dcca807fac3 for instance with vm_state active and task_state deleting.
Nov 29 09:04:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:04:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:04:38 compute-2 nova_compute[232428]: 2025-11-29 09:04:38.556 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:38.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:39 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:04:39.283 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:04:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:40 compute-2 ceph-mon[77138]: pgmap v3882: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 29 09:04:41 compute-2 nova_compute[232428]: 2025-11-29 09:04:41.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:41 compute-2 nova_compute[232428]: 2025-11-29 09:04:41.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:04:41 compute-2 nova_compute[232428]: 2025-11-29 09:04:41.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:04:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:04:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:04:41 compute-2 nova_compute[232428]: 2025-11-29 09:04:41.534 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 09:04:41 compute-2 nova_compute[232428]: 2025-11-29 09:04:41.534 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:04:41 compute-2 ceph-mon[77138]: pgmap v3883: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 7 op/s
Nov 29 09:04:42 compute-2 sudo[338795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:42 compute-2 sudo[338795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:42 compute-2 sudo[338795]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:42 compute-2 sudo[338821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:04:42 compute-2 sudo[338821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:04:42 compute-2 sudo[338821]: pam_unix(sudo:session): session closed for user root
Nov 29 09:04:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:42.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:42 compute-2 nova_compute[232428]: 2025-11-29 09:04:42.948 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:43 compute-2 nova_compute[232428]: 2025-11-29 09:04:43.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:43 compute-2 nova_compute[232428]: 2025-11-29 09:04:43.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:43 compute-2 ceph-mon[77138]: pgmap v3884: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 09:04:43 compute-2 nova_compute[232428]: 2025-11-29 09:04:43.559 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:44.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.684 232432 INFO nova.virt.libvirt.driver [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Deleting instance files /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_del
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.685 232432 INFO nova.virt.libvirt.driver [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Deletion of /var/lib/nova/instances/86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d_del complete
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.724 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.725 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.726 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.726 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:04:44 compute-2 nova_compute[232428]: 2025-11-29 09:04:44.727 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:04:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:04:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:45.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:04:45 compute-2 ceph-mon[77138]: pgmap v3885: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 597 B/s wr, 16 op/s
Nov 29 09:04:45 compute-2 nova_compute[232428]: 2025-11-29 09:04:45.434 232432 INFO nova.compute.manager [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Took 12.14 seconds to destroy the instance on the hypervisor.
Nov 29 09:04:45 compute-2 nova_compute[232428]: 2025-11-29 09:04:45.436 232432 DEBUG oslo.service.loopingcall [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 09:04:45 compute-2 nova_compute[232428]: 2025-11-29 09:04:45.436 232432 DEBUG nova.compute.manager [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 09:04:45 compute-2 nova_compute[232428]: 2025-11-29 09:04:45.437 232432 DEBUG nova.network.neutron [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 09:04:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:04:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1314683156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:45 compute-2 nova_compute[232428]: 2025-11-29 09:04:45.764 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.000 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.002 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4171MB free_disk=20.958209991455078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.002 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.003 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:46 compute-2 ceph-mon[77138]: pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 09:04:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1314683156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.921 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.922 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:04:46 compute-2 nova_compute[232428]: 2025-11-29 09:04:46.922 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.134 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.233 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.233 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.268 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.300 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 09:04:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.373 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:04:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:47 compute-2 podman[338891]: 2025-11-29 09:04:47.663158282 +0000 UTC m=+0.061364480 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Nov 29 09:04:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3540003379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.806 232432 DEBUG nova.network.neutron [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:04:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:04:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/382090972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.873 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.879 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.935 232432 DEBUG nova.compute.manager [req-97bc59a9-8a85-42cc-8f61-9a52f63abbde req-d7dc2bbd-7e56-482b-816b-b9302463e60a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Received event network-vif-deleted-2810d337-b37c-4d01-b817-9dcca807fac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.936 232432 INFO nova.compute.manager [req-97bc59a9-8a85-42cc-8f61-9a52f63abbde req-d7dc2bbd-7e56-482b-816b-b9302463e60a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Neutron deleted interface 2810d337-b37c-4d01-b817-9dcca807fac3; detaching it from the instance and deleting it from the info cache
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.936 232432 DEBUG nova.network.neutron [req-97bc59a9-8a85-42cc-8f61-9a52f63abbde req-d7dc2bbd-7e56-482b-816b-b9302463e60a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.951 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:47 compute-2 nova_compute[232428]: 2025-11-29 09:04:47.955 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.000 232432 INFO nova.compute.manager [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Took 2.56 seconds to deallocate network for instance.
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.013 232432 DEBUG nova.compute.manager [req-97bc59a9-8a85-42cc-8f61-9a52f63abbde req-d7dc2bbd-7e56-482b-816b-b9302463e60a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Detach interface failed, port_id=2810d337-b37c-4d01-b817-9dcca807fac3, reason: Instance 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.044 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.044 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.083 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.084 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.129 232432 DEBUG oslo_concurrency.processutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.532 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407073.5310438, 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.533 232432 INFO nova.compute.manager [-] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] VM Stopped (Lifecycle Event)
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.578 232432 DEBUG nova.compute.manager [None req-1088a248-3c16-4b00-b6c1-95d247bdfde6 - - - - - -] [instance: 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:04:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:04:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/277359289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.601 232432 DEBUG oslo_concurrency.processutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.606 232432 DEBUG nova.compute.provider_tree [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:04:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:48.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.673 232432 DEBUG nova.scheduler.client.report [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.708 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.788 232432 INFO nova.scheduler.client.report [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Deleted allocations for instance 86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d
Nov 29 09:04:48 compute-2 ceph-mon[77138]: pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 29 09:04:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/382090972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1316468941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/277359289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:48 compute-2 nova_compute[232428]: 2025-11-29 09:04:48.908 232432 DEBUG oslo_concurrency.lockutils [None req-ec26a2bf-699c-42a2-bf43-b551a7a04dde d65747cb23f34d0b82fe6b4b04e5930d 79610f2eca54482d94e23676a7ebcbd7 - - default default] Lock "86acaaf5-d1ad-4c3a-94a0-cb1b4e87337d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:04:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:49.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:50 compute-2 nova_compute[232428]: 2025-11-29 09:04:50.045 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:50 compute-2 nova_compute[232428]: 2025-11-29 09:04:50.045 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:04:50 compute-2 nova_compute[232428]: 2025-11-29 09:04:50.045 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:04:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:50 compute-2 ceph-mon[77138]: pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 29 09:04:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:04:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:04:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:52 compute-2 nova_compute[232428]: 2025-11-29 09:04:52.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:53 compute-2 nova_compute[232428]: 2025-11-29 09:04:53.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:53 compute-2 ceph-mon[77138]: pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 938 B/s wr, 23 op/s
Nov 29 09:04:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:54.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:55 compute-2 ceph-mon[77138]: pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 16 op/s
Nov 29 09:04:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:56 compute-2 ceph-mon[77138]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 597 B/s wr, 13 op/s
Nov 29 09:04:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:56.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:04:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:04:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:04:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2843743218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:57 compute-2 nova_compute[232428]: 2025-11-29 09:04:57.955 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:58 compute-2 nova_compute[232428]: 2025-11-29 09:04:58.564 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:04:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:58.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:58 compute-2 ceph-mon[77138]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 09:04:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2872714765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:04:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:04:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:04:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:04:59 compute-2 podman[338940]: 2025-11-29 09:04:59.720711753 +0000 UTC m=+0.126049592 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 09:05:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:00 compute-2 ceph-mon[77138]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 09:05:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:01.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:01 compute-2 nova_compute[232428]: 2025-11-29 09:05:01.598 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:01 compute-2 nova_compute[232428]: 2025-11-29 09:05:01.752 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:02 compute-2 sudo[338970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:02 compute-2 sudo[338970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:02 compute-2 sudo[338970]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:02 compute-2 sudo[338995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:02 compute-2 sudo[338995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:02 compute-2 sudo[338995]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:02 compute-2 nova_compute[232428]: 2025-11-29 09:05:02.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:03 compute-2 ceph-mon[77138]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:03.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:03.372 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:03.373 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:03.373 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:03 compute-2 nova_compute[232428]: 2025-11-29 09:05:03.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:04.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:05 compute-2 ceph-mon[77138]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:05.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:06 compute-2 podman[339022]: 2025-11-29 09:05:06.642047542 +0000 UTC m=+0.052081882 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 09:05:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:07 compute-2 ceph-mon[77138]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:07 compute-2 nova_compute[232428]: 2025-11-29 09:05:07.958 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:08 compute-2 ceph-mon[77138]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:08 compute-2 nova_compute[232428]: 2025-11-29 09:05:08.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:09.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:10 compute-2 ceph-mon[77138]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:11.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:12 compute-2 nova_compute[232428]: 2025-11-29 09:05:12.959 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:13 compute-2 ceph-mon[77138]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:13.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:13 compute-2 nova_compute[232428]: 2025-11-29 09:05:13.571 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:16 compute-2 ceph-mon[77138]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:17 compute-2 ceph-mon[77138]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:17 compute-2 nova_compute[232428]: 2025-11-29 09:05:17.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:18 compute-2 ceph-mon[77138]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:18 compute-2 nova_compute[232428]: 2025-11-29 09:05:18.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:18 compute-2 sshd-session[339048]: Invalid user solv from 45.148.10.240 port 46946
Nov 29 09:05:18 compute-2 sshd-session[339048]: Connection closed by invalid user solv 45.148.10.240 port 46946 [preauth]
Nov 29 09:05:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:18 compute-2 podman[339050]: 2025-11-29 09:05:18.684210023 +0000 UTC m=+0.084825728 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:05:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:20 compute-2 ceph-mon[77138]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:22 compute-2 sudo[339072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:22 compute-2 sudo[339072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:22 compute-2 sudo[339072]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:22 compute-2 sudo[339097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:22 compute-2 sudo[339097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:22 compute-2 sudo[339097]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:22 compute-2 ceph-mon[77138]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:22 compute-2 nova_compute[232428]: 2025-11-29 09:05:22.963 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:23 compute-2 nova_compute[232428]: 2025-11-29 09:05:23.576 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:25 compute-2 ceph-mon[77138]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:25.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:27 compute-2 ceph-mon[77138]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:27.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:27 compute-2 nova_compute[232428]: 2025-11-29 09:05:27.966 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:05:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444918983' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:05:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:05:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444918983' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:05:28 compute-2 nova_compute[232428]: 2025-11-29 09:05:28.578 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:28 compute-2 ceph-mon[77138]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/444918983' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:05:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/444918983' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:05:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:29 compute-2 nova_compute[232428]: 2025-11-29 09:05:29.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:30 compute-2 podman[339126]: 2025-11-29 09:05:30.69013047 +0000 UTC m=+0.089919778 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 09:05:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:31 compute-2 ceph-mon[77138]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:32 compute-2 nova_compute[232428]: 2025-11-29 09:05:32.968 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:33 compute-2 nova_compute[232428]: 2025-11-29 09:05:33.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:33 compute-2 ceph-mon[77138]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:34.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:34 compute-2 ceph-mon[77138]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:36 compute-2 nova_compute[232428]: 2025-11-29 09:05:36.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:37 compute-2 ceph-mon[77138]: pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:37 compute-2 nova_compute[232428]: 2025-11-29 09:05:37.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:37 compute-2 podman[339157]: 2025-11-29 09:05:37.652547337 +0000 UTC m=+0.063501326 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 09:05:37 compute-2 sudo[339163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:37 compute-2 sudo[339163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:37 compute-2 sudo[339163]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:37 compute-2 sudo[339199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:05:37 compute-2 sudo[339199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:37 compute-2 sudo[339199]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:37 compute-2 sudo[339224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:37 compute-2 sudo[339224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:37 compute-2 sudo[339224]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:37 compute-2 sudo[339249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 09:05:37 compute-2 sudo[339249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:37 compute-2 nova_compute[232428]: 2025-11-29 09:05:37.969 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:38 compute-2 podman[339347]: 2025-11-29 09:05:38.421153861 +0000 UTC m=+0.082055512 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:05:38 compute-2 podman[339347]: 2025-11-29 09:05:38.515120594 +0000 UTC m=+0.176022235 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 09:05:38 compute-2 nova_compute[232428]: 2025-11-29 09:05:38.583 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:38 compute-2 ceph-mon[77138]: pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:38 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:39 compute-2 podman[339501]: 2025-11-29 09:05:39.252022832 +0000 UTC m=+0.055161877 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 09:05:39 compute-2 podman[339501]: 2025-11-29 09:05:39.264662945 +0000 UTC m=+0.067801990 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 09:05:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:39.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:39 compute-2 podman[339567]: 2025-11-29 09:05:39.473553652 +0000 UTC m=+0.051831973 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 29 09:05:39 compute-2 podman[339567]: 2025-11-29 09:05:39.486944749 +0000 UTC m=+0.065223080 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 09:05:39 compute-2 sudo[339249]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:39 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:39 compute-2 sudo[339599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:39 compute-2 sudo[339599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:39 compute-2 sudo[339599]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:39 compute-2 sudo[339624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:05:39 compute-2 sudo[339624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:39 compute-2 sudo[339624]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:39 compute-2 sudo[339649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:39 compute-2 sudo[339649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:39 compute-2 sudo[339649]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:39 compute-2 sudo[339674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:05:39 compute-2 sudo[339674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:40 compute-2 sudo[339674]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:41 compute-2 ceph-mon[77138]: pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:05:41 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:05:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:41.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:42 compute-2 nova_compute[232428]: 2025-11-29 09:05:42.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:42 compute-2 nova_compute[232428]: 2025-11-29 09:05:42.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:05:42 compute-2 nova_compute[232428]: 2025-11-29 09:05:42.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:05:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/554130387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:05:42 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:05:42 compute-2 nova_compute[232428]: 2025-11-29 09:05:42.230 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:05:42 compute-2 ceph-mon[77138]: pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:05:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:42 compute-2 sudo[339732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:42 compute-2 sudo[339732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:42 compute-2 sudo[339732]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:42 compute-2 sudo[339757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:42 compute-2 sudo[339757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:42 compute-2 sudo[339757]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:42 compute-2 nova_compute[232428]: 2025-11-29 09:05:42.972 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:43.471 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:05:43 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:43.473 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.472 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.585 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.673 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.673 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.697 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.767 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.768 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.778 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.779 232432 INFO nova.compute.claims [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Claim successful on node compute-2.ctlplane.example.com
Nov 29 09:05:43 compute-2 nova_compute[232428]: 2025-11-29 09:05:43.881 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:05:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2406681503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.316 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.323 232432 DEBUG nova.compute.provider_tree [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.344 232432 DEBUG nova.scheduler.client.report [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.378 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.379 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.381 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.381 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.382 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.382 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.462 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.463 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.482 232432 INFO nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.501 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 09:05:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:44.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.738 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.741 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.742 232432 INFO nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Creating image(s)
Nov 29 09:05:44 compute-2 ceph-mon[77138]: pgmap v3915: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 94 KiB/s wr, 0 op/s
Nov 29 09:05:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2406681503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.785 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:05:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4194246944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.814 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.837 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.842 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.874 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.918 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.919 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.920 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.920 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.947 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:44 compute-2 nova_compute[232428]: 2025-11-29 09:05:44.950 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b4a2e080-eb4c-4586-9f89-474d759a40e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.100 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.102 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4119MB free_disk=20.987201690673828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.102 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.102 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.178 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Successfully created port: 65868f11-8006-4314-8ead-051bba9aaeae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.205 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance b4a2e080-eb4c-4586-9f89-474d759a40e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.206 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.206 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.241 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.282 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b4a2e080-eb4c-4586-9f89-474d759a40e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.347 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] resizing rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 09:05:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:05:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:45.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.455 232432 DEBUG nova.objects.instance [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a2e080-eb4c-4586-9f89-474d759a40e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.469 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.470 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Ensure instance console log exists: /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.471 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.471 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.471 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:05:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3834912158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.758 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.763 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:05:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4194246944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3710333703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3834912158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.780 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.810 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:05:45 compute-2 nova_compute[232428]: 2025-11-29 09:05:45.810 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.075 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Successfully updated port: 65868f11-8006-4314-8ead-051bba9aaeae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.100 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.101 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquired lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.101 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.239 232432 DEBUG nova.compute.manager [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-changed-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.239 232432 DEBUG nova.compute.manager [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Refreshing instance network info cache due to event network-changed-65868f11-8006-4314-8ead-051bba9aaeae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.240 232432 DEBUG oslo_concurrency.lockutils [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:05:46 compute-2 nova_compute[232428]: 2025-11-29 09:05:46.281 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 09:05:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:46 compute-2 ceph-mon[77138]: pgmap v3916: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 3.2 MiB/s wr, 38 op/s
Nov 29 09:05:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/816665252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/429392080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.297 232432 DEBUG nova.network.neutron [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Updating instance_info_cache with network_info: [{"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:05:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:47.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.403 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Releasing lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.403 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Instance network_info: |[{"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.403 232432 DEBUG oslo_concurrency.lockutils [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.403 232432 DEBUG nova.network.neutron [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Refreshing network info cache for port 65868f11-8006-4314-8ead-051bba9aaeae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.406 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Start _get_guest_xml network_info=[{"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encryption_options': None, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.410 232432 WARNING nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.415 232432 DEBUG nova.virt.libvirt.host [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.415 232432 DEBUG nova.virt.libvirt.host [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.418 232432 DEBUG nova.virt.libvirt.host [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.419 232432 DEBUG nova.virt.libvirt.host [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.420 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.420 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.420 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.420 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.421 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.422 232432 DEBUG nova.virt.hardware [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.424 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:47.475 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:05:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2250784518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.883 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.912 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.917 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:47 compute-2 sudo[340053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:05:47 compute-2 sudo[340053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:47 compute-2 sudo[340053]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:47 compute-2 nova_compute[232428]: 2025-11-29 09:05:47.974 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:48 compute-2 sudo[340083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:05:48 compute-2 sudo[340083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:05:48 compute-2 sudo[340083]: pam_unix(sudo:session): session closed for user root
Nov 29 09:05:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:05:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3552184703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.389 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.392 232432 DEBUG nova.virt.libvirt.vif [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1303753898',display_name='tempest-TestServerMultinode-server-1303753898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-1303753898',id=216,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-yzzgiz99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:44Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=b4a2e080-eb4c-4586-9f89-474d759a40e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.393 232432 DEBUG nova.network.os_vif_util [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.394 232432 DEBUG nova.network.os_vif_util [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.396 232432 DEBUG nova.objects.instance [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a2e080-eb4c-4586-9f89-474d759a40e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.419 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <uuid>b4a2e080-eb4c-4586-9f89-474d759a40e1</uuid>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <name>instance-000000d8</name>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <metadata>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:name>tempest-TestServerMultinode-server-1303753898</nova:name>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 09:05:47</nova:creationTime>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:user uuid="1ef789b2d4084ff99c58ebaccf153280">tempest-TestServerMultinode-1741703404-project-admin</nova:user>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:project uuid="8c7919c45c334cfb95f0fdc69027c245">tempest-TestServerMultinode-1741703404</nova:project>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <nova:port uuid="65868f11-8006-4314-8ead-051bba9aaeae">
Nov 29 09:05:48 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </metadata>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <system>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="serial">b4a2e080-eb4c-4586-9f89-474d759a40e1</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="uuid">b4a2e080-eb4c-4586-9f89-474d759a40e1</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </system>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <os>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </os>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <features>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <apic/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </features>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </clock>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </cpu>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   <devices>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b4a2e080-eb4c-4586-9f89-474d759a40e1_disk">
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </source>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config">
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </source>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:05:48 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:1a:d1:99"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <target dev="tap65868f11-80"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </interface>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/console.log" append="off"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </serial>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <video>
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </video>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </rng>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 09:05:48 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 09:05:48 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 09:05:48 compute-2 nova_compute[232428]:   </devices>
Nov 29 09:05:48 compute-2 nova_compute[232428]: </domain>
Nov 29 09:05:48 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.421 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Preparing to wait for external event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.421 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.422 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.422 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.423 232432 DEBUG nova.virt.libvirt.vif [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1303753898',display_name='tempest-TestServerMultinode-server-1303753898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-1303753898',id=216,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-yzzgiz99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:44Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=b4a2e080-eb4c-4586-9f89-474d759a40e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.423 232432 DEBUG nova.network.os_vif_util [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.424 232432 DEBUG nova.network.os_vif_util [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.424 232432 DEBUG os_vif [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.425 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.426 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.426 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.430 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.430 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65868f11-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.431 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65868f11-80, col_values=(('external_ids', {'iface-id': '65868f11-8006-4314-8ead-051bba9aaeae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:d1:99', 'vm-uuid': 'b4a2e080-eb4c-4586-9f89-474d759a40e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.433 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:48 compute-2 NetworkManager[48993]: <info>  [1764407148.4336] manager: (tap65868f11-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.436 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.441 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.442 232432 INFO os_vif [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80')
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.510 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.511 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.511 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No VIF found with MAC fa:16:3e:1a:d1:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.512 232432 INFO nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Using config drive
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.540 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.810 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.870 232432 INFO nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Creating config drive at /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config
Nov 29 09:05:48 compute-2 nova_compute[232428]: 2025-11-29 09:05:48.875 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppz4y012w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.031 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppz4y012w" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.058 232432 DEBUG nova.storage.rbd_utils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.063 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.311 232432 DEBUG nova.network.neutron [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Updated VIF entry in instance network info cache for port 65868f11-8006-4314-8ead-051bba9aaeae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.312 232432 DEBUG nova.network.neutron [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Updating instance_info_cache with network_info: [{"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:05:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:05:49 compute-2 ceph-mon[77138]: pgmap v3917: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 3.2 MiB/s wr, 38 op/s
Nov 29 09:05:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2250784518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3552184703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.330 232432 DEBUG oslo_concurrency.lockutils [req-d6c341d9-be32-4259-9867-4432f8b2cda1 req-38200a0a-aef9-4062-8793-99d28edeb6ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b4a2e080-eb4c-4586-9f89-474d759a40e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01006|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.374 232432 DEBUG oslo_concurrency.processutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config b4a2e080-eb4c-4586-9f89-474d759a40e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.375 232432 INFO nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Deleting local config drive /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1/disk.config because it was imported into RBD.
Nov 29 09:05:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:49 compute-2 kernel: tap65868f11-80: entered promiscuous mode
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.4425] manager: (tap65868f11-80): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.445 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01007|binding|INFO|Claiming lport 65868f11-8006-4314-8ead-051bba9aaeae for this chassis.
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01008|binding|INFO|65868f11-8006-4314-8ead-051bba9aaeae: Claiming fa:16:3e:1a:d1:99 10.100.0.11
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.453 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.461 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:d1:99 10.100.0.11'], port_security=['fa:16:3e:1a:d1:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4a2e080-eb4c-4586-9f89-474d759a40e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=65868f11-8006-4314-8ead-051bba9aaeae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.462 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 65868f11-8006-4314-8ead-051bba9aaeae in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 bound to our chassis
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.463 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.477 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[19af209f-a281-44a9-b66c-86d73a89a9ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.478 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f61907c-41 in ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.480 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f61907c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.480 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dccaf7-36db-4fe7-a1b2-138a28cb2869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.481 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[37e2a315-bcf4-4335-b87a-d8f43c4dc2de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 systemd-machined[194747]: New machine qemu-103-instance-000000d8.
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.495 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[230e3bcf-98d1-415c-98a1-d9045fcf9fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01009|binding|INFO|Setting lport 65868f11-8006-4314-8ead-051bba9aaeae ovn-installed in OVS
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01010|binding|INFO|Setting lport 65868f11-8006-4314-8ead-051bba9aaeae up in Southbound
Nov 29 09:05:49 compute-2 systemd[1]: Started Virtual Machine qemu-103-instance-000000d8.
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.520 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ab5bb1-1f85-48f3-b9b4-b8427a003441]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 systemd-udevd[340220]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.552 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f979605a-d623-4db6-9bbf-590565b8e5cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.557 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[51477342-5994-48da-a668-fe294bcee92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.5601] manager: (tap7f61907c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.5607] device (tap65868f11-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.5614] device (tap65868f11-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 09:05:49 compute-2 systemd-udevd[340230]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:05:49 compute-2 podman[340199]: 2025-11-29 09:05:49.571370263 +0000 UTC m=+0.087160421 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.594 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[435fd9cb-d108-4b2d-b383-613ddcce7e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.598 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b355116c-630b-413c-8269-010e5cd39735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.6218] device (tap7f61907c-40): carrier: link connected
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.627 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4e42c6a4-d400-432c-b147-a634fba0b485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.645 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[05311abb-9f99-4af3-85b3-b3e5af19a8b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011589, 'reachable_time': 21900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340255, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.663 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[57645571-9cb0-4da6-a9e5-b8f4f72b6282]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:f80e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1011588, 'tstamp': 1011588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340256, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.684 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[bf58a870-65b9-4862-a2f7-ec30f931bcd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011589, 'reachable_time': 21900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340257, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.720 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[80a41534-f516-4cf1-badd-09babd06de45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.788 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[fc052b95-5d2d-4fc2-bbe2-14f639d6b7e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.789 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.789 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.790 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f61907c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 NetworkManager[48993]: <info>  [1764407149.7921] manager: (tap7f61907c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Nov 29 09:05:49 compute-2 kernel: tap7f61907c-40: entered promiscuous mode
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.793 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.794 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f61907c-40, col_values=(('external_ids', {'iface-id': 'f4d00aa1-326b-4003-b66e-9a8340a19429'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.794 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_controller[134375]: 2025-11-29T09:05:49Z|01011|binding|INFO|Releasing lport f4d00aa1-326b-4003-b66e-9a8340a19429 from this chassis (sb_readonly=0)
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.810 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.811 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.812 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9aec6b-d923-452a-9a7a-51fb65cf8777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.813 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: global
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 09:05:49 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:05:49.814 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'env', 'PROCESS_TAG=haproxy-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f61907c-426d-40db-9f88-8bc5f33db1b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.960 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407149.9594166, b4a2e080-eb4c-4586-9f89-474d759a40e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.960 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] VM Started (Lifecycle Event)
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.990 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.993 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407149.9596436, b4a2e080-eb4c-4586-9f89-474d759a40e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:05:49 compute-2 nova_compute[232428]: 2025-11-29 09:05:49.994 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] VM Paused (Lifecycle Event)
Nov 29 09:05:50 compute-2 nova_compute[232428]: 2025-11-29 09:05:50.014 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:05:50 compute-2 nova_compute[232428]: 2025-11-29 09:05:50.018 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:05:50 compute-2 nova_compute[232428]: 2025-11-29 09:05:50.049 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:05:50 compute-2 podman[340332]: 2025-11-29 09:05:50.196861846 +0000 UTC m=+0.063592178 container create 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 09:05:50 compute-2 systemd[1]: Started libpod-conmon-6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c.scope.
Nov 29 09:05:50 compute-2 podman[340332]: 2025-11-29 09:05:50.163004693 +0000 UTC m=+0.029735025 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 09:05:50 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:05:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ff7d032f237c6293d462ab3f7febb307e6943d8e504f25745db5cf14a98ea8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 09:05:50 compute-2 podman[340332]: 2025-11-29 09:05:50.296570497 +0000 UTC m=+0.163300839 container init 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 09:05:50 compute-2 podman[340332]: 2025-11-29 09:05:50.304145963 +0000 UTC m=+0.170876265 container start 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:05:50 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [NOTICE]   (340352) : New worker (340354) forked
Nov 29 09:05:50 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [NOTICE]   (340352) : Loading success.
Nov 29 09:05:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1641842313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:50 compute-2 ceph-mon[77138]: pgmap v3918: 305 pgs: 305 active+clean; 234 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 4.2 MiB/s wr, 90 op/s
Nov 29 09:05:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.575 232432 DEBUG nova.compute.manager [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.575 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.575 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.576 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.576 232432 DEBUG nova.compute.manager [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Processing event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.576 232432 DEBUG nova.compute.manager [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.576 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.577 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.577 232432 DEBUG oslo_concurrency.lockutils [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.577 232432 DEBUG nova.compute.manager [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] No waiting events found dispatching network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.577 232432 WARNING nova.compute.manager [req-1de8fcb6-c63f-41df-9878-d0ccad8ef23a req-83aeb0d8-fe0a-465d-b4ff-9f0c488ac25a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received unexpected event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae for instance with vm_state building and task_state spawning.
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.578 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.582 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407151.5826116, b4a2e080-eb4c-4586-9f89-474d759a40e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.583 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] VM Resumed (Lifecycle Event)
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.584 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.588 232432 INFO nova.virt.libvirt.driver [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Instance spawned successfully.
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.589 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.603 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.612 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.618 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.618 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.619 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.619 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.620 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.621 232432 DEBUG nova.virt.libvirt.driver [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.655 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:05:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/682494736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1803930433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.699 232432 INFO nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Took 6.96 seconds to spawn the instance on the hypervisor.
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.699 232432 DEBUG nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.792 232432 INFO nova.compute.manager [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Took 8.05 seconds to build instance.
Nov 29 09:05:51 compute-2 nova_compute[232428]: 2025-11-29 09:05:51.811 232432 DEBUG oslo_concurrency.lockutils [None req-9a45c5e2-37a7-4c83-9169-4bf8dd2b7f65 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:05:52 compute-2 nova_compute[232428]: 2025-11-29 09:05:52.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:05:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:52 compute-2 ceph-mon[77138]: pgmap v3919: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 137 op/s
Nov 29 09:05:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1603083410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:05:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:52 compute-2 nova_compute[232428]: 2025-11-29 09:05:52.975 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:53 compute-2 nova_compute[232428]: 2025-11-29 09:05:53.433 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:55 compute-2 ceph-mon[77138]: pgmap v3920: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 155 op/s
Nov 29 09:05:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:56 compute-2 ceph-mon[77138]: pgmap v3921: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 247 op/s
Nov 29 09:05:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2396982345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:57.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:05:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:05:57 compute-2 nova_compute[232428]: 2025-11-29 09:05:57.978 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:58 compute-2 ceph-mon[77138]: pgmap v3922: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 209 op/s
Nov 29 09:05:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3593974134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:05:58 compute-2 nova_compute[232428]: 2025-11-29 09:05:58.435 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:05:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:05:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:05:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:05:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:05:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:59.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:00 compute-2 ceph-mon[77138]: pgmap v3923: 305 pgs: 305 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 250 op/s
Nov 29 09:06:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:01.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:01 compute-2 podman[340370]: 2025-11-29 09:06:01.698338034 +0000 UTC m=+0.097623417 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:06:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2862961704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.244 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.245 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.245 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.245 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.245 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.246 232432 INFO nova.compute.manager [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Terminating instance
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.247 232432 DEBUG nova.compute.manager [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 09:06:02 compute-2 kernel: tap65868f11-80 (unregistering): left promiscuous mode
Nov 29 09:06:02 compute-2 NetworkManager[48993]: <info>  [1764407162.2837] device (tap65868f11-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 09:06:02 compute-2 ovn_controller[134375]: 2025-11-29T09:06:02Z|01012|binding|INFO|Releasing lport 65868f11-8006-4314-8ead-051bba9aaeae from this chassis (sb_readonly=0)
Nov 29 09:06:02 compute-2 ovn_controller[134375]: 2025-11-29T09:06:02Z|01013|binding|INFO|Setting lport 65868f11-8006-4314-8ead-051bba9aaeae down in Southbound
Nov 29 09:06:02 compute-2 ovn_controller[134375]: 2025-11-29T09:06:02Z|01014|binding|INFO|Removing iface tap65868f11-80 ovn-installed in OVS
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.291 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.294 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.297 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:d1:99 10.100.0.11'], port_security=['fa:16:3e:1a:d1:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4a2e080-eb4c-4586-9f89-474d759a40e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=65868f11-8006-4314-8ead-051bba9aaeae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.299 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 65868f11-8006-4314-8ead-051bba9aaeae in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 unbound from our chassis
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.300 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f61907c-426d-40db-9f88-8bc5f33db1b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.302 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb83175-d9ec-423a-b77b-7ac4289d798f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.302 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace which is not needed anymore
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Nov 29 09:06:02 compute-2 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d8.scope: Consumed 11.403s CPU time.
Nov 29 09:06:02 compute-2 systemd-machined[194747]: Machine qemu-103-instance-000000d8 terminated.
Nov 29 09:06:02 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [NOTICE]   (340352) : haproxy version is 2.8.14-c23fe91
Nov 29 09:06:02 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [NOTICE]   (340352) : path to executable is /usr/sbin/haproxy
Nov 29 09:06:02 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [WARNING]  (340352) : Exiting Master process...
Nov 29 09:06:02 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [ALERT]    (340352) : Current worker (340354) exited with code 143 (Terminated)
Nov 29 09:06:02 compute-2 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[340348]: [WARNING]  (340352) : All workers exited. Exiting... (0)
Nov 29 09:06:02 compute-2 systemd[1]: libpod-6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c.scope: Deactivated successfully.
Nov 29 09:06:02 compute-2 podman[340422]: 2025-11-29 09:06:02.438477362 +0000 UTC m=+0.044513045 container died 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 09:06:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c-userdata-shm.mount: Deactivated successfully.
Nov 29 09:06:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-f9ff7d032f237c6293d462ab3f7febb307e6943d8e504f25745db5cf14a98ea8-merged.mount: Deactivated successfully.
Nov 29 09:06:02 compute-2 podman[340422]: 2025-11-29 09:06:02.482697957 +0000 UTC m=+0.088733640 container cleanup 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.484 232432 INFO nova.virt.libvirt.driver [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Instance destroyed successfully.
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.485 232432 DEBUG nova.objects.instance [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'resources' on Instance uuid b4a2e080-eb4c-4586-9f89-474d759a40e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:06:02 compute-2 systemd[1]: libpod-conmon-6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c.scope: Deactivated successfully.
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.499 232432 DEBUG nova.virt.libvirt.vif [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1303753898',display_name='tempest-TestServerMultinode-server-1303753898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-1303753898',id=216,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:05:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-yzzgiz99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:05:51Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=b4a2e080-eb4c-4586-9f89-474d759a40e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.499 232432 DEBUG nova.network.os_vif_util [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "65868f11-8006-4314-8ead-051bba9aaeae", "address": "fa:16:3e:1a:d1:99", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65868f11-80", "ovs_interfaceid": "65868f11-8006-4314-8ead-051bba9aaeae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.500 232432 DEBUG nova.network.os_vif_util [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.500 232432 DEBUG os_vif [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.502 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65868f11-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.505 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.507 232432 INFO os_vif [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:d1:99,bridge_name='br-int',has_traffic_filtering=True,id=65868f11-8006-4314-8ead-051bba9aaeae,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65868f11-80')
Nov 29 09:06:02 compute-2 podman[340461]: 2025-11-29 09:06:02.550367172 +0000 UTC m=+0.043951968 container remove 6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.556 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[293ce289-15ef-4cb9-9365-d3986639ff95]: (4, ('Sat Nov 29 09:06:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c)\n6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c\nSat Nov 29 09:06:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c)\n6c380906ff70e4772a70760ca831d9bdfd95bbe2aebcb4d582f47db19026513c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.557 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[f54cfab5-ca56-4794-8c18-07515b626c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.558 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 kernel: tap7f61907c-40: left promiscuous mode
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.575 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.577 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e2567bf4-0316-4b1d-8dee-1449f444c3ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.592 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ca478f-0fa7-4935-9a79-5ff745758506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.593 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8de611ce-f4c2-4c1c-9a8b-e31ae035b96a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.609 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e82c51ea-550b-411e-8ab2-e835e330b850]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011581, 'reachable_time': 22055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340493, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 systemd[1]: run-netns-ovnmeta\x2d7f61907c\x2d426d\x2d40db\x2d9f88\x2d8bc5f33db1b9.mount: Deactivated successfully.
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.612 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 09:06:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:02.613 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9fd010-fea2-4422-b6f7-0b769950bda3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.692 232432 DEBUG nova.compute.manager [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-unplugged-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.692 232432 DEBUG oslo_concurrency.lockutils [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.693 232432 DEBUG oslo_concurrency.lockutils [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.693 232432 DEBUG oslo_concurrency.lockutils [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.693 232432 DEBUG nova.compute.manager [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] No waiting events found dispatching network-vif-unplugged-65868f11-8006-4314-8ead-051bba9aaeae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.694 232432 DEBUG nova.compute.manager [req-dbfef3d5-467e-4a73-a8d8-2f368a9e457a req-db944216-3cf7-4cf3-ac09-4552c86f544d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-unplugged-65868f11-8006-4314-8ead-051bba9aaeae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 09:06:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:02 compute-2 sudo[340495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:02 compute-2 sudo[340495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:02 compute-2 sudo[340495]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.918 232432 INFO nova.virt.libvirt.driver [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Deleting instance files /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1_del
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.919 232432 INFO nova.virt.libvirt.driver [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Deletion of /var/lib/nova/instances/b4a2e080-eb4c-4586-9f89-474d759a40e1_del complete
Nov 29 09:06:02 compute-2 ceph-mon[77138]: pgmap v3924: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.2 MiB/s wr, 238 op/s
Nov 29 09:06:02 compute-2 sudo[340520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:02 compute-2 sudo[340520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:02 compute-2 sudo[340520]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.988 232432 INFO nova.compute.manager [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.989 232432 DEBUG oslo.service.loopingcall [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.989 232432 DEBUG nova.compute.manager [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 09:06:02 compute-2 nova_compute[232428]: 2025-11-29 09:06:02.990 232432 DEBUG nova.network.neutron [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 09:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:03.373 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:03.374 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:03.374 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:03.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:03 compute-2 nova_compute[232428]: 2025-11-29 09:06:03.692 232432 DEBUG nova.network.neutron [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:06:03 compute-2 nova_compute[232428]: 2025-11-29 09:06:03.718 232432 INFO nova.compute.manager [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Took 0.73 seconds to deallocate network for instance.
Nov 29 09:06:03 compute-2 nova_compute[232428]: 2025-11-29 09:06:03.778 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:03 compute-2 nova_compute[232428]: 2025-11-29 09:06:03.779 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:03 compute-2 nova_compute[232428]: 2025-11-29 09:06:03.824 232432 DEBUG oslo_concurrency.processutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:06:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:06:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1390398439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.307 232432 DEBUG oslo_concurrency.processutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.319 232432 DEBUG nova.compute.provider_tree [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.370 232432 DEBUG nova.scheduler.client.report [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.402 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.429 232432 INFO nova.scheduler.client.report [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Deleted allocations for instance b4a2e080-eb4c-4586-9f89-474d759a40e1
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.497 232432 DEBUG oslo_concurrency.lockutils [None req-dd69cb79-7763-4112-835c-0d16e03390fb 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:04.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.786 232432 DEBUG nova.compute.manager [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.787 232432 DEBUG oslo_concurrency.lockutils [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.787 232432 DEBUG oslo_concurrency.lockutils [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.788 232432 DEBUG oslo_concurrency.lockutils [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4a2e080-eb4c-4586-9f89-474d759a40e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.788 232432 DEBUG nova.compute.manager [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] No waiting events found dispatching network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.788 232432 WARNING nova.compute.manager [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received unexpected event network-vif-plugged-65868f11-8006-4314-8ead-051bba9aaeae for instance with vm_state deleted and task_state None.
Nov 29 09:06:04 compute-2 nova_compute[232428]: 2025-11-29 09:06:04.788 232432 DEBUG nova.compute.manager [req-9017a6b2-4d07-4e82-833a-087e950ec8a1 req-769952d9-c62a-4753-ae4c-8ec4ed85e8d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Received event network-vif-deleted-65868f11-8006-4314-8ead-051bba9aaeae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:06:04 compute-2 ceph-mon[77138]: pgmap v3925: 305 pgs: 305 active+clean; 206 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 374 KiB/s wr, 226 op/s
Nov 29 09:06:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1390398439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:06 compute-2 ceph-mon[77138]: pgmap v3926: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Nov 29 09:06:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:07.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:07 compute-2 nova_compute[232428]: 2025-11-29 09:06:07.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:07 compute-2 nova_compute[232428]: 2025-11-29 09:06:07.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:08 compute-2 podman[340570]: 2025-11-29 09:06:08.675037885 +0000 UTC m=+0.071534957 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 09:06:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:08.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:09 compute-2 ceph-mon[77138]: pgmap v3927: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 29 09:06:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1312814041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:09.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:11 compute-2 ceph-mon[77138]: pgmap v3928: 305 pgs: 305 active+clean; 155 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Nov 29 09:06:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:11.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:11 compute-2 nova_compute[232428]: 2025-11-29 09:06:11.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:12 compute-2 nova_compute[232428]: 2025-11-29 09:06:12.506 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:12.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:12 compute-2 nova_compute[232428]: 2025-11-29 09:06:12.985 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:13 compute-2 ceph-mon[77138]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Nov 29 09:06:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:13.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:14 compute-2 ceph-mon[77138]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 09:06:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:14.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:15.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:16.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:16 compute-2 ceph-mon[77138]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.404025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177404100, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1696, "num_deletes": 256, "total_data_size": 4062990, "memory_usage": 4116568, "flush_reason": "Manual Compaction"}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Nov 29 09:06:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:17.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177422291, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2659495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85849, "largest_seqno": 87540, "table_properties": {"data_size": 2652279, "index_size": 4222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14829, "raw_average_key_size": 19, "raw_value_size": 2637990, "raw_average_value_size": 3545, "num_data_blocks": 184, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407028, "oldest_key_time": 1764407028, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 18344 microseconds, and 8143 cpu microseconds.
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.422369) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2659495 bytes OK
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.422389) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.425791) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.425812) EVENT_LOG_v1 {"time_micros": 1764407177425807, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.425830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4055290, prev total WAL file size 4055290, number of live WAL files 2.
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.426985) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323733' seq:72057594037927935, type:22 .. '6C6F676D0033353235' seq:0, type:0; will stop at (end)
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2597KB)], [174(11MB)]
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177427043, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 15125729, "oldest_snapshot_seqno": -1}
Nov 29 09:06:17 compute-2 nova_compute[232428]: 2025-11-29 09:06:17.483 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407162.482198, b4a2e080-eb4c-4586-9f89-474d759a40e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:06:17 compute-2 nova_compute[232428]: 2025-11-29 09:06:17.483 232432 INFO nova.compute.manager [-] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] VM Stopped (Lifecycle Event)
Nov 29 09:06:17 compute-2 nova_compute[232428]: 2025-11-29 09:06:17.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11175 keys, 14981314 bytes, temperature: kUnknown
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177522702, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14981314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14908766, "index_size": 43507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 295645, "raw_average_key_size": 26, "raw_value_size": 14712916, "raw_average_value_size": 1316, "num_data_blocks": 1657, "num_entries": 11175, "num_filter_entries": 11175, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.522962) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14981314 bytes
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.525149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.9 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 11704, records dropped: 529 output_compression: NoCompression
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.525174) EVENT_LOG_v1 {"time_micros": 1764407177525163, "job": 112, "event": "compaction_finished", "compaction_time_micros": 95730, "compaction_time_cpu_micros": 35885, "output_level": 6, "num_output_files": 1, "total_output_size": 14981314, "num_input_records": 11704, "num_output_records": 11175, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177526093, "job": 112, "event": "table_file_deletion", "file_number": 176}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177528807, "job": 112, "event": "table_file_deletion", "file_number": 174}
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.426892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.528887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.528893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.528895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.528898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:06:17.528900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:06:17 compute-2 nova_compute[232428]: 2025-11-29 09:06:17.543 232432 DEBUG nova.compute.manager [None req-09ce7940-59b0-4f47-bf0f-36024d0cb11d - - - - - -] [instance: b4a2e080-eb4c-4586-9f89-474d759a40e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:06:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:17 compute-2 nova_compute[232428]: 2025-11-29 09:06:17.989 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:18 compute-2 ceph-mon[77138]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:06:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:19.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:20 compute-2 podman[340594]: 2025-11-29 09:06:20.649217822 +0000 UTC m=+0.059872642 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 09:06:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:20.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:21 compute-2 ceph-mon[77138]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:06:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:21.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:22 compute-2 ceph-mon[77138]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 09:06:22 compute-2 nova_compute[232428]: 2025-11-29 09:06:22.509 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:22.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:22 compute-2 nova_compute[232428]: 2025-11-29 09:06:22.993 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:23 compute-2 sudo[340616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:23 compute-2 sudo[340616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:23 compute-2 sudo[340616]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:23 compute-2 sudo[340641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:23 compute-2 sudo[340641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:23 compute-2 sudo[340641]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:23.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:24 compute-2 ceph-mon[77138]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:24.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:25.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:26 compute-2 ceph-mon[77138]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:26.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:27 compute-2 nova_compute[232428]: 2025-11-29 09:06:27.510 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:27 compute-2 nova_compute[232428]: 2025-11-29 09:06:27.995 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:28.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:28 compute-2 ceph-mon[77138]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/475602414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:06:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/475602414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:06:29 compute-2 nova_compute[232428]: 2025-11-29 09:06:29.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:30.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:30 compute-2 ceph-mon[77138]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:31.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:32 compute-2 nova_compute[232428]: 2025-11-29 09:06:32.513 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:32 compute-2 podman[340671]: 2025-11-29 09:06:32.755525741 +0000 UTC m=+0.150421919 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:06:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:32.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:32 compute-2 nova_compute[232428]: 2025-11-29 09:06:32.998 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:33 compute-2 ceph-mon[77138]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:33.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:34 compute-2 ceph-mon[77138]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:34.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:35.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:36 compute-2 ceph-mon[77138]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:36.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:37.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:37 compute-2 nova_compute[232428]: 2025-11-29 09:06:37.515 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:38 compute-2 nova_compute[232428]: 2025-11-29 09:06:37.999 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:38 compute-2 nova_compute[232428]: 2025-11-29 09:06:38.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:38.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:38 compute-2 ceph-mon[77138]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:39 compute-2 nova_compute[232428]: 2025-11-29 09:06:39.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:39 compute-2 podman[340701]: 2025-11-29 09:06:39.646224808 +0000 UTC m=+0.048783289 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 09:06:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:40.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:40 compute-2 ceph-mon[77138]: pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:42 compute-2 nova_compute[232428]: 2025-11-29 09:06:42.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:42 compute-2 nova_compute[232428]: 2025-11-29 09:06:42.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:06:42 compute-2 nova_compute[232428]: 2025-11-29 09:06:42.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:06:42 compute-2 nova_compute[232428]: 2025-11-29 09:06:42.220 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:06:42 compute-2 nova_compute[232428]: 2025-11-29 09:06:42.517 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:42.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:42 compute-2 ceph-mon[77138]: pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:43 compute-2 nova_compute[232428]: 2025-11-29 09:06:43.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:43 compute-2 sudo[340722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:43 compute-2 sudo[340722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:43 compute-2 sudo[340722]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:43 compute-2 sudo[340747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:43 compute-2 sudo[340747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:43 compute-2 sudo[340747]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:43.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:44 compute-2 nova_compute[232428]: 2025-11-29 09:06:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:44.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:44 compute-2 ceph-mon[77138]: pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:45.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.230 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.230 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.231 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:06:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:06:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1101150793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.715 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:06:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.890 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.891 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4155MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.891 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.891 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:06:46 compute-2 ceph-mon[77138]: pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1101150793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.993 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:06:46 compute-2 nova_compute[232428]: 2025-11-29 09:06:46.994 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.035 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:06:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:06:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1345827447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:47.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.459 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.465 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.486 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.510 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.510 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.519 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:47.958 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:06:47 compute-2 nova_compute[232428]: 2025-11-29 09:06:47.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:47 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:47.961 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:06:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1345827447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:48 compute-2 nova_compute[232428]: 2025-11-29 09:06:48.002 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:48 compute-2 sudo[340820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:48 compute-2 sudo[340820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:48 compute-2 sudo[340820]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:48 compute-2 sudo[340845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:06:48 compute-2 sudo[340845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:48 compute-2 sudo[340845]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:48 compute-2 sudo[340870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:48 compute-2 sudo[340870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:48 compute-2 sudo[340870]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:48 compute-2 sudo[340895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:06:48 compute-2 sudo[340895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:48 compute-2 sudo[340895]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:06:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:06:49 compute-2 ceph-mon[77138]: pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:06:49 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:06:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:49.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:50 compute-2 nova_compute[232428]: 2025-11-29 09:06:50.511 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2096468359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:06:50 compute-2 ceph-mon[77138]: pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:50 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/655567624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:50.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:51 compute-2 nova_compute[232428]: 2025-11-29 09:06:51.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:06:51 compute-2 nova_compute[232428]: 2025-11-29 09:06:51.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:06:51 compute-2 podman[340952]: 2025-11-29 09:06:51.300162506 +0000 UTC m=+0.060499303 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 09:06:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:52 compute-2 nova_compute[232428]: 2025-11-29 09:06:52.520 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:52 compute-2 ceph-mon[77138]: pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:53 compute-2 nova_compute[232428]: 2025-11-29 09:06:53.003 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:06:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:53.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:06:54 compute-2 ceph-mon[77138]: pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:54 compute-2 ovn_controller[134375]: 2025-11-29T09:06:54Z|01015|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 09:06:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:54.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:55 compute-2 sudo[340976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:06:55 compute-2 sudo[340976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:55 compute-2 sudo[340976]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:55 compute-2 sudo[341001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:06:55 compute-2 sudo[341001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:06:55 compute-2 sudo[341001]: pam_unix(sudo:session): session closed for user root
Nov 29 09:06:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:06:56 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:06:56 compute-2 ceph-mon[77138]: pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:56.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:57.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:57 compute-2 nova_compute[232428]: 2025-11-29 09:06:57.522 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:06:57.963 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:06:58 compute-2 nova_compute[232428]: 2025-11-29 09:06:58.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:06:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:06:58 compute-2 ceph-mon[77138]: pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:06:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:06:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3590118869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:06:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:06:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:06:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:00 compute-2 ceph-mon[77138]: pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2329302652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:00.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:02 compute-2 nova_compute[232428]: 2025-11-29 09:07:02.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:02.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:02 compute-2 ceph-mon[77138]: pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:03 compute-2 nova_compute[232428]: 2025-11-29 09:07:03.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:07:03.374 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:07:03.375 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:07:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:07:03.375 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:07:03 compute-2 sudo[341031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:03 compute-2 sudo[341031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:03.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:03 compute-2 sudo[341031]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:03 compute-2 sudo[341064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:03 compute-2 sudo[341064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:03 compute-2 sudo[341064]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:03 compute-2 podman[341055]: 2025-11-29 09:07:03.592383554 +0000 UTC m=+0.123593531 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:07:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:05 compute-2 ceph-mon[77138]: pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:07 compute-2 ceph-mon[77138]: pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:07.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:07 compute-2 nova_compute[232428]: 2025-11-29 09:07:07.526 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:08 compute-2 nova_compute[232428]: 2025-11-29 09:07:08.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:08 compute-2 ceph-mon[77138]: pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:10 compute-2 podman[341111]: 2025-11-29 09:07:10.66263249 +0000 UTC m=+0.059857782 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:07:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:10 compute-2 ceph-mon[77138]: pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:07:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:11.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:07:12 compute-2 nova_compute[232428]: 2025-11-29 09:07:12.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:12.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:12 compute-2 ceph-mon[77138]: pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:13 compute-2 nova_compute[232428]: 2025-11-29 09:07:13.011 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:14.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:15 compute-2 ceph-mon[77138]: pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:15.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:17 compute-2 ceph-mon[77138]: pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:17.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:17 compute-2 nova_compute[232428]: 2025-11-29 09:07:17.529 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:18 compute-2 nova_compute[232428]: 2025-11-29 09:07:18.015 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:19 compute-2 ceph-mon[77138]: pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:20 compute-2 ceph-mon[77138]: pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:21.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:21 compute-2 podman[341136]: 2025-11-29 09:07:21.653194812 +0000 UTC m=+0.059754199 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 09:07:22 compute-2 nova_compute[232428]: 2025-11-29 09:07:22.531 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:22.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:22 compute-2 ceph-mon[77138]: pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:23 compute-2 nova_compute[232428]: 2025-11-29 09:07:23.017 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:23.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:23 compute-2 sudo[341157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:23 compute-2 sudo[341157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:23 compute-2 sudo[341157]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:23 compute-2 sudo[341182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:23 compute-2 sudo[341182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:23 compute-2 sudo[341182]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:24.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:25 compute-2 ceph-mon[77138]: pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 341 B/s wr, 1 op/s
Nov 29 09:07:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:25.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:26 compute-2 ceph-mon[77138]: pgmap v3966: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 09:07:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:26.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:27.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:27 compute-2 nova_compute[232428]: 2025-11-29 09:07:27.533 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:28 compute-2 nova_compute[232428]: 2025-11-29 09:07:28.022 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:28 compute-2 ceph-mon[77138]: pgmap v3967: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 09:07:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/35439533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:07:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/35439533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:07:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:28.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:29 compute-2 nova_compute[232428]: 2025-11-29 09:07:29.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:29 compute-2 sshd-session[341210]: Invalid user ubuntu from 45.148.10.240 port 50746
Nov 29 09:07:29 compute-2 sshd-session[341210]: Connection closed by invalid user ubuntu 45.148.10.240 port 50746 [preauth]
Nov 29 09:07:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:30.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:30 compute-2 ceph-mon[77138]: pgmap v3968: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 09:07:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.114366) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252114465, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 251, "total_data_size": 1950577, "memory_usage": 1968192, "flush_reason": "Manual Compaction"}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252137895, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1287117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87545, "largest_seqno": 88470, "table_properties": {"data_size": 1282808, "index_size": 2024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9448, "raw_average_key_size": 19, "raw_value_size": 1274216, "raw_average_value_size": 2649, "num_data_blocks": 90, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407178, "oldest_key_time": 1764407178, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 23577 microseconds, and 6966 cpu microseconds.
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.137942) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1287117 bytes OK
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.137966) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.140227) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.140244) EVENT_LOG_v1 {"time_micros": 1764407252140239, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.140262) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1945940, prev total WAL file size 1945940, number of live WAL files 2.
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.141199) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1256KB)], [177(14MB)]
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252141277, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 16268431, "oldest_snapshot_seqno": -1}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 11141 keys, 14302943 bytes, temperature: kUnknown
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252266524, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 14302943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14231218, "index_size": 42748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27909, "raw_key_size": 295631, "raw_average_key_size": 26, "raw_value_size": 14036259, "raw_average_value_size": 1259, "num_data_blocks": 1620, "num_entries": 11141, "num_filter_entries": 11141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.266763) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 14302943 bytes
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.268505) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.3 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 11656, records dropped: 515 output_compression: NoCompression
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.268540) EVENT_LOG_v1 {"time_micros": 1764407252268524, "job": 114, "event": "compaction_finished", "compaction_time_micros": 125317, "compaction_time_cpu_micros": 42141, "output_level": 6, "num_output_files": 1, "total_output_size": 14302943, "num_input_records": 11656, "num_output_records": 11141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252269245, "job": 114, "event": "table_file_deletion", "file_number": 179}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252274798, "job": 114, "event": "table_file_deletion", "file_number": 177}
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.140927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.274890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.274898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.274902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.274906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:07:32.274910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:07:32 compute-2 nova_compute[232428]: 2025-11-29 09:07:32.537 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:32.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:33 compute-2 nova_compute[232428]: 2025-11-29 09:07:33.024 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:33 compute-2 ceph-mon[77138]: pgmap v3969: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 09:07:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:33.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/916988230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:34 compute-2 ceph-mon[77138]: pgmap v3970: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 09:07:34 compute-2 podman[341215]: 2025-11-29 09:07:34.680368552 +0000 UTC m=+0.086494991 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:07:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:07:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1992439454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:07:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1992439454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:07:36 compute-2 ceph-mon[77138]: pgmap v3971: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 29 09:07:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:36.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:37 compute-2 nova_compute[232428]: 2025-11-29 09:07:37.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:37 compute-2 nova_compute[232428]: 2025-11-29 09:07:37.538 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:38 compute-2 nova_compute[232428]: 2025-11-29 09:07:38.025 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:39 compute-2 ceph-mon[77138]: pgmap v3972: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 09:07:39 compute-2 nova_compute[232428]: 2025-11-29 09:07:39.388 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:39 compute-2 nova_compute[232428]: 2025-11-29 09:07:39.389 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:39.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2538792099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:07:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:41 compute-2 ceph-mon[77138]: pgmap v3973: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 09:07:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:41 compute-2 podman[341244]: 2025-11-29 09:07:41.637062506 +0000 UTC m=+0.045993816 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:07:42 compute-2 nova_compute[232428]: 2025-11-29 09:07:42.540 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:42 compute-2 ceph-mon[77138]: pgmap v3974: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:43 compute-2 nova_compute[232428]: 2025-11-29 09:07:43.027 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:43 compute-2 sudo[341264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:43 compute-2 sudo[341264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:43 compute-2 sudo[341264]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:43 compute-2 sudo[341289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:43 compute-2 sudo[341289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:43 compute-2 sudo[341289]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:44 compute-2 nova_compute[232428]: 2025-11-29 09:07:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:44 compute-2 nova_compute[232428]: 2025-11-29 09:07:44.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:07:44 compute-2 nova_compute[232428]: 2025-11-29 09:07:44.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:07:44 compute-2 nova_compute[232428]: 2025-11-29 09:07:44.316 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:07:44 compute-2 ceph-mon[77138]: pgmap v3975: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:07:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:46 compute-2 nova_compute[232428]: 2025-11-29 09:07:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:46 compute-2 ceph-mon[77138]: pgmap v3976: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 29 09:07:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:46.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.243 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.244 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.244 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.244 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.244 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:07:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.542 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:07:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2222820011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.721 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:07:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2222820011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.895 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.897 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4162MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.897 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:07:47 compute-2 nova_compute[232428]: 2025-11-29 09:07:47.897 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.180 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.180 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.210 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:07:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:07:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2268232540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.683 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.690 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.726 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.728 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:07:48 compute-2 nova_compute[232428]: 2025-11-29 09:07:48.728 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:07:48 compute-2 ceph-mon[77138]: pgmap v3977: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 29 09:07:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2268232540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:49 compute-2 nova_compute[232428]: 2025-11-29 09:07:49.718 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:49 compute-2 nova_compute[232428]: 2025-11-29 09:07:49.719 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:50 compute-2 ceph-mon[77138]: pgmap v3978: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 29 09:07:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:51 compute-2 nova_compute[232428]: 2025-11-29 09:07:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:51 compute-2 nova_compute[232428]: 2025-11-29 09:07:51.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:07:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 09:07:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 09:07:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/697145045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:52 compute-2 nova_compute[232428]: 2025-11-29 09:07:52.544 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:52 compute-2 podman[341363]: 2025-11-29 09:07:52.650923714 +0000 UTC m=+0.060533142 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 09:07:52 compute-2 ceph-mon[77138]: pgmap v3979: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 09:07:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3753362989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:07:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:53 compute-2 nova_compute[232428]: 2025-11-29 09:07:53.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:53.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:54 compute-2 ceph-mon[77138]: pgmap v3980: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 09:07:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:55 compute-2 sudo[341386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:55 compute-2 sudo[341386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:55 compute-2 sudo[341386]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:55 compute-2 sudo[341411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:07:55 compute-2 sudo[341411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:55 compute-2 sudo[341411]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:56 compute-2 sudo[341437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:56 compute-2 sudo[341437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:56 compute-2 sudo[341437]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:56 compute-2 sudo[341462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 09:07:56 compute-2 sudo[341462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:56 compute-2 sudo[341462]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:56.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:56 compute-2 ceph-mon[77138]: pgmap v3981: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 22 op/s
Nov 29 09:07:57 compute-2 nova_compute[232428]: 2025-11-29 09:07:57.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:07:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:57 compute-2 nova_compute[232428]: 2025-11-29 09:07:57.546 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 09:07:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 09:07:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:07:57 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:07:57 compute-2 sudo[341506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:57 compute-2 sudo[341506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:57 compute-2 sudo[341506]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:57 compute-2 sudo[341531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:07:57 compute-2 sudo[341531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:57 compute-2 sudo[341531]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:57 compute-2 sudo[341556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:07:57 compute-2 sudo[341556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:57 compute-2 sudo[341556]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:58 compute-2 sudo[341582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:07:58 compute-2 sudo[341582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:07:58 compute-2 nova_compute[232428]: 2025-11-29 09:07:58.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:07:58 compute-2 sudo[341582]: pam_unix(sudo:session): session closed for user root
Nov 29 09:07:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:07:58.574 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:07:58 compute-2 nova_compute[232428]: 2025-11-29 09:07:58.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:07:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:07:58.575 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:07:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:07:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:58.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:07:59 compute-2 ceph-mon[77138]: pgmap v3982: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 21 op/s
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:07:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:07:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:07:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:59.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:07:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/71947940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:08:00 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:08:00.577 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:08:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:00 compute-2 ceph-mon[77138]: pgmap v3983: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 21 op/s
Nov 29 09:08:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/473711722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:08:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1968188615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:08:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:01.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:02 compute-2 nova_compute[232428]: 2025-11-29 09:08:02.548 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:02.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:03 compute-2 nova_compute[232428]: 2025-11-29 09:08:03.037 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:03 compute-2 ceph-mon[77138]: pgmap v3984: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Nov 29 09:08:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1557411659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:08:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1557411659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:08:03.375 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:08:03.376 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:08:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:08:03.377 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:08:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:03 compute-2 sshd-session[341383]: Connection closed by authenticating user root 115.190.194.12 port 46484 [preauth]
Nov 29 09:08:03 compute-2 sudo[341640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:03 compute-2 sudo[341640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:03 compute-2 sudo[341640]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:04 compute-2 sudo[341666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:04 compute-2 sudo[341666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:04 compute-2 sudo[341666]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:04 compute-2 ceph-mon[77138]: pgmap v3985: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Nov 29 09:08:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:04.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:05 compute-2 sudo[341691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:05 compute-2 sudo[341691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:05 compute-2 sudo[341691]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:05 compute-2 sudo[341722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:08:05 compute-2 sudo[341722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:05 compute-2 sudo[341722]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:05 compute-2 podman[341715]: 2025-11-29 09:08:05.647130503 +0000 UTC m=+0.097833190 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:08:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:08:06 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:08:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:06.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:07 compute-2 nova_compute[232428]: 2025-11-29 09:08:07.550 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:07 compute-2 ceph-mon[77138]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 09:08:08 compute-2 nova_compute[232428]: 2025-11-29 09:08:08.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:08 compute-2 ceph-mon[77138]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 09:08:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:08.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:10.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:11 compute-2 ceph-mon[77138]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 09:08:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:12 compute-2 nova_compute[232428]: 2025-11-29 09:08:12.553 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:12 compute-2 podman[341772]: 2025-11-29 09:08:12.690678388 +0000 UTC m=+0.076151523 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:08:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:12.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:12 compute-2 ceph-mon[77138]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 09:08:13 compute-2 nova_compute[232428]: 2025-11-29 09:08:13.040 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:13.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:14.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:15 compute-2 ceph-mon[77138]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 09:08:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:15.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:17.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:17 compute-2 nova_compute[232428]: 2025-11-29 09:08:17.554 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:17 compute-2 ceph-mon[77138]: pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 255 B/s wr, 11 op/s
Nov 29 09:08:18 compute-2 nova_compute[232428]: 2025-11-29 09:08:18.041 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:19 compute-2 ceph-mon[77138]: pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:21 compute-2 ceph-mon[77138]: pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:22 compute-2 nova_compute[232428]: 2025-11-29 09:08:22.556 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:23 compute-2 nova_compute[232428]: 2025-11-29 09:08:23.043 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:23 compute-2 ceph-mon[77138]: pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:23.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:23 compute-2 podman[341796]: 2025-11-29 09:08:23.707172005 +0000 UTC m=+0.110138268 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 09:08:24 compute-2 sudo[341818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:24 compute-2 sudo[341818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:24 compute-2 sudo[341818]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:24 compute-2 sudo[341843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:24 compute-2 sudo[341843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:24 compute-2 sudo[341843]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:24 compute-2 ceph-mon[77138]: pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:25.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:26 compute-2 nova_compute[232428]: 2025-11-29 09:08:26.458 232432 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.33 sec
Nov 29 09:08:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:27 compute-2 ceph-mon[77138]: pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:27 compute-2 nova_compute[232428]: 2025-11-29 09:08:27.558 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:28 compute-2 nova_compute[232428]: 2025-11-29 09:08:28.046 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:08:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/622318878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:08:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:08:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/622318878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:08:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:28 compute-2 ceph-mon[77138]: pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:08:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/622318878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:08:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/622318878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:08:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:29.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:30.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:31 compute-2 ceph-mon[77138]: pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 684 KiB/s rd, 2 op/s
Nov 29 09:08:31 compute-2 nova_compute[232428]: 2025-11-29 09:08:31.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:08:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:32 compute-2 nova_compute[232428]: 2025-11-29 09:08:32.560 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:32.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:33 compute-2 nova_compute[232428]: 2025-11-29 09:08:33.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:33 compute-2 nova_compute[232428]: 2025-11-29 09:08:33.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:08:33 compute-2 nova_compute[232428]: 2025-11-29 09:08:33.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 09:08:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:34 compute-2 ceph-mon[77138]: pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3 op/s
Nov 29 09:08:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:35.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:36 compute-2 ceph-mon[77138]: pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 09:08:36 compute-2 podman[341874]: 2025-11-29 09:08:36.740693912 +0000 UTC m=+0.134402044 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:08:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:36.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:37 compute-2 nova_compute[232428]: 2025-11-29 09:08:37.562 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:37 compute-2 ceph-mon[77138]: pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 09:08:38 compute-2 nova_compute[232428]: 2025-11-29 09:08:38.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:38.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:39.285887) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319286232, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 909, "num_deletes": 250, "total_data_size": 1757840, "memory_usage": 1777784, "flush_reason": "Manual Compaction"}
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Nov 29 09:08:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 09:08:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:39.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319932567, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 761991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88475, "largest_seqno": 89379, "table_properties": {"data_size": 758458, "index_size": 1312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9579, "raw_average_key_size": 20, "raw_value_size": 750852, "raw_average_value_size": 1639, "num_data_blocks": 57, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407253, "oldest_key_time": 1764407253, "file_creation_time": 1764407319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 646783 microseconds, and 6252 cpu microseconds.
Nov 29 09:08:39 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:39.932686) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 761991 bytes OK
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:39.932710) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:40.642069) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:40.642125) EVENT_LOG_v1 {"time_micros": 1764407320642112, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:40.642169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1753243, prev total WAL file size 1769567, number of live WAL files 2.
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:40.643509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303138' seq:72057594037927935, type:22 .. '6D6772737461740033323639' seq:0, type:0; will stop at (end)
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(744KB)], [180(13MB)]
Nov 29 09:08:40 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407320643600, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15064934, "oldest_snapshot_seqno": -1}
Nov 29 09:08:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11107 keys, 11615259 bytes, temperature: kUnknown
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407321543967, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11615259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547715, "index_size": 38695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 295121, "raw_average_key_size": 26, "raw_value_size": 11357469, "raw_average_value_size": 1022, "num_data_blocks": 1452, "num_entries": 11107, "num_filter_entries": 11107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:08:41 compute-2 ceph-mon[77138]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 09:08:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:41.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.544443) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11615259 bytes
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.683009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.7 rd, 12.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.6 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(35.0) write-amplify(15.2) OK, records in: 11599, records dropped: 492 output_compression: NoCompression
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.683043) EVENT_LOG_v1 {"time_micros": 1764407321683030, "job": 116, "event": "compaction_finished", "compaction_time_micros": 900463, "compaction_time_cpu_micros": 62550, "output_level": 6, "num_output_files": 1, "total_output_size": 11615259, "num_input_records": 11599, "num_output_records": 11107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407321683343, "job": 116, "event": "table_file_deletion", "file_number": 182}
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407321685969, "job": 116, "event": "table_file_deletion", "file_number": 180}
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:40.643434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.686084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.686092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.686095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.686099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:08:41.686101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:08:41 compute-2 nova_compute[232428]: 2025-11-29 09:08:41.861 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 09:08:41 compute-2 nova_compute[232428]: 2025-11-29 09:08:41.861 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:08:41 compute-2 nova_compute[232428]: 2025-11-29 09:08:41.862 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 09:08:42 compute-2 ceph-mon[77138]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Nov 29 09:08:42 compute-2 nova_compute[232428]: 2025-11-29 09:08:42.564 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:43 compute-2 nova_compute[232428]: 2025-11-29 09:08:43.052 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:43 compute-2 ceph-mon[77138]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 170 B/s wr, 4 op/s
Nov 29 09:08:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:43 compute-2 podman[341904]: 2025-11-29 09:08:43.667466676 +0000 UTC m=+0.071377517 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 09:08:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:44 compute-2 sudo[341923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:44 compute-2 sudo[341923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:44 compute-2 sudo[341923]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:44 compute-2 sudo[341948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:08:44 compute-2 sudo[341948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:08:44 compute-2 sudo[341948]: pam_unix(sudo:session): session closed for user root
Nov 29 09:08:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:44.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:08:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:45.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:08:45 compute-2 ceph-mon[77138]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 685 KiB/s rd, 255 B/s wr, 4 op/s
Nov 29 09:08:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:46.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:47 compute-2 nova_compute[232428]: 2025-11-29 09:08:47.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:47.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:47 compute-2 ceph-mon[77138]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 09:08:48 compute-2 nova_compute[232428]: 2025-11-29 09:08:48.055 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:49.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:49.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:49 compute-2 ceph-mon[77138]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 09:08:49 compute-2 nova_compute[232428]: 2025-11-29 09:08:49.998 232432 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.54 sec
Nov 29 09:08:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:50 compute-2 ceph-mon[77138]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 09:08:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:52 compute-2 nova_compute[232428]: 2025-11-29 09:08:52.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:53 compute-2 ceph-mon[77138]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Nov 29 09:08:53 compute-2 nova_compute[232428]: 2025-11-29 09:08:53.059 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:53.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:54 compute-2 podman[341978]: 2025-11-29 09:08:54.707817857 +0000 UTC m=+0.104063381 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 09:08:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:55 compute-2 ceph-mon[77138]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 255 B/s wr, 0 op/s
Nov 29 09:08:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:08:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:55.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:08:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:08:57 compute-2 ceph-mon[77138]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Nov 29 09:08:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:57.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:08:57 compute-2 nova_compute[232428]: 2025-11-29 09:08:57.572 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:57.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2903914103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:08:58 compute-2 nova_compute[232428]: 2025-11-29 09:08:58.060 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:08:58 compute-2 nova_compute[232428]: 2025-11-29 09:08:58.874 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:08:58 compute-2 nova_compute[232428]: 2025-11-29 09:08:58.875 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:08:58 compute-2 nova_compute[232428]: 2025-11-29 09:08:58.875 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:08:58 compute-2 nova_compute[232428]: 2025-11-29 09:08:58.876 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:08:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:08:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:08:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:08:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:08:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:59.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:01 compute-2 ceph-mon[77138]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:01.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:02 compute-2 nova_compute[232428]: 2025-11-29 09:09:02.574 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:02 compute-2 ceph-mon[77138]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:03 compute-2 nova_compute[232428]: 2025-11-29 09:09:03.063 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:03.376 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:03.377 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:09:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:03.377 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:09:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:03.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:04 compute-2 ceph-mon[77138]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Nov 29 09:09:04 compute-2 sudo[342004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:04 compute-2 sudo[342004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:04 compute-2 sudo[342004]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:04 compute-2 sudo[342029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:04 compute-2 sudo[342029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:04 compute-2 sudo[342029]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:05 compute-2 ceph-mon[77138]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Nov 29 09:09:05 compute-2 sudo[342054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:05 compute-2 sudo[342054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:05 compute-2 sudo[342054]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:05 compute-2 sudo[342079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:09:05 compute-2 sudo[342079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:05 compute-2 sudo[342079]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:05 compute-2 sudo[342104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:05 compute-2 sudo[342104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:05 compute-2 sudo[342104]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:05 compute-2 sudo[342129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:09:05 compute-2 sudo[342129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:05 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.488 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.489 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.489 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.489 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.490 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:06 compute-2 nova_compute[232428]: 2025-11-29 09:09:06.490 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:06 compute-2 sudo[342129]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:07 compute-2 ceph-mon[77138]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 KiB/s rd, 341 B/s wr, 11 op/s
Nov 29 09:09:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:07 compute-2 nova_compute[232428]: 2025-11-29 09:09:07.575 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:07.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:07 compute-2 podman[342186]: 2025-11-29 09:09:07.689728027 +0000 UTC m=+0.097435527 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 09:09:08 compute-2 nova_compute[232428]: 2025-11-29 09:09:08.065 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:08 compute-2 nova_compute[232428]: 2025-11-29 09:09:08.097 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:08 compute-2 nova_compute[232428]: 2025-11-29 09:09:08.097 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:09:08 compute-2 nova_compute[232428]: 2025-11-29 09:09:08.097 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:09:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:09:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:09.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:09.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:10 compute-2 ceph-mon[77138]: pgmap v4017: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 678 KiB/s wr, 13 op/s
Nov 29 09:09:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3160855647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3660982933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:11 compute-2 ceph-mon[77138]: pgmap v4018: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.577 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.658 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.658 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.659 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.659 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:09:12 compute-2 nova_compute[232428]: 2025-11-29 09:09:12.660 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.069 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:09:13 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4279327163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.321 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.593 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:09:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.595 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4163MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:09:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:13.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.595 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:09:13 compute-2 nova_compute[232428]: 2025-11-29 09:09:13.596 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:09:13 compute-2 ceph-mon[77138]: pgmap v4019: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:09:14 compute-2 podman[342238]: 2025-11-29 09:09:14.716809615 +0000 UTC m=+0.109031563 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 09:09:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:15.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4279327163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:15 compute-2 ceph-mon[77138]: pgmap v4020: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:09:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.082 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.082 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.144 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:09:16 compute-2 sudo[342258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:16 compute-2 sudo[342258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:16 compute-2 sudo[342258]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:16 compute-2 sudo[342284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:09:16 compute-2 sudo[342284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:16 compute-2 sudo[342284]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:09:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2850339279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.727 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.737 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.873 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.875 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.876 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:09:16 compute-2 nova_compute[232428]: 2025-11-29 09:09:16.876 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:16 compute-2 ceph-mon[77138]: pgmap v4021: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 09:09:16 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:09:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1702390274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2850339279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 09:09:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:17.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:17 compute-2 nova_compute[232428]: 2025-11-29 09:09:17.579 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:09:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:09:18 compute-2 nova_compute[232428]: 2025-11-29 09:09:18.070 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:18 compute-2 ceph-mon[77138]: osdmap e429: 3 total, 3 up, 3 in
Nov 29 09:09:19 compute-2 ceph-mon[77138]: pgmap v4023: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Nov 29 09:09:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:19.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:21 compute-2 ceph-mon[77138]: pgmap v4024: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.0 KiB/s rd, 409 B/s wr, 7 op/s
Nov 29 09:09:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:21.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:22 compute-2 nova_compute[232428]: 2025-11-29 09:09:22.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:22 compute-2 ceph-mon[77138]: pgmap v4025: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 09:09:23 compute-2 nova_compute[232428]: 2025-11-29 09:09:23.073 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:23.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:24 compute-2 sudo[342334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:24 compute-2 sudo[342334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:24 compute-2 sudo[342334]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:24 compute-2 sudo[342363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:24 compute-2 sudo[342363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:24 compute-2 sudo[342363]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:24 compute-2 podman[342358]: 2025-11-29 09:09:24.815501785 +0000 UTC m=+0.055199199 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 09:09:24 compute-2 ceph-mon[77138]: pgmap v4026: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 09:09:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:27.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:27 compute-2 nova_compute[232428]: 2025-11-29 09:09:27.582 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:28 compute-2 nova_compute[232428]: 2025-11-29 09:09:28.075 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:09:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202835927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:09:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:09:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202835927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:09:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:29.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:30 compute-2 ceph-mds[83773]: mds.beacon.cephfs.compute-2.fwjrvc missed beacon ack from the monitors
Nov 29 09:09:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:31.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:31.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:32 compute-2 nova_compute[232428]: 2025-11-29 09:09:32.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:33 compute-2 nova_compute[232428]: 2025-11-29 09:09:33.077 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:33 compute-2 ceph-mon[77138]: pgmap v4027: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 09:09:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:33.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:33.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:34 compute-2 ceph-mon[77138]: pgmap v4028: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 476 B/s wr, 7 op/s
Nov 29 09:09:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4202835927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:09:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4202835927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:09:34 compute-2 ceph-mon[77138]: pgmap v4029: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s rd, 426 B/s wr, 6 op/s
Nov 29 09:09:34 compute-2 ceph-mon[77138]: pgmap v4030: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 29 09:09:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1042361463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:35 compute-2 ceph-mon[77138]: pgmap v4031: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:36 compute-2 ceph-mon[77138]: pgmap v4032: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:37.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:37 compute-2 nova_compute[232428]: 2025-11-29 09:09:37.586 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:38 compute-2 nova_compute[232428]: 2025-11-29 09:09:38.079 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:38 compute-2 podman[342411]: 2025-11-29 09:09:38.717436391 +0000 UTC m=+0.125688096 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 09:09:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:40 compute-2 ceph-mon[77138]: pgmap v4033: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:41 compute-2 sshd-session[342438]: Invalid user ubuntu from 45.148.10.240 port 43440
Nov 29 09:09:41 compute-2 sshd-session[342438]: Connection closed by invalid user ubuntu 45.148.10.240 port 43440 [preauth]
Nov 29 09:09:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:41 compute-2 ceph-mon[77138]: pgmap v4034: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:09:42 compute-2 nova_compute[232428]: 2025-11-29 09:09:42.588 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:42 compute-2 nova_compute[232428]: 2025-11-29 09:09:42.589 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:42 compute-2 nova_compute[232428]: 2025-11-29 09:09:42.590 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:42 compute-2 nova_compute[232428]: 2025-11-29 09:09:42.590 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:42 compute-2 ceph-mon[77138]: pgmap v4035: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 511 B/s rd, 0 op/s
Nov 29 09:09:43 compute-2 nova_compute[232428]: 2025-11-29 09:09:43.081 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:43.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:44 compute-2 ceph-mon[77138]: pgmap v4036: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 341 B/s wr, 6 op/s
Nov 29 09:09:44 compute-2 sudo[342442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:44 compute-2 sudo[342442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:44 compute-2 sudo[342442]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:44 compute-2 podman[342466]: 2025-11-29 09:09:44.987746821 +0000 UTC m=+0.063385740 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:09:44 compute-2 sudo[342473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:09:44 compute-2 sudo[342473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:09:44 compute-2 sudo[342473]: pam_unix(sudo:session): session closed for user root
Nov 29 09:09:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:46 compute-2 nova_compute[232428]: 2025-11-29 09:09:46.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:46 compute-2 nova_compute[232428]: 2025-11-29 09:09:46.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:09:46 compute-2 nova_compute[232428]: 2025-11-29 09:09:46.204 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:09:46 compute-2 ceph-mon[77138]: pgmap v4037: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 597 B/s wr, 7 op/s
Nov 29 09:09:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:46 compute-2 nova_compute[232428]: 2025-11-29 09:09:46.641 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:09:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:47.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:47 compute-2 nova_compute[232428]: 2025-11-29 09:09:47.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.083 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.368 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.368 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.369 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.369 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.369 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:09:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:48.642 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:09:48 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:48.643 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.643 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:09:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2838246535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:48 compute-2 nova_compute[232428]: 2025-11-29 09:09:48.866 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:09:49 compute-2 nova_compute[232428]: 2025-11-29 09:09:49.040 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:09:49 compute-2 nova_compute[232428]: 2025-11-29 09:09:49.041 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4152MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:09:49 compute-2 nova_compute[232428]: 2025-11-29 09:09:49.042 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:09:49 compute-2 nova_compute[232428]: 2025-11-29 09:09:49.042 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:09:49 compute-2 ceph-mon[77138]: pgmap v4038: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 09:09:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2838246535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.115 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.116 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.264 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.356 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.357 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.414 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.475 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.528 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:09:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:09:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1094483220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.962 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:09:50 compute-2 nova_compute[232428]: 2025-11-29 09:09:50.969 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:09:51 compute-2 ceph-mon[77138]: pgmap v4039: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 09:09:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1094483220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:52 compute-2 nova_compute[232428]: 2025-11-29 09:09:52.593 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:52 compute-2 ceph-mon[77138]: pgmap v4040: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 09:09:53 compute-2 nova_compute[232428]: 2025-11-29 09:09:53.086 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:53 compute-2 nova_compute[232428]: 2025-11-29 09:09:53.322 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:09:53 compute-2 nova_compute[232428]: 2025-11-29 09:09:53.326 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:09:53 compute-2 nova_compute[232428]: 2025-11-29 09:09:53.326 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:09:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:09:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606300608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:09:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3807548406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:54 compute-2 nova_compute[232428]: 2025-11-29 09:09:54.317 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:54 compute-2 nova_compute[232428]: 2025-11-29 09:09:54.318 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:54 compute-2 nova_compute[232428]: 2025-11-29 09:09:54.318 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:54 compute-2 nova_compute[232428]: 2025-11-29 09:09:54.318 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:09:54 compute-2 nova_compute[232428]: 2025-11-29 09:09:54.318 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:09:54 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:09:54.645 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:09:54 compute-2 ceph-mon[77138]: pgmap v4041: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 09:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3606300608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:09:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3450250076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:09:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:55 compute-2 podman[342561]: 2025-11-29 09:09:55.654320591 +0000 UTC m=+0.064938998 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 09:09:56 compute-2 ceph-mon[77138]: pgmap v4042: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 597 B/s wr, 4 op/s
Nov 29 09:09:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:09:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:09:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:57.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:09:57 compute-2 nova_compute[232428]: 2025-11-29 09:09:57.595 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:57.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:58 compute-2 nova_compute[232428]: 2025-11-29 09:09:58.089 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:09:58 compute-2 ceph-mon[77138]: pgmap v4043: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 09:09:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:09:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:09:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:09:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1739398218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:10:00 compute-2 ceph-mon[77138]: pgmap v4044: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:10:00 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 09:10:01 compute-2 nova_compute[232428]: 2025-11-29 09:10:01.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:01.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:01.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:02 compute-2 nova_compute[232428]: 2025-11-29 09:10:02.597 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:02 compute-2 ceph-mon[77138]: pgmap v4045: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 8 op/s
Nov 29 09:10:03 compute-2 nova_compute[232428]: 2025-11-29 09:10:03.091 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:03.377 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:03.378 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:10:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:03.378 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:10:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:10:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:10:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:05 compute-2 sudo[342586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:05 compute-2 sudo[342586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:05 compute-2 sudo[342586]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:05 compute-2 sudo[342611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:05 compute-2 sudo[342611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:05 compute-2 sudo[342611]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:05.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:06 compute-2 ceph-mon[77138]: pgmap v4046: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 708 KiB/s rd, 12 KiB/s wr, 35 op/s
Nov 29 09:10:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:07 compute-2 nova_compute[232428]: 2025-11-29 09:10:07.600 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:07 compute-2 ceph-mon[77138]: pgmap v4047: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 49 op/s
Nov 29 09:10:08 compute-2 nova_compute[232428]: 2025-11-29 09:10:08.094 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:09 compute-2 ceph-mon[77138]: pgmap v4048: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 09:10:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:09.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:09 compute-2 podman[342638]: 2025-11-29 09:10:09.678462565 +0000 UTC m=+0.082347453 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:10:11 compute-2 ceph-mon[77138]: pgmap v4049: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 29 09:10:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:11.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:12 compute-2 ceph-mon[77138]: pgmap v4050: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Nov 29 09:10:12 compute-2 nova_compute[232428]: 2025-11-29 09:10:12.601 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:13 compute-2 nova_compute[232428]: 2025-11-29 09:10:13.096 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:15 compute-2 ceph-mon[77138]: pgmap v4051: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 80 op/s
Nov 29 09:10:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:15.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:15.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:15 compute-2 podman[342668]: 2025-11-29 09:10:15.672278754 +0000 UTC m=+0.071360445 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 29 09:10:16 compute-2 sudo[342689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:16 compute-2 sudo[342689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:16 compute-2 sudo[342689]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:16 compute-2 sudo[342714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:10:16 compute-2 sudo[342714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:16 compute-2 sudo[342714]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:16 compute-2 sudo[342739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:16 compute-2 sudo[342739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:16 compute-2 sudo[342739]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:16 compute-2 sudo[342764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:10:16 compute-2 sudo[342764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:16 compute-2 sudo[342764]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:16 compute-2 ceph-mon[77138]: pgmap v4052: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 597 B/s wr, 53 op/s
Nov 29 09:10:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:17.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:17 compute-2 nova_compute[232428]: 2025-11-29 09:10:17.603 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:17.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:18 compute-2 nova_compute[232428]: 2025-11-29 09:10:18.100 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:10:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:10:19 compute-2 ceph-mon[77138]: pgmap v4053: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 856 KiB/s rd, 597 B/s wr, 39 op/s
Nov 29 09:10:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3233531520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:19.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:19.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:21 compute-2 ceph-mon[77138]: pgmap v4054: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 144 KiB/s rd, 597 B/s wr, 16 op/s
Nov 29 09:10:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:21.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/513338051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:22 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/992761591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:22 compute-2 nova_compute[232428]: 2025-11-29 09:10:22.605 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:23 compute-2 nova_compute[232428]: 2025-11-29 09:10:23.103 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:23.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:23 compute-2 ceph-mon[77138]: pgmap v4055: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Nov 29 09:10:23 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:10:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:10:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68314918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:10:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:10:23 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68314918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:10:23 compute-2 sudo[342824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:23 compute-2 sudo[342824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:23 compute-2 sudo[342824]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:24 compute-2 sudo[342850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:10:24 compute-2 sudo[342850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:24 compute-2 sudo[342850]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:24 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:10:24 compute-2 ceph-mon[77138]: pgmap v4056: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 682 B/s wr, 8 op/s
Nov 29 09:10:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/68314918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:10:24 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/68314918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:10:25 compute-2 sudo[342875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:25 compute-2 sudo[342875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:25 compute-2 sudo[342875]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:25 compute-2 sudo[342900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:25 compute-2 sudo[342900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:25 compute-2 sudo[342900]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:25.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:25 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 09:10:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:26 compute-2 podman[342926]: 2025-11-29 09:10:26.719291491 +0000 UTC m=+0.112041686 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 09:10:27 compute-2 ceph-mon[77138]: pgmap v4057: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 KiB/s rd, 597 B/s wr, 9 op/s
Nov 29 09:10:27 compute-2 ceph-mon[77138]: osdmap e430: 3 total, 3 up, 3 in
Nov 29 09:10:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:27.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:27 compute-2 nova_compute[232428]: 2025-11-29 09:10:27.607 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:28 compute-2 nova_compute[232428]: 2025-11-29 09:10:28.104 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:10:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1214833642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:10:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:10:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1214833642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:10:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:29.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:29 compute-2 ceph-mon[77138]: pgmap v4059: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 511 B/s wr, 21 op/s
Nov 29 09:10:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1214833642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:10:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1214833642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:10:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4025583321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:10:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4025583321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:10:30 compute-2 ceph-mon[77138]: pgmap v4060: 305 pgs: 305 active+clean; 135 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.9 KiB/s wr, 49 op/s
Nov 29 09:10:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:31.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:31.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:32 compute-2 nova_compute[232428]: 2025-11-29 09:10:32.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:32 compute-2 nova_compute[232428]: 2025-11-29 09:10:32.609 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 09:10:32 compute-2 ceph-mon[77138]: pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 48 op/s
Nov 29 09:10:32 compute-2 ceph-mon[77138]: osdmap e431: 3 total, 3 up, 3 in
Nov 29 09:10:33 compute-2 nova_compute[232428]: 2025-11-29 09:10:33.106 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:33.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:33.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:34 compute-2 ceph-mon[77138]: pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 57 op/s
Nov 29 09:10:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:35.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:36 compute-2 ceph-mon[77138]: pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 865 KiB/s rd, 1.5 KiB/s wr, 47 op/s
Nov 29 09:10:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:37.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:37 compute-2 nova_compute[232428]: 2025-11-29 09:10:37.611 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:38 compute-2 nova_compute[232428]: 2025-11-29 09:10:38.108 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:38 compute-2 ceph-mon[77138]: pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 KiB/s wr, 37 op/s
Nov 29 09:10:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:40 compute-2 nova_compute[232428]: 2025-11-29 09:10:40.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:40 compute-2 podman[342953]: 2025-11-29 09:10:40.667160481 +0000 UTC m=+0.077635099 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 09:10:40 compute-2 ceph-mon[77138]: pgmap v4066: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 573 KiB/s wr, 19 op/s
Nov 29 09:10:41 compute-2 nova_compute[232428]: 2025-11-29 09:10:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:41.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:42 compute-2 nova_compute[232428]: 2025-11-29 09:10:42.612 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:43 compute-2 ceph-mon[77138]: pgmap v4067: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 32 op/s
Nov 29 09:10:43 compute-2 nova_compute[232428]: 2025-11-29 09:10:43.111 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:43.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:45 compute-2 ceph-mon[77138]: pgmap v4068: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 44 op/s
Nov 29 09:10:45 compute-2 sudo[342981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:45 compute-2 sudo[342981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:45 compute-2 sudo[342981]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:45.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:45 compute-2 sudo[343006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:10:45 compute-2 sudo[343006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:10:45 compute-2 sudo[343006]: pam_unix(sudo:session): session closed for user root
Nov 29 09:10:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:46 compute-2 ceph-mon[77138]: pgmap v4069: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 29 09:10:46 compute-2 nova_compute[232428]: 2025-11-29 09:10:46.606 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:46.607 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:10:46 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:46.608 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:10:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:46 compute-2 podman[343032]: 2025-11-29 09:10:46.653671915 +0000 UTC m=+0.058576372 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:10:47 compute-2 nova_compute[232428]: 2025-11-29 09:10:47.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:47 compute-2 nova_compute[232428]: 2025-11-29 09:10:47.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:10:47 compute-2 nova_compute[232428]: 2025-11-29 09:10:47.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:10:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:47.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:47 compute-2 nova_compute[232428]: 2025-11-29 09:10:47.514 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:10:47 compute-2 nova_compute[232428]: 2025-11-29 09:10:47.614 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:48 compute-2 nova_compute[232428]: 2025-11-29 09:10:48.112 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:48 compute-2 ceph-mon[77138]: pgmap v4070: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 09:10:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:49.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:49 compute-2 nova_compute[232428]: 2025-11-29 09:10:49.504 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:49.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.307 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.307 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.308 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.308 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.308 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:10:50 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:10:50 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4003286432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.787 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.952 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.953 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4161MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.954 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:10:50 compute-2 nova_compute[232428]: 2025-11-29 09:10:50.954 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:10:51 compute-2 ceph-mon[77138]: pgmap v4071: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 704 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 09:10:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4003286432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.131 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.131 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.155 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:10:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:51.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:10:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1273534665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.591 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.600 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:10:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.684 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.688 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:10:51 compute-2 nova_compute[232428]: 2025-11-29 09:10:51.688 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:10:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1273534665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:52 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:10:52.609 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:10:52 compute-2 nova_compute[232428]: 2025-11-29 09:10:52.616 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:53 compute-2 nova_compute[232428]: 2025-11-29 09:10:53.114 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:53 compute-2 ceph-mon[77138]: pgmap v4072: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Nov 29 09:10:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:53.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:53 compute-2 nova_compute[232428]: 2025-11-29 09:10:53.690 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:53 compute-2 nova_compute[232428]: 2025-11-29 09:10:53.691 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:53.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:54 compute-2 ceph-mon[77138]: pgmap v4073: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 794 KiB/s wr, 13 op/s
Nov 29 09:10:55 compute-2 nova_compute[232428]: 2025-11-29 09:10:55.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:10:55 compute-2 nova_compute[232428]: 2025-11-29 09:10:55.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:10:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3107028670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:56 compute-2 ceph-mon[77138]: pgmap v4074: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:10:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1441298852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:10:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:10:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978066f0 =====
Nov 29 09:10:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:57.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:10:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978066f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:57 compute-2 nova_compute[232428]: 2025-11-29 09:10:57.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:57 compute-2 radosgw[83394]: beast: 0x7f55978066f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:57 compute-2 podman[343099]: 2025-11-29 09:10:57.84236815 +0000 UTC m=+0.069432127 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 09:10:58 compute-2 nova_compute[232428]: 2025-11-29 09:10:58.117 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:10:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3950441658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:10:59 compute-2 ceph-mon[77138]: pgmap v4075: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:10:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:10:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:10:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:10:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1502372707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:01 compute-2 ceph-mon[77138]: pgmap v4076: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:11:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3148641319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:11:01 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343865731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:11:02 compute-2 nova_compute[232428]: 2025-11-29 09:11:02.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1343865731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:11:02 compute-2 ceph-mon[77138]: pgmap v4077: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:11:03 compute-2 nova_compute[232428]: 2025-11-29 09:11:03.118 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:11:03.378 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:11:03.379 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:11:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:11:03.379 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:11:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:04 compute-2 ceph-mon[77138]: pgmap v4078: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:11:05 compute-2 sudo[343125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:05 compute-2 sudo[343125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:05 compute-2 sudo[343125]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:05 compute-2 sudo[343150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:05 compute-2 sudo[343150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:05 compute-2 sudo[343150]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:06 compute-2 ceph-mon[77138]: pgmap v4079: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:11:07 compute-2 nova_compute[232428]: 2025-11-29 09:11:07.753 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:07.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:07.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3294026258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:11:08 compute-2 nova_compute[232428]: 2025-11-29 09:11:08.122 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:09 compute-2 ceph-mon[77138]: pgmap v4080: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:11:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:09.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:11 compute-2 podman[343178]: 2025-11-29 09:11:11.733252945 +0000 UTC m=+0.135111375 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:11:11 compute-2 ceph-mon[77138]: pgmap v4081: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Nov 29 09:11:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:11.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:11.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:12 compute-2 ceph-mon[77138]: pgmap v4082: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 255 B/s wr, 5 op/s
Nov 29 09:11:12 compute-2 nova_compute[232428]: 2025-11-29 09:11:12.755 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:13 compute-2 nova_compute[232428]: 2025-11-29 09:11:13.124 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:15.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:16 compute-2 ceph-mon[77138]: pgmap v4083: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 805 KiB/s rd, 511 B/s wr, 34 op/s
Nov 29 09:11:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:17 compute-2 podman[343208]: 2025-11-29 09:11:17.653037159 +0000 UTC m=+0.057288522 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 09:11:17 compute-2 nova_compute[232428]: 2025-11-29 09:11:17.760 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:17.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:17.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:18 compute-2 ceph-mon[77138]: pgmap v4084: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 55 op/s
Nov 29 09:11:18 compute-2 nova_compute[232428]: 2025-11-29 09:11:18.126 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.170576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478170682, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1736, "num_deletes": 252, "total_data_size": 4047718, "memory_usage": 4099280, "flush_reason": "Manual Compaction"}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478188265, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 2658815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89385, "largest_seqno": 91115, "table_properties": {"data_size": 2651603, "index_size": 4218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15405, "raw_average_key_size": 20, "raw_value_size": 2637041, "raw_average_value_size": 3474, "num_data_blocks": 186, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407320, "oldest_key_time": 1764407320, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 17747 microseconds, and 7161 cpu microseconds.
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.188333) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 2658815 bytes OK
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.188354) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.189460) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.189474) EVENT_LOG_v1 {"time_micros": 1764407478189469, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.189490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 4039862, prev total WAL file size 4039862, number of live WAL files 2.
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.190630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(2596KB)], [183(11MB)]
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478190727, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 14274074, "oldest_snapshot_seqno": -1}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 11345 keys, 12314313 bytes, temperature: kUnknown
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478263297, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 12314313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244462, "index_size": 40362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300862, "raw_average_key_size": 26, "raw_value_size": 12049478, "raw_average_value_size": 1062, "num_data_blocks": 1517, "num_entries": 11345, "num_filter_entries": 11345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.263618) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 12314313 bytes
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.265063) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.4 rd, 169.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 11866, records dropped: 521 output_compression: NoCompression
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.265084) EVENT_LOG_v1 {"time_micros": 1764407478265074, "job": 118, "event": "compaction_finished", "compaction_time_micros": 72674, "compaction_time_cpu_micros": 31370, "output_level": 6, "num_output_files": 1, "total_output_size": 12314313, "num_input_records": 11866, "num_output_records": 11345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478265637, "job": 118, "event": "table_file_deletion", "file_number": 185}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478267869, "job": 118, "event": "table_file_deletion", "file_number": 183}
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.190517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.267955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.267961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.267962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.267963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:18 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:11:18.267965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:11:19 compute-2 ceph-mon[77138]: pgmap v4085: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 09:11:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:19.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:20 compute-2 ceph-mon[77138]: pgmap v4086: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 09:11:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:21.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:21 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:22 compute-2 nova_compute[232428]: 2025-11-29 09:11:22.763 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:23 compute-2 nova_compute[232428]: 2025-11-29 09:11:23.128 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:23 compute-2 ceph-mon[77138]: pgmap v4087: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 09:11:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:23.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:24 compute-2 sudo[343231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:24 compute-2 sudo[343231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:24 compute-2 sudo[343231]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:24 compute-2 sudo[343256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:11:24 compute-2 sudo[343256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:24 compute-2 sudo[343256]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:24 compute-2 sudo[343281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:24 compute-2 sudo[343281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:24 compute-2 sudo[343281]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:24 compute-2 sudo[343306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:11:24 compute-2 sudo[343306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:24 compute-2 sudo[343306]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:25 compute-2 sudo[343361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:25 compute-2 sudo[343361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:25 compute-2 sudo[343361]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:25 compute-2 sudo[343386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:25 compute-2 sudo[343386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:25 compute-2 sudo[343386]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:25.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:25.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:26 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:26 compute-2 ceph-mon[77138]: pgmap v4088: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 510 KiB/s wr, 73 op/s
Nov 29 09:11:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:11:26 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:11:27 compute-2 nova_compute[232428]: 2025-11-29 09:11:27.768 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:27.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:27.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:28 compute-2 nova_compute[232428]: 2025-11-29 09:11:28.129 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:28 compute-2 ceph-mon[77138]: pgmap v4089: 305 pgs: 305 active+clean; 173 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 895 KiB/s wr, 62 op/s
Nov 29 09:11:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:11:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:11:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:11:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:11:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:11:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3197153029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:11:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:11:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3197153029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:11:28 compute-2 podman[343413]: 2025-11-29 09:11:28.679122933 +0000 UTC m=+0.071265152 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:11:29 compute-2 ceph-mon[77138]: pgmap v4090: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 769 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Nov 29 09:11:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3197153029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:11:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3197153029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:11:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:29.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:30 compute-2 ceph-mon[77138]: pgmap v4091: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 09:11:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:31.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:11:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.3 total, 600.0 interval
                                           Cumulative writes: 18K writes, 91K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1463 writes, 6967 keys, 1463 commit groups, 1.0 writes per commit group, ingest: 15.07 MB, 0.03 MB/s
                                           Interval WAL: 1463 writes, 1463 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.4      3.32              0.52        59    0.056       0      0       0.0       0.0
                                             L6      1/0   11.74 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     80.4     69.0      9.00              2.29        58    0.155    487K    31K       0.0       0.0
                                            Sum      1/0   11.74 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     58.7     59.7     12.32              2.81       117    0.105    487K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0     31.1     31.7      2.31              0.27        10    0.231     58K   2578       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     80.4     69.0      9.00              2.29        58    0.155    487K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.3      3.24              0.52        58    0.056       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.112, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.72 GB write, 0.10 MB/s write, 0.71 GB read, 0.10 MB/s read, 12.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 2.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 81.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.00046 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4428,77.65 MB,25.5427%) FilterBlock(117,1.31 MB,0.430413%) IndexBlock(117,2.11 MB,0.693688%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 09:11:32 compute-2 nova_compute[232428]: 2025-11-29 09:11:32.771 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:32 compute-2 ceph-mon[77138]: pgmap v4092: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 09:11:33 compute-2 nova_compute[232428]: 2025-11-29 09:11:33.131 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:33 compute-2 nova_compute[232428]: 2025-11-29 09:11:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:33.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:34 compute-2 sudo[343438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:34 compute-2 sudo[343438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:34 compute-2 sudo[343438]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:34 compute-2 sudo[343463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:11:34 compute-2 sudo[343463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:34 compute-2 sudo[343463]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:11:34 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:11:34 compute-2 ceph-mon[77138]: pgmap v4093: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 09:11:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:35.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:37 compute-2 ceph-mon[77138]: pgmap v4094: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 367 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Nov 29 09:11:37 compute-2 nova_compute[232428]: 2025-11-29 09:11:37.775 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:37.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:37.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:38 compute-2 nova_compute[232428]: 2025-11-29 09:11:38.132 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 09:11:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:39.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:39 compute-2 ceph-mon[77138]: pgmap v4095: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 09:11:39 compute-2 ceph-mon[77138]: osdmap e432: 3 total, 3 up, 3 in
Nov 29 09:11:41 compute-2 ceph-mon[77138]: pgmap v4097: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 16 KiB/s wr, 11 op/s
Nov 29 09:11:41 compute-2 nova_compute[232428]: 2025-11-29 09:11:41.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:41.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:41.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:42 compute-2 nova_compute[232428]: 2025-11-29 09:11:42.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:42 compute-2 podman[343492]: 2025-11-29 09:11:42.710797789 +0000 UTC m=+0.103909006 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:11:42 compute-2 nova_compute[232428]: 2025-11-29 09:11:42.777 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:43 compute-2 nova_compute[232428]: 2025-11-29 09:11:43.135 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:43 compute-2 ceph-mon[77138]: pgmap v4098: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 13 op/s
Nov 29 09:11:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:43.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:45 compute-2 ceph-mon[77138]: pgmap v4099: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 3.5 KiB/s wr, 17 op/s
Nov 29 09:11:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:11:45 compute-2 sudo[343517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:45 compute-2 sudo[343517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:45 compute-2 sudo[343517]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:45 compute-2 sudo[343542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:11:45 compute-2 sudo[343542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:11:45 compute-2 sudo[343542]: pam_unix(sudo:session): session closed for user root
Nov 29 09:11:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:47 compute-2 sshd-session[343568]: Invalid user ubuntu from 45.148.10.240 port 55146
Nov 29 09:11:47 compute-2 sshd-session[343568]: Connection closed by invalid user ubuntu 45.148.10.240 port 55146 [preauth]
Nov 29 09:11:47 compute-2 ceph-mon[77138]: pgmap v4100: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 3.6 KiB/s wr, 20 op/s
Nov 29 09:11:47 compute-2 nova_compute[232428]: 2025-11-29 09:11:47.780 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:47.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:48 compute-2 nova_compute[232428]: 2025-11-29 09:11:48.137 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:48 compute-2 nova_compute[232428]: 2025-11-29 09:11:48.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:48 compute-2 nova_compute[232428]: 2025-11-29 09:11:48.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:11:48 compute-2 nova_compute[232428]: 2025-11-29 09:11:48.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:11:48 compute-2 nova_compute[232428]: 2025-11-29 09:11:48.218 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:11:48 compute-2 podman[343571]: 2025-11-29 09:11:48.663815234 +0000 UTC m=+0.067704021 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:11:48 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3033889382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:48 compute-2 ceph-mon[77138]: pgmap v4101: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 3.6 KiB/s wr, 19 op/s
Nov 29 09:11:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:49.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:49.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.429 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.430 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.430 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.430 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:11:50 compute-2 nova_compute[232428]: 2025-11-29 09:11:50.431 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:11:51 compute-2 ceph-mon[77138]: pgmap v4102: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 3.0 KiB/s wr, 17 op/s
Nov 29 09:11:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:11:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3999120386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:51 compute-2 nova_compute[232428]: 2025-11-29 09:11:51.296 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.865s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:11:51 compute-2 nova_compute[232428]: 2025-11-29 09:11:51.446 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:11:51 compute-2 nova_compute[232428]: 2025-11-29 09:11:51.448 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4177MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:11:51 compute-2 nova_compute[232428]: 2025-11-29 09:11:51.448 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:11:51 compute-2 nova_compute[232428]: 2025-11-29 09:11:51.448 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:11:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:51.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:52 compute-2 nova_compute[232428]: 2025-11-29 09:11:52.456 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:11:52 compute-2 nova_compute[232428]: 2025-11-29 09:11:52.456 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:11:52 compute-2 nova_compute[232428]: 2025-11-29 09:11:52.474 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:11:52 compute-2 nova_compute[232428]: 2025-11-29 09:11:52.784 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.140 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3999120386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:11:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2690304687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.327 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.336 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.750 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.752 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:11:53 compute-2 nova_compute[232428]: 2025-11-29 09:11:53.753 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:11:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:53.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:53.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:54 compute-2 ceph-mon[77138]: pgmap v4103: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 2.8 KiB/s wr, 13 op/s
Nov 29 09:11:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2690304687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:11:55 compute-2 nova_compute[232428]: 2025-11-29 09:11:55.754 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:55 compute-2 nova_compute[232428]: 2025-11-29 09:11:55.755 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:55.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:55.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:55 compute-2 ceph-mon[77138]: pgmap v4104: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.2 KiB/s rd, 2.5 KiB/s wr, 12 op/s
Nov 29 09:11:56 compute-2 nova_compute[232428]: 2025-11-29 09:11:56.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:11:56 compute-2 nova_compute[232428]: 2025-11-29 09:11:56.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:11:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:11:57 compute-2 nova_compute[232428]: 2025-11-29 09:11:57.787 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:57.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:58 compute-2 nova_compute[232428]: 2025-11-29 09:11:58.690 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:11:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:11:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432649728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:11:59 compute-2 ceph-mon[77138]: pgmap v4105: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 1023 B/s wr, 12 op/s
Nov 29 09:11:59 compute-2 podman[343639]: 2025-11-29 09:11:59.690085735 +0000 UTC m=+0.092805505 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:11:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:11:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:59.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:11:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:11:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:11:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:00 compute-2 ceph-mon[77138]: pgmap v4106: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 1.7 KiB/s wr, 12 op/s
Nov 29 09:12:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4028295833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2432649728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:12:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2215405446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:00 compute-2 ceph-mon[77138]: pgmap v4107: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 3.7 KiB/s wr, 13 op/s
Nov 29 09:12:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1244052288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:01.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:01.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:02 compute-2 nova_compute[232428]: 2025-11-29 09:12:02.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:02 compute-2 nova_compute[232428]: 2025-11-29 09:12:02.791 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:03.379 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:03.380 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:12:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:03.380 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:12:03 compute-2 nova_compute[232428]: 2025-11-29 09:12:03.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:03.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1491720927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/179801139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:12:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:03.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:04 compute-2 ceph-mon[77138]: pgmap v4108: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 3.7 KiB/s wr, 10 op/s
Nov 29 09:12:04 compute-2 ceph-mon[77138]: pgmap v4109: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 3.4 KiB/s wr, 7 op/s
Nov 29 09:12:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:05.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:06 compute-2 sudo[343662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:06 compute-2 sudo[343662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:06 compute-2 sudo[343662]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:06 compute-2 sudo[343688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:06 compute-2 sudo[343688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:06 compute-2 sudo[343688]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:06 compute-2 ceph-mon[77138]: pgmap v4110: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 3.3 KiB/s wr, 6 op/s
Nov 29 09:12:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:07 compute-2 nova_compute[232428]: 2025-11-29 09:12:07.796 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:07.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:08 compute-2 nova_compute[232428]: 2025-11-29 09:12:08.692 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:09 compute-2 ceph-mgr[77498]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 09:12:09 compute-2 nova_compute[232428]: 2025-11-29 09:12:09.183 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:09.184 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:12:09 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:09.185 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:12:09 compute-2 ceph-mon[77138]: pgmap v4111: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 2.9 KiB/s wr, 2 op/s
Nov 29 09:12:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:09.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:09.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:10 compute-2 ceph-mon[77138]: pgmap v4112: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 17 KiB/s wr, 13 op/s
Nov 29 09:12:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:12:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:12:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:11 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:12 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:12.186 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:12:12 compute-2 nova_compute[232428]: 2025-11-29 09:12:12.800 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:13 compute-2 ceph-mon[77138]: pgmap v4113: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 652 KiB/s rd, 15 KiB/s wr, 34 op/s
Nov 29 09:12:13 compute-2 nova_compute[232428]: 2025-11-29 09:12:13.695 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:13 compute-2 podman[343716]: 2025-11-29 09:12:13.722884144 +0000 UTC m=+0.116064239 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 09:12:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:14 compute-2 ceph-mon[77138]: pgmap v4114: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 61 op/s
Nov 29 09:12:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:15.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:15.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:17 compute-2 ceph-mon[77138]: pgmap v4115: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 09:12:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:17 compute-2 nova_compute[232428]: 2025-11-29 09:12:17.804 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:17.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:17.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:18 compute-2 nova_compute[232428]: 2025-11-29 09:12:18.697 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:12:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 78K writes, 308K keys, 78K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s
                                           Cumulative WAL: 78K writes, 29K syncs, 2.67 writes per sync, written: 0.31 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1807 writes, 6006 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 5.18 MB, 0.01 MB/s
                                           Interval WAL: 1807 writes, 769 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:12:19 compute-2 ceph-mon[77138]: pgmap v4116: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 09:12:19 compute-2 podman[343746]: 2025-11-29 09:12:19.699169895 +0000 UTC m=+0.098039815 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 09:12:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:19.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:19.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:21 compute-2 ceph-mon[77138]: pgmap v4117: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 09:12:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:21.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:22 compute-2 nova_compute[232428]: 2025-11-29 09:12:22.807 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:22 compute-2 ceph-mon[77138]: pgmap v4118: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.8 KiB/s wr, 65 op/s
Nov 29 09:12:23 compute-2 nova_compute[232428]: 2025-11-29 09:12:23.700 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:23.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:25 compute-2 ceph-mon[77138]: pgmap v4119: 305 pgs: 305 active+clean; 208 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 215 KiB/s wr, 55 op/s
Nov 29 09:12:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:25.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:26 compute-2 sudo[343766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:26 compute-2 sudo[343766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:26 compute-2 sudo[343766]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:26 compute-2 sudo[343791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:26 compute-2 sudo[343791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:26 compute-2 sudo[343791]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:26 compute-2 ceph-mon[77138]: pgmap v4120: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 240 KiB/s wr, 50 op/s
Nov 29 09:12:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:27 compute-2 nova_compute[232428]: 2025-11-29 09:12:27.812 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:27.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:27.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:12:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4028359071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:12:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:12:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4028359071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:12:28 compute-2 nova_compute[232428]: 2025-11-29 09:12:28.702 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:29 compute-2 ceph-mon[77138]: pgmap v4121: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 937 KiB/s rd, 493 KiB/s wr, 48 op/s
Nov 29 09:12:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4028359071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:12:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/4028359071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:12:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:29.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:29.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:30 compute-2 podman[343818]: 2025-11-29 09:12:30.666221435 +0000 UTC m=+0.071214150 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 09:12:31 compute-2 ceph-mon[77138]: pgmap v4122: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 529 KiB/s wr, 56 op/s
Nov 29 09:12:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:31.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:32 compute-2 ceph-mon[77138]: pgmap v4123: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 529 KiB/s wr, 56 op/s
Nov 29 09:12:32 compute-2 nova_compute[232428]: 2025-11-29 09:12:32.816 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:33 compute-2 nova_compute[232428]: 2025-11-29 09:12:33.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:33 compute-2 nova_compute[232428]: 2025-11-29 09:12:33.704 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:34 compute-2 sudo[343841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:34 compute-2 sudo[343841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:34 compute-2 sudo[343841]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:34 compute-2 sudo[343866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:12:34 compute-2 sudo[343866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:34 compute-2 sudo[343866]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:34 compute-2 sudo[343891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:34 compute-2 sudo[343891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:34 compute-2 sudo[343891]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:34 compute-2 sudo[343916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:12:34 compute-2 sudo[343916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:35 compute-2 sudo[343916]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:35 compute-2 ceph-mon[77138]: pgmap v4124: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 539 KiB/s wr, 57 op/s
Nov 29 09:12:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:36 compute-2 ceph-mon[77138]: pgmap v4125: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 333 KiB/s wr, 46 op/s
Nov 29 09:12:37 compute-2 nova_compute[232428]: 2025-11-29 09:12:37.820 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:38 compute-2 nova_compute[232428]: 2025-11-29 09:12:38.706 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:39 compute-2 nova_compute[232428]: 2025-11-29 09:12:39.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:39 compute-2 ceph-mon[77138]: pgmap v4126: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 308 KiB/s wr, 24 op/s
Nov 29 09:12:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:39.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:39.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:12:40 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:12:40 compute-2 ceph-mon[77138]: pgmap v4127: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 124 KiB/s rd, 50 KiB/s wr, 9 op/s
Nov 29 09:12:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:41.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:42 compute-2 nova_compute[232428]: 2025-11-29 09:12:42.823 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:43 compute-2 nova_compute[232428]: 2025-11-29 09:12:43.266 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:43 compute-2 nova_compute[232428]: 2025-11-29 09:12:43.266 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:43 compute-2 ceph-mon[77138]: pgmap v4128: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 1 op/s
Nov 29 09:12:43 compute-2 nova_compute[232428]: 2025-11-29 09:12:43.708 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:43.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:44 compute-2 ceph-mon[77138]: pgmap v4129: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 16 KiB/s wr, 1 op/s
Nov 29 09:12:44 compute-2 podman[343977]: 2025-11-29 09:12:44.743511132 +0000 UTC m=+0.135311561 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
Nov 29 09:12:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:46 compute-2 sudo[344006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:46 compute-2 sudo[344006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:46 compute-2 sudo[344006]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:46 compute-2 sudo[344031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:46 compute-2 sudo[344031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:46 compute-2 sudo[344031]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:47 compute-2 sudo[344056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:12:47 compute-2 sudo[344056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:47 compute-2 sudo[344056]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:47 compute-2 sudo[344081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:12:47 compute-2 sudo[344081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:12:47 compute-2 sudo[344081]: pam_unix(sudo:session): session closed for user root
Nov 29 09:12:47 compute-2 ceph-mon[77138]: pgmap v4130: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 6.0 KiB/s wr, 0 op/s
Nov 29 09:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:47 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:12:47 compute-2 nova_compute[232428]: 2025-11-29 09:12:47.828 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:47.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:48 compute-2 nova_compute[232428]: 2025-11-29 09:12:48.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:48 compute-2 nova_compute[232428]: 2025-11-29 09:12:48.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:12:48 compute-2 nova_compute[232428]: 2025-11-29 09:12:48.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:12:48 compute-2 nova_compute[232428]: 2025-11-29 09:12:48.514 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:12:48 compute-2 ceph-mon[77138]: pgmap v4131: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 6.0 KiB/s wr, 0 op/s
Nov 29 09:12:48 compute-2 nova_compute[232428]: 2025-11-29 09:12:48.710 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:49.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:50 compute-2 podman[344110]: 2025-11-29 09:12:50.710740905 +0000 UTC m=+0.096988033 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.801 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.802 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.802 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.803 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:12:50 compute-2 nova_compute[232428]: 2025-11-29 09:12:50.803 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:12:50 compute-2 ceph-mon[77138]: pgmap v4132: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 1 op/s
Nov 29 09:12:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:12:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3932030158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.320 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.488 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.490 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4157MB free_disk=20.98794174194336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.490 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.490 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:12:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3932030158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.935 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.936 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:12:51 compute-2 nova_compute[232428]: 2025-11-29 09:12:51.972 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:12:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:12:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1192281931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.463 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.473 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.827 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.829 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.830 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:12:52 compute-2 nova_compute[232428]: 2025-11-29 09:12:52.831 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:53 compute-2 ceph-mon[77138]: pgmap v4133: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 2.3 KiB/s wr, 1 op/s
Nov 29 09:12:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1192281931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:53 compute-2 nova_compute[232428]: 2025-11-29 09:12:53.711 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:53 compute-2 nova_compute[232428]: 2025-11-29 09:12:53.820 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:53 compute-2 nova_compute[232428]: 2025-11-29 09:12:53.821 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:54 compute-2 nova_compute[232428]: 2025-11-29 09:12:54.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:55.500 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:12:55 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:55.501 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:12:55 compute-2 nova_compute[232428]: 2025-11-29 09:12:55.500 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:55 compute-2 ceph-mon[77138]: pgmap v4134: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 2.7 KiB/s wr, 17 op/s
Nov 29 09:12:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:55.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:56 compute-2 ceph-mon[77138]: pgmap v4135: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 938 B/s wr, 53 op/s
Nov 29 09:12:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2225647507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1599357342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:12:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1599357342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:12:57 compute-2 nova_compute[232428]: 2025-11-29 09:12:57.835 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:12:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:12:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:12:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:12:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:12:58 compute-2 nova_compute[232428]: 2025-11-29 09:12:58.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:12:58 compute-2 nova_compute[232428]: 2025-11-29 09:12:58.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:12:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:12:58.502 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:12:58 compute-2 nova_compute[232428]: 2025-11-29 09:12:58.713 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:12:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 09:12:58 compute-2 ceph-mon[77138]: pgmap v4136: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 596 B/s wr, 53 op/s
Nov 29 09:12:59 compute-2 ceph-mon[77138]: osdmap e433: 3 total, 3 up, 3 in
Nov 29 09:12:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/464795899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3712192792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:12:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:12:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 09:12:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 09:13:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:00.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:00 compute-2 ceph-mon[77138]: pgmap v4138: 305 pgs: 305 active+clean; 205 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 KiB/s wr, 225 op/s
Nov 29 09:13:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/961603926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4071930598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:01 compute-2 podman[344180]: 2025-11-29 09:13:01.649929926 +0000 UTC m=+0.056296082 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:13:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:01.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:02.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:02 compute-2 ceph-mon[77138]: pgmap v4139: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 2.3 KiB/s wr, 283 op/s
Nov 29 09:13:02 compute-2 nova_compute[232428]: 2025-11-29 09:13:02.838 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:13:03.381 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:13:03.381 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:13:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:13:03.381 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:13:03 compute-2 nova_compute[232428]: 2025-11-29 09:13:03.716 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:13:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:03.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:13:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:04.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:05 compute-2 ceph-mon[77138]: pgmap v4140: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 169 KiB/s rd, 2.3 KiB/s wr, 272 op/s
Nov 29 09:13:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4039828435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:05.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:06.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:13:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011858042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:13:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:13:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011858042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:13:06 compute-2 ceph-mon[77138]: pgmap v4141: 305 pgs: 305 active+clean; 157 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 3.0 KiB/s wr, 260 op/s
Nov 29 09:13:06 compute-2 sudo[344204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:06 compute-2 sudo[344204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:06 compute-2 sudo[344204]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:07 compute-2 sudo[344229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:07 compute-2 sudo[344229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:07 compute-2 sudo[344229]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 09:13:07 compute-2 nova_compute[232428]: 2025-11-29 09:13:07.842 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3011858042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:13:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3011858042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:13:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:07.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:08.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:08 compute-2 nova_compute[232428]: 2025-11-29 09:13:08.717 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 09:13:09 compute-2 ceph-mon[77138]: osdmap e434: 3 total, 3 up, 3 in
Nov 29 09:13:09 compute-2 ceph-mon[77138]: pgmap v4143: 305 pgs: 305 active+clean; 157 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 3.3 KiB/s wr, 195 op/s
Nov 29 09:13:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:09.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:10.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:10 compute-2 ceph-mon[77138]: osdmap e435: 3 total, 3 up, 3 in
Nov 29 09:13:10 compute-2 ceph-mon[77138]: pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Nov 29 09:13:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:11.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:12.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:12 compute-2 ceph-mon[77138]: pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 09:13:12 compute-2 nova_compute[232428]: 2025-11-29 09:13:12.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:13 compute-2 nova_compute[232428]: 2025-11-29 09:13:13.720 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:14.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:15 compute-2 ceph-mon[77138]: pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.1 KiB/s wr, 28 op/s
Nov 29 09:13:15 compute-2 podman[344258]: 2025-11-29 09:13:15.69353633 +0000 UTC m=+0.095835758 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:13:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:16 compute-2 ceph-mon[77138]: pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 31 op/s
Nov 29 09:13:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 09:13:17 compute-2 nova_compute[232428]: 2025-11-29 09:13:17.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:18.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:18 compute-2 nova_compute[232428]: 2025-11-29 09:13:18.723 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:18 compute-2 ceph-mon[77138]: osdmap e436: 3 total, 3 up, 3 in
Nov 29 09:13:18 compute-2 ceph-mon[77138]: pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 27 op/s
Nov 29 09:13:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:20.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:21 compute-2 ceph-mon[77138]: pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 22 op/s
Nov 29 09:13:21 compute-2 podman[344287]: 2025-11-29 09:13:21.657205181 +0000 UTC m=+0.055162777 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 09:13:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:22.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:22 compute-2 nova_compute[232428]: 2025-11-29 09:13:22.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:23 compute-2 ceph-mon[77138]: pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 921 B/s wr, 21 op/s
Nov 29 09:13:23 compute-2 nova_compute[232428]: 2025-11-29 09:13:23.725 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:23.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:24 compute-2 ceph-mon[77138]: pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 826 KiB/s rd, 102 B/s wr, 9 op/s
Nov 29 09:13:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:25.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:27 compute-2 ceph-mon[77138]: pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 8 op/s
Nov 29 09:13:27 compute-2 sudo[344309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:27 compute-2 sudo[344309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:27 compute-2 sudo[344309]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:27 compute-2 sudo[344334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:27 compute-2 sudo[344334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:27 compute-2 sudo[344334]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:27 compute-2 nova_compute[232428]: 2025-11-29 09:13:27.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:27.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:28 compute-2 nova_compute[232428]: 2025-11-29 09:13:28.727 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2184447687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:13:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2184447687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:13:29 compute-2 ceph-mon[77138]: pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 199 B/s wr, 8 op/s
Nov 29 09:13:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:29.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:30 compute-2 ceph-mon[77138]: pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Nov 29 09:13:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:31.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:32 compute-2 podman[344362]: 2025-11-29 09:13:32.644157623 +0000 UTC m=+0.053909599 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 09:13:32 compute-2 nova_compute[232428]: 2025-11-29 09:13:32.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:33 compute-2 ceph-mon[77138]: pgmap v4157: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 771 KiB/s wr, 29 op/s
Nov 29 09:13:33 compute-2 nova_compute[232428]: 2025-11-29 09:13:33.729 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:33.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:34 compute-2 nova_compute[232428]: 2025-11-29 09:13:34.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:34 compute-2 ceph-mon[77138]: pgmap v4158: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 36 op/s
Nov 29 09:13:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:35.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:36.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:36 compute-2 ceph-mon[77138]: pgmap v4159: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 09:13:37 compute-2 nova_compute[232428]: 2025-11-29 09:13:37.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:37.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1794623106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:38.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:38 compute-2 nova_compute[232428]: 2025-11-29 09:13:38.731 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:13:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3701176624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:13:39 compute-2 ceph-mon[77138]: pgmap v4160: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 09:13:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:13:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:13:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3701176624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:13:41 compute-2 ceph-mon[77138]: pgmap v4161: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 09:13:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:41.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:42 compute-2 nova_compute[232428]: 2025-11-29 09:13:42.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:42 compute-2 nova_compute[232428]: 2025-11-29 09:13:42.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 09:13:42 compute-2 nova_compute[232428]: 2025-11-29 09:13:42.219 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 09:13:42 compute-2 ceph-mon[77138]: pgmap v4162: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.647865) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622647988, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1697, "num_deletes": 257, "total_data_size": 4085049, "memory_usage": 4130768, "flush_reason": "Manual Compaction"}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622676102, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2662688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91121, "largest_seqno": 92812, "table_properties": {"data_size": 2655422, "index_size": 4272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15104, "raw_average_key_size": 20, "raw_value_size": 2640926, "raw_average_value_size": 3516, "num_data_blocks": 187, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407479, "oldest_key_time": 1764407479, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 28310 microseconds, and 12713 cpu microseconds.
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.676172) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2662688 bytes OK
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.676212) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.678143) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.678176) EVENT_LOG_v1 {"time_micros": 1764407622678166, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.678206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4077293, prev total WAL file size 4077293, number of live WAL files 2.
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.680582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353234' seq:72057594037927935, type:22 .. '6C6F676D0033373735' seq:0, type:0; will stop at (end)
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2600KB)], [186(11MB)]
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622680700, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14977001, "oldest_snapshot_seqno": -1}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11563 keys, 14845573 bytes, temperature: kUnknown
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622862831, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14845573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14771499, "index_size": 44073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28933, "raw_key_size": 306417, "raw_average_key_size": 26, "raw_value_size": 14569775, "raw_average_value_size": 1260, "num_data_blocks": 1674, "num_entries": 11563, "num_filter_entries": 11563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.863084) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14845573 bytes
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.865423) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.2 rd, 81.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 12096, records dropped: 533 output_compression: NoCompression
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.865439) EVENT_LOG_v1 {"time_micros": 1764407622865431, "job": 120, "event": "compaction_finished", "compaction_time_micros": 182203, "compaction_time_cpu_micros": 62940, "output_level": 6, "num_output_files": 1, "total_output_size": 14845573, "num_input_records": 12096, "num_output_records": 11563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622866066, "job": 120, "event": "table_file_deletion", "file_number": 188}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622868327, "job": 120, "event": "table_file_deletion", "file_number": 186}
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.680477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.868425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.868430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.868431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 nova_compute[232428]: 2025-11-29 09:13:42.867 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.868433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:13:42.868434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:13:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2045064442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:13:43 compute-2 nova_compute[232428]: 2025-11-29 09:13:43.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:43.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:44.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:44 compute-2 nova_compute[232428]: 2025-11-29 09:13:44.218 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:44 compute-2 ceph-mon[77138]: pgmap v4163: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.0 MiB/s wr, 12 op/s
Nov 29 09:13:45 compute-2 nova_compute[232428]: 2025-11-29 09:13:45.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:46 compute-2 nova_compute[232428]: 2025-11-29 09:13:46.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:46 compute-2 nova_compute[232428]: 2025-11-29 09:13:46.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 09:13:46 compute-2 podman[344391]: 2025-11-29 09:13:46.677178349 +0000 UTC m=+0.083450696 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 09:13:46 compute-2 ceph-mon[77138]: pgmap v4164: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 679 KiB/s wr, 8 op/s
Nov 29 09:13:47 compute-2 sudo[344417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:47 compute-2 sudo[344417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 sudo[344417]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 sudo[344442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:47 compute-2 sudo[344442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 sudo[344442]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 sudo[344445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:13:47 compute-2 sudo[344445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 sudo[344445]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 sudo[344492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:47 compute-2 sudo[344492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 sudo[344492]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 sudo[344495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:47 compute-2 sudo[344495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 sudo[344495]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 sudo[344542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:13:47 compute-2 sudo[344542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:47 compute-2 nova_compute[232428]: 2025-11-29 09:13:47.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:47 compute-2 sudo[344542]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:48 compute-2 sudo[344599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:48 compute-2 sudo[344599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344599]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:48 compute-2 sudo[344624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:13:48 compute-2 sudo[344624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344624]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 nova_compute[232428]: 2025-11-29 09:13:48.214 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:48 compute-2 nova_compute[232428]: 2025-11-29 09:13:48.214 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:13:48 compute-2 nova_compute[232428]: 2025-11-29 09:13:48.214 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:13:48 compute-2 nova_compute[232428]: 2025-11-29 09:13:48.227 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:13:48 compute-2 sudo[344649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:48 compute-2 sudo[344649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344649]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 sudo[344674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 09:13:48 compute-2 sudo[344674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344674]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 sudo[344720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:48 compute-2 sudo[344720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344720]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 sudo[344745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:13:48 compute-2 sudo[344745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344745]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 sudo[344770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:48 compute-2 sudo[344770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 sudo[344770]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:48 compute-2 nova_compute[232428]: 2025-11-29 09:13:48.734 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:48 compute-2 sudo[344795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -- inventory --format=json-pretty --filter-for-batch
Nov 29 09:13:48 compute-2 sudo[344795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:48 compute-2 ceph-mon[77138]: pgmap v4165: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 4 op/s
Nov 29 09:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:48 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.122943728 +0000 UTC m=+0.046545982 container create 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 09:13:49 compute-2 systemd[1]: Started libpod-conmon-416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204.scope.
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.102263572 +0000 UTC m=+0.025865846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 09:13:49 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.224115049 +0000 UTC m=+0.147717363 container init 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.232303171 +0000 UTC m=+0.155905435 container start 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.235840029 +0000 UTC m=+0.159442283 container attach 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 09:13:49 compute-2 nervous_shirley[344876]: 167 167
Nov 29 09:13:49 compute-2 systemd[1]: libpod-416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204.scope: Deactivated successfully.
Nov 29 09:13:49 compute-2 conmon[344876]: conmon 416b1e3c3e92057d75ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204.scope/container/memory.events
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.239715279 +0000 UTC m=+0.163317533 container died 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 09:13:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-ab123e9692ee5dbe27203f71fba79380cbf939faa2a50c10b2f6f69192b1fc0f-merged.mount: Deactivated successfully.
Nov 29 09:13:49 compute-2 podman[344860]: 2025-11-29 09:13:49.283854385 +0000 UTC m=+0.207456649 container remove 416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:13:49 compute-2 systemd[1]: libpod-conmon-416b1e3c3e92057d75efc2f4739130451e3be0403baf19f532c81a3a509b5204.scope: Deactivated successfully.
Nov 29 09:13:49 compute-2 podman[344900]: 2025-11-29 09:13:49.464863791 +0000 UTC m=+0.042679943 container create dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 09:13:49 compute-2 systemd[1]: Started libpod-conmon-dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e.scope.
Nov 29 09:13:49 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:13:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4085f10ff3923b6a4f5e0d5771b6f41357d3a595f7e0d9e30319e81ad3646d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 09:13:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4085f10ff3923b6a4f5e0d5771b6f41357d3a595f7e0d9e30319e81ad3646d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 09:13:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4085f10ff3923b6a4f5e0d5771b6f41357d3a595f7e0d9e30319e81ad3646d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 09:13:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4085f10ff3923b6a4f5e0d5771b6f41357d3a595f7e0d9e30319e81ad3646d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 09:13:49 compute-2 podman[344900]: 2025-11-29 09:13:49.542395545 +0000 UTC m=+0.120211697 container init dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 09:13:49 compute-2 podman[344900]: 2025-11-29 09:13:49.447888789 +0000 UTC m=+0.025704941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 09:13:49 compute-2 podman[344900]: 2025-11-29 09:13:49.548852004 +0000 UTC m=+0.126668166 container start dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 09:13:49 compute-2 podman[344900]: 2025-11-29 09:13:49.552882627 +0000 UTC m=+0.130698789 container attach dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 09:13:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]: [
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:     {
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "available": false,
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "ceph_device": false,
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "lsm_data": {},
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "lvs": [],
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "path": "/dev/sr0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "rejected_reasons": [
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "Has a FileSystem",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "Insufficient space (<5GB)"
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         ],
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         "sys_api": {
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "actuators": null,
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "device_nodes": "sr0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "devname": "sr0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "human_readable_size": "482.00 KB",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "id_bus": "ata",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "model": "QEMU DVD-ROM",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "nr_requests": "2",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "parent": "/dev/sr0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "partitions": {},
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "path": "/dev/sr0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "removable": "1",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "rev": "2.5+",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "ro": "0",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "rotational": "1",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "sas_address": "",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "sas_device_handle": "",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "scheduler_mode": "mq-deadline",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "sectors": 0,
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "sectorsize": "2048",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "size": 493568.0,
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "support_discard": "2048",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "type": "disk",
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:             "vendor": "QEMU"
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:         }
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]:     }
Nov 29 09:13:50 compute-2 nostalgic_hodgkin[344916]: ]
Nov 29 09:13:50 compute-2 systemd[1]: libpod-dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e.scope: Deactivated successfully.
Nov 29 09:13:50 compute-2 systemd[1]: libpod-dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e.scope: Consumed 1.464s CPU time.
Nov 29 09:13:50 compute-2 podman[344900]: 2025-11-29 09:13:50.956501624 +0000 UTC m=+1.534317816 container died dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 09:13:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-f4085f10ff3923b6a4f5e0d5771b6f41357d3a595f7e0d9e30319e81ad3646d7-merged.mount: Deactivated successfully.
Nov 29 09:13:51 compute-2 podman[344900]: 2025-11-29 09:13:51.029914941 +0000 UTC m=+1.607731073 container remove dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 09:13:51 compute-2 ceph-mon[77138]: pgmap v4166: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 09:13:51 compute-2 systemd[1]: libpod-conmon-dcefd6101b6a76efd60ef04cd935dda88e98c13dea6662f967139fd6ca46284e.scope: Deactivated successfully.
Nov 29 09:13:51 compute-2 sudo[344795]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.226 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.226 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.227 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.227 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.227 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:13:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:13:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1211060591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.713 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.949 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.950 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4140MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.950 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:13:51 compute-2 nova_compute[232428]: 2025-11-29 09:13:51.950 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:13:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:51.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.048 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.049 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.066 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:13:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1211060591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:13:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3591354579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.514 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.519 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.537 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.539 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.539 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:13:52 compute-2 podman[346160]: 2025-11-29 09:13:52.669004608 +0000 UTC m=+0.062741961 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 09:13:52 compute-2 nova_compute[232428]: 2025-11-29 09:13:52.874 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:53 compute-2 ceph-mon[77138]: pgmap v4167: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:13:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3591354579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:53 compute-2 nova_compute[232428]: 2025-11-29 09:13:53.736 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:54.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:55 compute-2 ceph-mon[77138]: pgmap v4168: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:13:55 compute-2 nova_compute[232428]: 2025-11-29 09:13:55.537 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:55 compute-2 nova_compute[232428]: 2025-11-29 09:13:55.538 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:55.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:56.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:56 compute-2 ceph-mon[77138]: pgmap v4169: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:13:57 compute-2 sshd-session[346181]: Invalid user ubuntu from 45.148.10.240 port 37338
Nov 29 09:13:57 compute-2 sshd-session[346181]: Connection closed by invalid user ubuntu 45.148.10.240 port 37338 [preauth]
Nov 29 09:13:57 compute-2 nova_compute[232428]: 2025-11-29 09:13:57.878 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:13:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:13:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:13:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:13:58 compute-2 sudo[346184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:13:58 compute-2 sudo[346184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:58 compute-2 sudo[346184]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:58 compute-2 sudo[346209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:13:58 compute-2 sudo[346209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:13:58 compute-2 sudo[346209]: pam_unix(sudo:session): session closed for user root
Nov 29 09:13:58 compute-2 nova_compute[232428]: 2025-11-29 09:13:58.739 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:13:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:13:58 compute-2 ceph-mon[77138]: pgmap v4170: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Nov 29 09:13:59 compute-2 nova_compute[232428]: 2025-11-29 09:13:59.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:13:59 compute-2 nova_compute[232428]: 2025-11-29 09:13:59.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:13:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1601604406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:13:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:13:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:13:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1444134340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:00 compute-2 ceph-mon[77138]: pgmap v4171: 305 pgs: 305 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 101 op/s
Nov 29 09:14:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4222022024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:01.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:02 compute-2 ceph-mon[77138]: pgmap v4172: 305 pgs: 305 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Nov 29 09:14:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1345061501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:02 compute-2 nova_compute[232428]: 2025-11-29 09:14:02.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:03.382 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:03.383 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:14:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:03.383 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:14:03 compute-2 podman[346236]: 2025-11-29 09:14:03.683005349 +0000 UTC m=+0.081371873 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:14:03 compute-2 nova_compute[232428]: 2025-11-29 09:14:03.742 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:04 compute-2 ceph-mon[77138]: pgmap v4173: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 09:14:05 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 09:14:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:06.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:06 compute-2 nova_compute[232428]: 2025-11-29 09:14:06.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:07 compute-2 ceph-mon[77138]: pgmap v4174: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:14:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:07.468 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:14:07 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:07.469 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:14:07 compute-2 nova_compute[232428]: 2025-11-29 09:14:07.469 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:07 compute-2 sudo[346257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:07 compute-2 sudo[346257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:07 compute-2 sudo[346257]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:07 compute-2 sudo[346282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:07 compute-2 sudo[346282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:07 compute-2 sudo[346282]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:07 compute-2 nova_compute[232428]: 2025-11-29 09:14:07.884 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3230745112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:08.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:08 compute-2 nova_compute[232428]: 2025-11-29 09:14:08.744 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:09 compute-2 ceph-mon[77138]: pgmap v4175: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 09:14:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:11 compute-2 ceph-mon[77138]: pgmap v4176: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 29 09:14:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:11.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:12 compute-2 nova_compute[232428]: 2025-11-29 09:14:12.888 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:13 compute-2 ceph-mon[77138]: pgmap v4177: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 126 KiB/s rd, 523 KiB/s wr, 47 op/s
Nov 29 09:14:13 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:14:13.471 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:14:13 compute-2 nova_compute[232428]: 2025-11-29 09:14:13.747 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:13.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:15 compute-2 ceph-mon[77138]: pgmap v4178: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 76 KiB/s wr, 18 op/s
Nov 29 09:14:15 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3437376576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:15 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:14:15 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785986621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:14:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:14:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:16.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:14:16 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/785986621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:14:17 compute-2 ceph-mon[77138]: pgmap v4179: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 29 09:14:17 compute-2 podman[346312]: 2025-11-29 09:14:17.688380285 +0000 UTC m=+0.094692083 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 09:14:17 compute-2 nova_compute[232428]: 2025-11-29 09:14:17.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:17.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:18.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:18 compute-2 ceph-mon[77138]: pgmap v4180: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 15 op/s
Nov 29 09:14:18 compute-2 nova_compute[232428]: 2025-11-29 09:14:18.749 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:19 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2810679674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:14:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:20 compute-2 ceph-mon[77138]: pgmap v4181: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 15 op/s
Nov 29 09:14:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:22 compute-2 nova_compute[232428]: 2025-11-29 09:14:22.893 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:23 compute-2 ceph-mon[77138]: pgmap v4182: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 KiB/s rd, 9.7 KiB/s wr, 12 op/s
Nov 29 09:14:23 compute-2 podman[346341]: 2025-11-29 09:14:23.695248176 +0000 UTC m=+0.096941951 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 29 09:14:23 compute-2 nova_compute[232428]: 2025-11-29 09:14:23.751 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:25 compute-2 ceph-mon[77138]: pgmap v4183: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 16 op/s
Nov 29 09:14:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:27 compute-2 ceph-mon[77138]: pgmap v4184: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:14:27 compute-2 sudo[346363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:27 compute-2 sudo[346363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:27 compute-2 sudo[346363]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:27 compute-2 sudo[346388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:27 compute-2 sudo[346388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:27 compute-2 sudo[346388]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:27 compute-2 nova_compute[232428]: 2025-11-29 09:14:27.897 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:28 compute-2 nova_compute[232428]: 2025-11-29 09:14:28.752 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:29 compute-2 ceph-mon[77138]: pgmap v4185: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:14:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2632729154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:14:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2632729154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:14:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:14:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:14:31 compute-2 ceph-mon[77138]: pgmap v4186: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:14:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:32.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:32 compute-2 nova_compute[232428]: 2025-11-29 09:14:32.902 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:33 compute-2 ceph-mon[77138]: pgmap v4187: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 09:14:33 compute-2 nova_compute[232428]: 2025-11-29 09:14:33.756 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:34.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:34.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:34 compute-2 ceph-mon[77138]: pgmap v4188: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Nov 29 09:14:34 compute-2 podman[346417]: 2025-11-29 09:14:34.650347048 +0000 UTC m=+0.054456055 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 09:14:35 compute-2 nova_compute[232428]: 2025-11-29 09:14:35.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:36.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:36.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:37 compute-2 ceph-mon[77138]: pgmap v4189: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 94 op/s
Nov 29 09:14:37 compute-2 nova_compute[232428]: 2025-11-29 09:14:37.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:38.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:38.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:38 compute-2 nova_compute[232428]: 2025-11-29 09:14:38.759 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:39 compute-2 ceph-mon[77138]: pgmap v4190: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 461 KiB/s rd, 13 KiB/s wr, 37 op/s
Nov 29 09:14:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:40.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:41 compute-2 ceph-mon[77138]: pgmap v4191: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 29 09:14:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:42.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:42.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:42 compute-2 nova_compute[232428]: 2025-11-29 09:14:42.908 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:43 compute-2 ceph-mon[77138]: pgmap v4192: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 29 09:14:43 compute-2 nova_compute[232428]: 2025-11-29 09:14:43.762 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:44 compute-2 nova_compute[232428]: 2025-11-29 09:14:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:45 compute-2 ceph-mon[77138]: pgmap v4193: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 26 KiB/s wr, 44 op/s
Nov 29 09:14:45 compute-2 nova_compute[232428]: 2025-11-29 09:14:45.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:47 compute-2 ceph-mon[77138]: pgmap v4194: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 507 KiB/s rd, 26 KiB/s wr, 42 op/s
Nov 29 09:14:47 compute-2 nova_compute[232428]: 2025-11-29 09:14:47.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:47 compute-2 sudo[346444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:47 compute-2 sudo[346444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:47 compute-2 sudo[346444]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:47 compute-2 sudo[346475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:47 compute-2 sudo[346475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:47 compute-2 sudo[346475]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:48 compute-2 podman[346468]: 2025-11-29 09:14:48.0412138 +0000 UTC m=+0.108356673 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 09:14:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:48 compute-2 nova_compute[232428]: 2025-11-29 09:14:48.763 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:49 compute-2 ceph-mon[77138]: pgmap v4195: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 29 09:14:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:50 compute-2 nova_compute[232428]: 2025-11-29 09:14:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:50 compute-2 nova_compute[232428]: 2025-11-29 09:14:50.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:14:50 compute-2 nova_compute[232428]: 2025-11-29 09:14:50.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:14:50 compute-2 nova_compute[232428]: 2025-11-29 09:14:50.217 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:14:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:51 compute-2 ceph-mon[77138]: pgmap v4196: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.212957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691213212, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 965, "num_deletes": 251, "total_data_size": 1976643, "memory_usage": 2008944, "flush_reason": "Manual Compaction"}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691223889, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1292982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92818, "largest_seqno": 93777, "table_properties": {"data_size": 1288530, "index_size": 2103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9930, "raw_average_key_size": 19, "raw_value_size": 1279631, "raw_average_value_size": 2564, "num_data_blocks": 91, "num_entries": 499, "num_filter_entries": 499, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407622, "oldest_key_time": 1764407622, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 10979 microseconds, and 3841 cpu microseconds.
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.223932) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1292982 bytes OK
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.223954) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.226048) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.226065) EVENT_LOG_v1 {"time_micros": 1764407691226060, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.226082) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1971845, prev total WAL file size 1971845, number of live WAL files 2.
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.226907) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1262KB)], [189(14MB)]
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691226959, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 16138555, "oldest_snapshot_seqno": -1}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11543 keys, 14096087 bytes, temperature: kUnknown
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691347294, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 14096087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14022801, "index_size": 43349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 306669, "raw_average_key_size": 26, "raw_value_size": 13822284, "raw_average_value_size": 1197, "num_data_blocks": 1637, "num_entries": 11543, "num_filter_entries": 11543, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.347570) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 14096087 bytes
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.349547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.0 rd, 117.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.2 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.4) write-amplify(10.9) OK, records in: 12062, records dropped: 519 output_compression: NoCompression
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.349568) EVENT_LOG_v1 {"time_micros": 1764407691349560, "job": 122, "event": "compaction_finished", "compaction_time_micros": 120451, "compaction_time_cpu_micros": 49980, "output_level": 6, "num_output_files": 1, "total_output_size": 14096087, "num_input_records": 12062, "num_output_records": 11543, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691349886, "job": 122, "event": "table_file_deletion", "file_number": 191}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691352948, "job": 122, "event": "table_file_deletion", "file_number": 189}
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.226786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:51 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:14:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:52 compute-2 nova_compute[232428]: 2025-11-29 09:14:52.915 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:53 compute-2 ceph-mon[77138]: pgmap v4197: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s wr, 0 op/s
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.237 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.237 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.238 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.238 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.239 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:14:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:14:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/436347010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.739 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.764 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.909 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.910 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4151MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.911 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:14:53 compute-2 nova_compute[232428]: 2025-11-29 09:14:53.911 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:14:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:54.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.065 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.066 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.142 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.192 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.193 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.207 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.226 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 09:14:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/436347010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.249 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:14:54 compute-2 podman[346566]: 2025-11-29 09:14:54.645195679 +0000 UTC m=+0.050108032 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 09:14:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:14:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2633754132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.725 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.733 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.751 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.754 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:14:54 compute-2 nova_compute[232428]: 2025-11-29 09:14:54.755 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:14:55 compute-2 ceph-mon[77138]: pgmap v4198: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Nov 29 09:14:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2633754132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:14:55 compute-2 nova_compute[232428]: 2025-11-29 09:14:55.755 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:55 compute-2 nova_compute[232428]: 2025-11-29 09:14:55.756 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:56.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:57 compute-2 ceph-mon[77138]: pgmap v4199: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 09:14:57 compute-2 nova_compute[232428]: 2025-11-29 09:14:57.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:14:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:14:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:58.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:14:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:14:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:14:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:14:58 compute-2 sudo[346592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:58 compute-2 sudo[346592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:58 compute-2 sudo[346592]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:58 compute-2 sudo[346617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:14:58 compute-2 sudo[346617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:58 compute-2 sudo[346617]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:58 compute-2 sudo[346642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:14:58 compute-2 sudo[346642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:58 compute-2 sudo[346642]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:58 compute-2 ceph-mon[77138]: pgmap v4200: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 09:14:58 compute-2 sudo[346667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:14:58 compute-2 sudo[346667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:14:58 compute-2 nova_compute[232428]: 2025-11-29 09:14:58.767 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:14:58 compute-2 sudo[346667]: pam_unix(sudo:session): session closed for user root
Nov 29 09:14:59 compute-2 nova_compute[232428]: 2025-11-29 09:14:59.200 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:14:59 compute-2 nova_compute[232428]: 2025-11-29 09:14:59.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:15:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 09:15:01 compute-2 ceph-mon[77138]: pgmap v4201: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 09:15:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1198527805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:02.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:02 compute-2 ceph-mon[77138]: osdmap e437: 3 total, 3 up, 3 in
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:15:02 compute-2 ceph-mon[77138]: pgmap v4203: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 09:15:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3253250129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:02 compute-2 nova_compute[232428]: 2025-11-29 09:15:02.922 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:03.383 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:03.383 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:03.383 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3335757106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:03 compute-2 nova_compute[232428]: 2025-11-29 09:15:03.769 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:04.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:04 compute-2 ceph-mon[77138]: pgmap v4204: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 4.5 KiB/s wr, 15 op/s
Nov 29 09:15:05 compute-2 podman[346724]: 2025-11-29 09:15:05.676537506 +0000 UTC m=+0.072419088 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 09:15:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/616702941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:06.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:06.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:07 compute-2 ceph-mon[77138]: pgmap v4205: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 5.0 KiB/s wr, 29 op/s
Nov 29 09:15:07 compute-2 nova_compute[232428]: 2025-11-29 09:15:07.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:08.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:08 compute-2 sudo[346747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:08 compute-2 sudo[346747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:08 compute-2 sudo[346747]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:08 compute-2 sudo[346772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:08 compute-2 sudo[346772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:08 compute-2 sudo[346772]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:08 compute-2 sudo[346797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:08 compute-2 sudo[346797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:08 compute-2 sudo[346797]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:08 compute-2 sudo[346822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:15:08 compute-2 sudo[346822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:08 compute-2 sudo[346822]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:08 compute-2 nova_compute[232428]: 2025-11-29 09:15:08.770 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:08 compute-2 ceph-mon[77138]: pgmap v4206: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 5.0 KiB/s wr, 29 op/s
Nov 29 09:15:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.067 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.068 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.174 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.316 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.317 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.323 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.324 232432 INFO nova.compute.claims [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Claim successful on node compute-2.ctlplane.example.com
Nov 29 09:15:09 compute-2 nova_compute[232428]: 2025-11-29 09:15:09.747 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:10.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:10 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:15:10 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3268068564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.501 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.508 232432 DEBUG nova.compute.provider_tree [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:15:10 compute-2 ceph-mon[77138]: pgmap v4207: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 5.4 KiB/s wr, 31 op/s
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.588 232432 DEBUG nova.scheduler.client.report [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.768 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.769 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.974 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 09:15:10 compute-2 nova_compute[232428]: 2025-11-29 09:15:10.975 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.028 232432 INFO nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.067 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.215 232432 INFO nova.virt.block_device [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Booting with volume 5a123e28-261c-472c-adab-73c89e0d557e at /dev/vda
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.268 232432 DEBUG nova.policy [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.357 232432 DEBUG os_brick.utils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.359 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.376 244579 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.377 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[779f7c9f-8e65-4fba-8fda-774825f2c99f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.378 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.391 244579 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.392 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[1596249d-0435-4264-96e2-6eae7fb0fd8f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0c9f057a05c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.394 244579 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.404 244579 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.404 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[2738d68a-b537-43cf-8a10-196ee5b353bf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.406 244579 DEBUG oslo.privsep.daemon [-] privsep: reply[312ad8cb-f198-408e-8e6c-76d63cb6fa22]: (4, '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.407 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.443 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.445 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.445 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.445 232432 DEBUG os_brick.initiator.connectors.lightos [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.446 232432 DEBUG os_brick.utils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0c9f057a05c', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '841b8909-9838-4df3-bf7c-bb9b0c2a4d0c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 09:15:11 compute-2 nova_compute[232428]: 2025-11-29 09:15:11.446 232432 DEBUG nova.virt.block_device [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating existing volume attachment record: 6c597563-06b9-465f-ad56-234cf0bf3a56 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 09:15:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3268068564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:12.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:15:12 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312210863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:15:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.662 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Successfully created port: 10ee3c8d-ae72-4761-9f39-b637dc5e841a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 09:15:12 compute-2 ceph-mon[77138]: pgmap v4208: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 6.8 KiB/s wr, 31 op/s
Nov 29 09:15:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/312210863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.790 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.792 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.793 232432 INFO nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Creating image(s)
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.793 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.794 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Ensure instance console log exists: /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.794 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.795 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.795 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:12 compute-2 nova_compute[232428]: 2025-11-29 09:15:12.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:13 compute-2 nova_compute[232428]: 2025-11-29 09:15:13.772 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:14.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.089 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Successfully updated port: 10ee3c8d-ae72-4761-9f39-b637dc5e841a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.129 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.129 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.129 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 09:15:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.373 232432 DEBUG nova.compute.manager [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.373 232432 DEBUG nova.compute.manager [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing instance network info cache due to event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:15:14 compute-2 nova_compute[232428]: 2025-11-29 09:15:14.374 232432 DEBUG oslo_concurrency.lockutils [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:15:15 compute-2 nova_compute[232428]: 2025-11-29 09:15:15.043 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 09:15:15 compute-2 ceph-mon[77138]: pgmap v4209: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 6.0 KiB/s wr, 27 op/s
Nov 29 09:15:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:16.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.374 232432 DEBUG nova.network.neutron [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.495 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.496 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Instance network_info: |[{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.496 232432 DEBUG oslo_concurrency.lockutils [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.497 232432 DEBUG nova.network.neutron [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.501 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Start _get_guest_xml network_info=[{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5a123e28-261c-472c-adab-73c89e0d557e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5a123e28-261c-472c-adab-73c89e0d557e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '994f08a3-185a-4f37-a457-a99f66bba646', 'attached_at': '', 'detached_at': '', 'volume_id': '5a123e28-261c-472c-adab-73c89e0d557e', 'serial': '5a123e28-261c-472c-adab-73c89e0d557e'}, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '6c597563-06b9-465f-ad56-234cf0bf3a56', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.507 232432 WARNING nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.511 232432 DEBUG nova.virt.libvirt.host [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.512 232432 DEBUG nova.virt.libvirt.host [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.519 232432 DEBUG nova.virt.libvirt.host [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.520 232432 DEBUG nova.virt.libvirt.host [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.522 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.522 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.523 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.523 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.524 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.524 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.524 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.525 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.525 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.526 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.526 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.526 232432 DEBUG nova.virt.hardware [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.562 232432 DEBUG nova.storage.rbd_utils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 994f08a3-185a-4f37-a457-a99f66bba646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:15:16 compute-2 nova_compute[232428]: 2025-11-29 09:15:16.566 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:16 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 09:15:16 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696093637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.003 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.174 232432 DEBUG nova.virt.libvirt.vif [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1138991441',display_name='tempest-TestVolumeBootPattern-server-1138991441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1138991441',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-n3d8n714',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:15:11Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=994f08a3-185a-4f37-a457-a99f66bba646,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.174 232432 DEBUG nova.network.os_vif_util [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.177 232432 DEBUG nova.network.os_vif_util [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.178 232432 DEBUG nova.objects.instance [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 994f08a3-185a-4f37-a457-a99f66bba646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.230 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] End _get_guest_xml xml=<domain type="kvm">
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <uuid>994f08a3-185a-4f37-a457-a99f66bba646</uuid>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <name>instance-000000e0</name>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <memory>131072</memory>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <vcpu>1</vcpu>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <metadata>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:name>tempest-TestVolumeBootPattern-server-1138991441</nova:name>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:creationTime>2025-11-29 09:15:16</nova:creationTime>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:flavor name="m1.nano">
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:memory>128</nova:memory>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:disk>1</nova:disk>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:swap>0</nova:swap>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:vcpus>1</nova:vcpus>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </nova:flavor>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:owner>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </nova:owner>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <nova:ports>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <nova:port uuid="10ee3c8d-ae72-4761-9f39-b637dc5e841a">
Nov 29 09:15:17 compute-2 nova_compute[232428]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         </nova:port>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </nova:ports>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </nova:instance>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </metadata>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <sysinfo type="smbios">
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <system>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="manufacturer">RDO</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="product">OpenStack Compute</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="serial">994f08a3-185a-4f37-a457-a99f66bba646</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="uuid">994f08a3-185a-4f37-a457-a99f66bba646</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <entry name="family">Virtual Machine</entry>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </system>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </sysinfo>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <os>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <boot dev="hd"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <smbios mode="sysinfo"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </os>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <features>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <acpi/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <apic/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <vmcoreinfo/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </features>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <clock offset="utc">
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <timer name="hpet" present="no"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </clock>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <cpu mode="custom" match="exact">
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <model>Nehalem</model>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </cpu>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   <devices>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <disk type="network" device="cdrom">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <driver type="raw" cache="none"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <source protocol="rbd" name="vms/994f08a3-185a-4f37-a457-a99f66bba646_disk.config">
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </source>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <target dev="sda" bus="sata"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <disk type="network" device="disk">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <source protocol="rbd" name="volumes/volume-5a123e28-261c-472c-adab-73c89e0d557e">
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.100" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.102" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <host name="192.168.122.101" port="6789"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </source>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <auth username="openstack">
Nov 29 09:15:17 compute-2 nova_compute[232428]:         <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       </auth>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <target dev="vda" bus="virtio"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <serial>5a123e28-261c-472c-adab-73c89e0d557e</serial>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </disk>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <interface type="ethernet">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <mac address="fa:16:3e:27:cd:95"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <mtu size="1442"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <target dev="tap10ee3c8d-ae"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </interface>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <serial type="pty">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <log file="/var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/console.log" append="off"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </serial>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <video>
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <model type="virtio"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </video>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <input type="tablet" bus="usb"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <rng model="virtio">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <backend model="random">/dev/urandom</backend>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </rng>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <controller type="usb" index="0"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     <memballoon model="virtio">
Nov 29 09:15:17 compute-2 nova_compute[232428]:       <stats period="10"/>
Nov 29 09:15:17 compute-2 nova_compute[232428]:     </memballoon>
Nov 29 09:15:17 compute-2 nova_compute[232428]:   </devices>
Nov 29 09:15:17 compute-2 nova_compute[232428]: </domain>
Nov 29 09:15:17 compute-2 nova_compute[232428]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.232 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Preparing to wait for external event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.232 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.232 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.233 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.233 232432 DEBUG nova.virt.libvirt.vif [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1138991441',display_name='tempest-TestVolumeBootPattern-server-1138991441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1138991441',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-n3d8n714',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:15:11Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=994f08a3-185a-4f37-a457-a99f66bba646,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.234 232432 DEBUG nova.network.os_vif_util [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.234 232432 DEBUG nova.network.os_vif_util [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.234 232432 DEBUG os_vif [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.235 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.235 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.236 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.240 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.241 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10ee3c8d-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.241 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10ee3c8d-ae, col_values=(('external_ids', {'iface-id': '10ee3c8d-ae72-4761-9f39-b637dc5e841a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:cd:95', 'vm-uuid': '994f08a3-185a-4f37-a457-a99f66bba646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.243 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:17 compute-2 NetworkManager[48993]: <info>  [1764407717.2444] manager: (tap10ee3c8d-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.245 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.249 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.251 232432 INFO os_vif [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae')
Nov 29 09:15:17 compute-2 ceph-mon[77138]: pgmap v4210: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Nov 29 09:15:17 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2696093637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.296 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.296 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.297 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:27:cd:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.297 232432 INFO nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Using config drive
Nov 29 09:15:17 compute-2 nova_compute[232428]: 2025-11-29 09:15:17.322 232432 DEBUG nova.storage.rbd_utils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 994f08a3-185a-4f37-a457-a99f66bba646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:15:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.096 232432 INFO nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Creating config drive at /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.101 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57jf4cug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.229 232432 DEBUG nova.network.neutron [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updated VIF entry in instance network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.230 232432 DEBUG nova.network.neutron [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.241 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57jf4cug" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.291 232432 DEBUG nova.storage.rbd_utils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 994f08a3-185a-4f37-a457-a99f66bba646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.295 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config 994f08a3-185a-4f37-a457-a99f66bba646_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:18 compute-2 ceph-mon[77138]: pgmap v4211: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 KiB/s rd, 1.8 KiB/s wr, 2 op/s
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.331 232432 DEBUG oslo_concurrency.lockutils [req-fa0a1a28-18b0-4516-a9fd-05ccac425e7d req-e31e900f-cd8d-486a-bd45-7ce18c0f4cb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.667 232432 DEBUG oslo_concurrency.processutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config 994f08a3-185a-4f37-a457-a99f66bba646_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.668 232432 INFO nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Deleting local config drive /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646/disk.config because it was imported into RBD.
Nov 29 09:15:18 compute-2 podman[346979]: 2025-11-29 09:15:18.685343522 +0000 UTC m=+0.087561284 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 09:15:18 compute-2 kernel: tap10ee3c8d-ae: entered promiscuous mode
Nov 29 09:15:18 compute-2 ovn_controller[134375]: 2025-11-29T09:15:18Z|01016|binding|INFO|Claiming lport 10ee3c8d-ae72-4761-9f39-b637dc5e841a for this chassis.
Nov 29 09:15:18 compute-2 ovn_controller[134375]: 2025-11-29T09:15:18Z|01017|binding|INFO|10ee3c8d-ae72-4761-9f39-b637dc5e841a: Claiming fa:16:3e:27:cd:95 10.100.0.10
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.722 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.7257] manager: (tap10ee3c8d-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.727 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.732 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.737 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.7385] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.7391] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Nov 29 09:15:18 compute-2 systemd-udevd[347018]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:15:18 compute-2 systemd-machined[194747]: New machine qemu-104-instance-000000e0.
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.7617] device (tap10ee3c8d-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.7623] device (tap10ee3c8d-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.834 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:cd:95 10.100.0.10'], port_security=['fa:16:3e:27:cd:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '994f08a3-185a-4f37-a457-a99f66bba646', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=10ee3c8d-ae72-4761-9f39-b637dc5e841a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.835 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 10ee3c8d-ae72-4761-9f39-b637dc5e841a in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.835 143801 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.846 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 systemd[1]: Started Virtual Machine qemu-104-instance-000000e0.
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.846 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[263a91e4-89b2-4f95-a6c7-fb82f009636d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.847 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.849 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.851 238475 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.851 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[5eda6c8f-83e9-4742-a528-3709d0c6b45d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.852 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0e1aba-fa50-41fe-83c6-5ce11f01d2e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.866 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[17c5a6b7-75b9-471d-a1e5-a1fdc15295e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_controller[134375]: 2025-11-29T09:15:18Z|01018|binding|INFO|Setting lport 10ee3c8d-ae72-4761-9f39-b637dc5e841a ovn-installed in OVS
Nov 29 09:15:18 compute-2 ovn_controller[134375]: 2025-11-29T09:15:18Z|01019|binding|INFO|Setting lport 10ee3c8d-ae72-4761-9f39-b637dc5e841a up in Southbound
Nov 29 09:15:18 compute-2 nova_compute[232428]: 2025-11-29 09:15:18.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.891 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7acca25d-b921-4136-a088-43d5c5cb8f5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.924 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[55deb105-7716-468f-9ed5-309ebde92464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 NetworkManager[48993]: <info>  [1764407718.9324] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Nov 29 09:15:18 compute-2 systemd-udevd[347021]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.931 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[8c01f34e-48f5-4c41-a8ce-fde09f992504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.968 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3b986722-f0b4-4ad9-adf6-d5e358863470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:18 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:18.975 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a970f6-66aa-497a-b2f5-0fe9a0ac175b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 NetworkManager[48993]: <info>  [1764407719.0011] device (tap8aaf4606-90): carrier: link connected
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.005 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cd45c1dc-99a6-4a98-a33c-3950f4359fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.026 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9b92a6aa-ce9c-448e-a5f4-50f1f2b5feb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068526, 'reachable_time': 40620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347052, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.046 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f4e1fe-e22a-418e-9cd9-e340cdd61eab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1068526, 'tstamp': 1068526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347053, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.069 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[76921cd7-adb5-45b0-85a1-9cef854916c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068526, 'reachable_time': 40620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347054, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.103 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a54c8f-be3d-4028-8e41-f5e0637987d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.173 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[40955ae4-c91d-413b-bf71-1f9bb1d601dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.176 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.176 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.177 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:19 compute-2 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 09:15:19 compute-2 NetworkManager[48993]: <info>  [1764407719.1801] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.179 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.182 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.184 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.185 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:19 compute-2 ovn_controller[134375]: 2025-11-29T09:15:19Z|01020|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.186 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.188 143801 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.190 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2853aa-61f6-4fdc-b8d0-6b029c54242a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.191 143801 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: global
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     log         /dev/log local0 debug
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     user        root
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     group       root
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     maxconn     1024
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     daemon
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: defaults
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     log global
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     mode http
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     option httplog
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     option dontlognull
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     option http-server-close
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     option forwardfor
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     retries                 3
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     timeout http-request    30s
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     timeout connect         30s
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     timeout client          32s
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     timeout server          32s
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     timeout http-keep-alive 30s
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: listen listener
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     bind 169.254.169.254:80
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:     http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 09:15:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:19.192 143801 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.198 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.390 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407719.3898652, 994f08a3-185a-4f37-a457-a99f66bba646 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.391 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] VM Started (Lifecycle Event)
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.472 232432 DEBUG nova.compute.manager [req-e3874c55-0629-41c6-bd30-bb11734f55bc req-72d45ebe-2e75-4b14-9e34-6dd4b940b3c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.473 232432 DEBUG oslo_concurrency.lockutils [req-e3874c55-0629-41c6-bd30-bb11734f55bc req-72d45ebe-2e75-4b14-9e34-6dd4b940b3c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.474 232432 DEBUG oslo_concurrency.lockutils [req-e3874c55-0629-41c6-bd30-bb11734f55bc req-72d45ebe-2e75-4b14-9e34-6dd4b940b3c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.474 232432 DEBUG oslo_concurrency.lockutils [req-e3874c55-0629-41c6-bd30-bb11734f55bc req-72d45ebe-2e75-4b14-9e34-6dd4b940b3c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.475 232432 DEBUG nova.compute.manager [req-e3874c55-0629-41c6-bd30-bb11734f55bc req-72d45ebe-2e75-4b14-9e34-6dd4b940b3c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Processing event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.476 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.480 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.484 232432 INFO nova.virt.libvirt.driver [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Instance spawned successfully.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.484 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.541 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.544 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.622 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.622 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407719.390937, 994f08a3-185a-4f37-a457-a99f66bba646 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.622 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] VM Paused (Lifecycle Event)
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.628 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.628 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.629 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.629 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.629 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 podman[347127]: 2025-11-29 09:15:19.630275916 +0000 UTC m=+0.102711690 container create fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.630 232432 DEBUG nova.virt.libvirt.driver [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 09:15:19 compute-2 podman[347127]: 2025-11-29 09:15:19.551244016 +0000 UTC m=+0.023679790 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.659 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.663 232432 DEBUG nova.virt.driver [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] Emitting event <LifecycleEvent: 1764407719.4792554, 994f08a3-185a-4f37-a457-a99f66bba646 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.663 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] VM Resumed (Lifecycle Event)
Nov 29 09:15:19 compute-2 systemd[1]: Started libpod-conmon-fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64.scope.
Nov 29 09:15:19 compute-2 systemd[1]: Started libcrun container.
Nov 29 09:15:19 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/633d344d03b717aa7da5909973b26cd3486334bc7922a71a220173a9542f01a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 09:15:19 compute-2 podman[347127]: 2025-11-29 09:15:19.733627283 +0000 UTC m=+0.206063057 container init fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 09:15:19 compute-2 podman[347127]: 2025-11-29 09:15:19.746613862 +0000 UTC m=+0.219049636 container start fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:15:19 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [NOTICE]   (347146) : New worker (347148) forked
Nov 29 09:15:19 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [NOTICE]   (347146) : Loading success.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.771 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.776 232432 DEBUG nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.814 232432 INFO nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Took 7.02 seconds to spawn the instance on the hypervisor.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.815 232432 DEBUG nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.817 232432 INFO nova.compute.manager [None req-84b1abe1-56d2-4e92-9b80-53c04e58a274 - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.915 232432 INFO nova.compute.manager [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Took 10.62 seconds to build instance.
Nov 29 09:15:19 compute-2 nova_compute[232428]: 2025-11-29 09:15:19.965 232432 DEBUG oslo_concurrency.lockutils [None req-a30ccb0b-b822-4b28-9f14-9af825fef8cf 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:20 compute-2 nova_compute[232428]: 2025-11-29 09:15:20.151 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:20.153 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:15:20 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:20.154 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:15:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:21 compute-2 ceph-mon[77138]: pgmap v4212: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 14 KiB/s wr, 11 op/s
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.663 232432 DEBUG nova.compute.manager [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.663 232432 DEBUG oslo_concurrency.lockutils [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.664 232432 DEBUG oslo_concurrency.lockutils [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.664 232432 DEBUG oslo_concurrency.lockutils [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.664 232432 DEBUG nova.compute.manager [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] No waiting events found dispatching network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:15:21 compute-2 nova_compute[232428]: 2025-11-29 09:15:21.664 232432 WARNING nova.compute.manager [req-b9608d9e-2ba4-4070-964f-3260c98e6ef3 req-20a33921-b3aa-42b9-8875-33cf1db7ac51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received unexpected event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a for instance with vm_state active and task_state None.
Nov 29 09:15:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:22.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:22 compute-2 nova_compute[232428]: 2025-11-29 09:15:22.244 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:23 compute-2 ceph-mon[77138]: pgmap v4213: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 14 KiB/s wr, 10 op/s
Nov 29 09:15:23 compute-2 nova_compute[232428]: 2025-11-29 09:15:23.850 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:24.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:24 compute-2 nova_compute[232428]: 2025-11-29 09:15:24.236 232432 DEBUG nova.compute.manager [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:24 compute-2 nova_compute[232428]: 2025-11-29 09:15:24.236 232432 DEBUG nova.compute.manager [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing instance network info cache due to event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:15:24 compute-2 nova_compute[232428]: 2025-11-29 09:15:24.237 232432 DEBUG oslo_concurrency.lockutils [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:15:24 compute-2 nova_compute[232428]: 2025-11-29 09:15:24.237 232432 DEBUG oslo_concurrency.lockutils [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:15:24 compute-2 nova_compute[232428]: 2025-11-29 09:15:24.237 232432 DEBUG nova.network.neutron [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:15:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:24 compute-2 ceph-mon[77138]: pgmap v4214: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 441 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 09:15:25 compute-2 nova_compute[232428]: 2025-11-29 09:15:25.525 232432 DEBUG nova.network.neutron [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updated VIF entry in instance network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:15:25 compute-2 nova_compute[232428]: 2025-11-29 09:15:25.526 232432 DEBUG nova.network.neutron [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:25 compute-2 nova_compute[232428]: 2025-11-29 09:15:25.547 232432 DEBUG oslo_concurrency.lockutils [req-0bf4a9a1-e871-4e06-b600-620c0d637232 req-c22e2824-e986-49ae-a81b-d4eabe502320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:15:25 compute-2 podman[347160]: 2025-11-29 09:15:25.653498429 +0000 UTC m=+0.059289304 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:15:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:26.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:27 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:27.156 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:27 compute-2 nova_compute[232428]: 2025-11-29 09:15:27.246 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:27 compute-2 ceph-mon[77138]: pgmap v4215: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 09:15:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:28.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:15:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1066162615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:15:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:15:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1066162615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:15:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:28.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:28 compute-2 sudo[347181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:28 compute-2 sudo[347181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:28 compute-2 sudo[347181]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:28 compute-2 sudo[347206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:28 compute-2 sudo[347206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:28 compute-2 sudo[347206]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1066162615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:15:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1066162615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:15:28 compute-2 nova_compute[232428]: 2025-11-29 09:15:28.853 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:29 compute-2 ceph-mon[77138]: pgmap v4216: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 09:15:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:30.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:30 compute-2 ceph-mon[77138]: pgmap v4217: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 09:15:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:32.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:32 compute-2 nova_compute[232428]: 2025-11-29 09:15:32.248 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:32.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:32 compute-2 ovn_controller[134375]: 2025-11-29T09:15:32Z|00135|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Nov 29 09:15:32 compute-2 ovn_controller[134375]: 2025-11-29T09:15:32Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:27:cd:95 10.100.0.10
Nov 29 09:15:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:33 compute-2 ceph-mon[77138]: pgmap v4218: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 68 op/s
Nov 29 09:15:33 compute-2 nova_compute[232428]: 2025-11-29 09:15:33.854 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:34.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:34.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:35 compute-2 ceph-mon[77138]: pgmap v4219: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 411 KiB/s wr, 84 op/s
Nov 29 09:15:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:36.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:36 compute-2 podman[347236]: 2025-11-29 09:15:36.686439025 +0000 UTC m=+0.078293229 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:15:37 compute-2 nova_compute[232428]: 2025-11-29 09:15:37.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:37 compute-2 nova_compute[232428]: 2025-11-29 09:15:37.249 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:37 compute-2 ceph-mon[77138]: pgmap v4220: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 504 KiB/s wr, 102 op/s
Nov 29 09:15:37 compute-2 ovn_controller[134375]: 2025-11-29T09:15:37Z|00137|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Nov 29 09:15:37 compute-2 ovn_controller[134375]: 2025-11-29T09:15:37Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:27:cd:95 10.100.0.10
Nov 29 09:15:37 compute-2 ovn_controller[134375]: 2025-11-29T09:15:37Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:cd:95 10.100.0.10
Nov 29 09:15:37 compute-2 ovn_controller[134375]: 2025-11-29T09:15:37Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:cd:95 10.100.0.10
Nov 29 09:15:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:38.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:38 compute-2 nova_compute[232428]: 2025-11-29 09:15:38.857 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:39 compute-2 ceph-mon[77138]: pgmap v4221: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 54 op/s
Nov 29 09:15:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:40.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:40 compute-2 ceph-mon[77138]: pgmap v4222: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 517 KiB/s wr, 56 op/s
Nov 29 09:15:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:42 compute-2 nova_compute[232428]: 2025-11-29 09:15:42.253 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:42.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:42 compute-2 ceph-mon[77138]: pgmap v4223: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 587 KiB/s wr, 57 op/s
Nov 29 09:15:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:43 compute-2 nova_compute[232428]: 2025-11-29 09:15:43.859 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:44.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:44 compute-2 ceph-mon[77138]: pgmap v4224: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 587 KiB/s wr, 57 op/s
Nov 29 09:15:45 compute-2 nova_compute[232428]: 2025-11-29 09:15:45.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:46.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:47 compute-2 ceph-mon[77138]: pgmap v4225: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 766 KiB/s rd, 179 KiB/s wr, 39 op/s
Nov 29 09:15:47 compute-2 nova_compute[232428]: 2025-11-29 09:15:47.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:47 compute-2 nova_compute[232428]: 2025-11-29 09:15:47.256 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:48.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:48 compute-2 sudo[347262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:48 compute-2 sudo[347262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:48 compute-2 sudo[347262]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:48 compute-2 sudo[347287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:15:48 compute-2 sudo[347287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:15:48 compute-2 sudo[347287]: pam_unix(sudo:session): session closed for user root
Nov 29 09:15:48 compute-2 nova_compute[232428]: 2025-11-29 09:15:48.861 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:49 compute-2 ceph-mon[77138]: pgmap v4226: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 87 KiB/s wr, 3 op/s
Nov 29 09:15:49 compute-2 podman[347312]: 2025-11-29 09:15:49.670976064 +0000 UTC m=+0.080298879 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 09:15:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:50.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:51 compute-2 ceph-mon[77138]: pgmap v4227: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 87 KiB/s wr, 3 op/s
Nov 29 09:15:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:52.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:52 compute-2 nova_compute[232428]: 2025-11-29 09:15:52.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:52 compute-2 nova_compute[232428]: 2025-11-29 09:15:52.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:15:52 compute-2 nova_compute[232428]: 2025-11-29 09:15:52.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:15:52 compute-2 nova_compute[232428]: 2025-11-29 09:15:52.259 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:52.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:53 compute-2 nova_compute[232428]: 2025-11-29 09:15:53.074 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:15:53 compute-2 nova_compute[232428]: 2025-11-29 09:15:53.074 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquired lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:15:53 compute-2 nova_compute[232428]: 2025-11-29 09:15:53.075 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 09:15:53 compute-2 nova_compute[232428]: 2025-11-29 09:15:53.075 232432 DEBUG nova.objects.instance [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 994f08a3-185a-4f37-a457-a99f66bba646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:15:53 compute-2 ceph-mon[77138]: pgmap v4228: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 74 KiB/s wr, 1 op/s
Nov 29 09:15:53 compute-2 ovn_controller[134375]: 2025-11-29T09:15:53Z|01021|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 09:15:53 compute-2 nova_compute[232428]: 2025-11-29 09:15:53.863 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:54 compute-2 ceph-mon[77138]: pgmap v4229: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 43 KiB/s wr, 2 op/s
Nov 29 09:15:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:54.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.568 232432 DEBUG nova.network.neutron [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.599 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Releasing lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.599 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.600 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.601 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.633 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.634 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.634 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.634 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:15:54 compute-2 nova_compute[232428]: 2025-11-29 09:15:54.635 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:55 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:15:55 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/737407428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.116 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/737407428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.819 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.820 232432 DEBUG nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.992 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.993 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3937MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.993 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:55 compute-2 nova_compute[232428]: 2025-11-29 09:15:55.993 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.089 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Instance 994f08a3-185a-4f37-a457-a99f66bba646 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.089 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.089 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:15:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:56.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.142 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 09:15:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:56.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 09:15:56 compute-2 ceph-mon[77138]: pgmap v4230: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 53 KiB/s wr, 4 op/s
Nov 29 09:15:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:15:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3251169094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.623 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.629 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.647 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:15:56 compute-2 podman[347385]: 2025-11-29 09:15:56.66023237 +0000 UTC m=+0.052267329 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.673 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.674 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.883 232432 DEBUG nova.compute.manager [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.884 232432 DEBUG nova.compute.manager [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing instance network info cache due to event network-changed-10ee3c8d-ae72-4761-9f39-b637dc5e841a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.884 232432 DEBUG oslo_concurrency.lockutils [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.885 232432 DEBUG oslo_concurrency.lockutils [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.885 232432 DEBUG nova.network.neutron [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Refreshing network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.972 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.972 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.972 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.973 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.973 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.974 232432 INFO nova.compute.manager [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Terminating instance
Nov 29 09:15:56 compute-2 nova_compute[232428]: 2025-11-29 09:15:56.975 232432 DEBUG nova.compute.manager [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 09:15:57 compute-2 kernel: tap10ee3c8d-ae (unregistering): left promiscuous mode
Nov 29 09:15:57 compute-2 NetworkManager[48993]: <info>  [1764407757.0373] device (tap10ee3c8d-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 09:15:57 compute-2 ovn_controller[134375]: 2025-11-29T09:15:57Z|01022|binding|INFO|Releasing lport 10ee3c8d-ae72-4761-9f39-b637dc5e841a from this chassis (sb_readonly=0)
Nov 29 09:15:57 compute-2 ovn_controller[134375]: 2025-11-29T09:15:57Z|01023|binding|INFO|Setting lport 10ee3c8d-ae72-4761-9f39-b637dc5e841a down in Southbound
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.048 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 ovn_controller[134375]: 2025-11-29T09:15:57Z|01024|binding|INFO|Removing iface tap10ee3c8d-ae ovn-installed in OVS
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.055 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:cd:95 10.100.0.10'], port_security=['fa:16:3e:27:cd:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '994f08a3-185a-4f37-a457-a99f66bba646', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>], logical_port=10ee3c8d-ae72-4761-9f39-b637dc5e841a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0430a7e8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.056 143801 INFO neutron.agent.ovn.metadata.agent [-] Port 10ee3c8d-ae72-4761-9f39-b637dc5e841a in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.057 143801 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.060 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e431174a-fd99-43d4-b57b-b563ced09e18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.060 143801 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.066 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Nov 29 09:15:57 compute-2 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000e0.scope: Consumed 15.234s CPU time.
Nov 29 09:15:57 compute-2 systemd-machined[194747]: Machine qemu-104-instance-000000e0 terminated.
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [NOTICE]   (347146) : haproxy version is 2.8.14-c23fe91
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [NOTICE]   (347146) : path to executable is /usr/sbin/haproxy
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [WARNING]  (347146) : Exiting Master process...
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [WARNING]  (347146) : Exiting Master process...
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [ALERT]    (347146) : Current worker (347148) exited with code 143 (Terminated)
Nov 29 09:15:57 compute-2 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[347142]: [WARNING]  (347146) : All workers exited. Exiting... (0)
Nov 29 09:15:57 compute-2 systemd[1]: libpod-fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64.scope: Deactivated successfully.
Nov 29 09:15:57 compute-2 podman[347431]: 2025-11-29 09:15:57.192360921 +0000 UTC m=+0.044731466 container died fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 09:15:57 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64-userdata-shm.mount: Deactivated successfully.
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.216 232432 INFO nova.virt.libvirt.driver [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Instance destroyed successfully.
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.216 232432 DEBUG nova.objects.instance [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid 994f08a3-185a-4f37-a457-a99f66bba646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 09:15:57 compute-2 systemd[1]: var-lib-containers-storage-overlay-633d344d03b717aa7da5909973b26cd3486334bc7922a71a220173a9542f01a1-merged.mount: Deactivated successfully.
Nov 29 09:15:57 compute-2 podman[347431]: 2025-11-29 09:15:57.229978378 +0000 UTC m=+0.082348923 container cleanup fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 09:15:57 compute-2 systemd[1]: libpod-conmon-fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64.scope: Deactivated successfully.
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.260 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 podman[347471]: 2025-11-29 09:15:57.28923606 +0000 UTC m=+0.039953880 container remove fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.295 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[54de535c-4927-406c-b32e-0efc74707a77]: (4, ('Sat Nov 29 09:15:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64)\nfbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64\nSat Nov 29 09:15:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (fbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64)\nfbc502d4aae0ded89c96f25947c06a2919eb4151331ad706665f060412e3db64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.297 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3d5d72-fa0d-4f2c-9d43-7e35c082dd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.298 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.298 232432 DEBUG nova.virt.libvirt.vif [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1138991441',display_name='tempest-TestVolumeBootPattern-server-1138991441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1138991441',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:15:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-n3d8n714',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:15:19Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=994f08a3-185a-4f37-a457-a99f66bba646,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.298 232432 DEBUG nova.network.os_vif_util [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.299 232432 DEBUG nova.network.os_vif_util [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.300 232432 DEBUG os_vif [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 09:15:57 compute-2 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.302 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.303 232432 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10ee3c8d-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.306 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.320 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.322 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.323 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b73ad7-a1d4-4309-a5bd-147a85c46d6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.325 232432 INFO os_vif [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:cd:95,bridge_name='br-int',has_traffic_filtering=True,id=10ee3c8d-ae72-4761-9f39-b637dc5e841a,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ee3c8d-ae')
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.336 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[6a27fec3-4f29-47e9-bb8b-b5078343dd1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.337 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[830ff5c9-7fd4-4dfb-bd8b-c1d0b151cc4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.354 238475 DEBUG oslo.privsep.daemon [-] privsep: reply[9bccae78-044b-445c-9925-0436a1105014]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068518, 'reachable_time': 27946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347497, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.358 143917 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 09:15:57 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:57.359 143917 DEBUG oslo.privsep.daemon [-] privsep: reply[132ac2f9-773e-4a36-a0fc-34dc17b2649d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.390 232432 DEBUG nova.compute.manager [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-unplugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.390 232432 DEBUG oslo_concurrency.lockutils [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.390 232432 DEBUG oslo_concurrency.lockutils [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.391 232432 DEBUG oslo_concurrency.lockutils [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.391 232432 DEBUG nova.compute.manager [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] No waiting events found dispatching network-vif-unplugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.391 232432 DEBUG nova.compute.manager [req-310c909c-ada8-4f37-aac8-bdfce6066232 req-4ea936ca-c5e9-4c87-90ac-5abf70e672f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-unplugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 09:15:57 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3251169094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.737 232432 INFO nova.virt.libvirt.driver [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Deleting instance files /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646_del
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.738 232432 INFO nova.virt.libvirt.driver [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Deletion of /var/lib/nova/instances/994f08a3-185a-4f37-a457-a99f66bba646_del complete
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.797 232432 INFO nova.compute.manager [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.798 232432 DEBUG oslo.service.loopingcall [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.799 232432 DEBUG nova.compute.manager [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 09:15:57 compute-2 nova_compute[232428]: 2025-11-29 09:15:57.799 232432 DEBUG nova.network.neutron [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 09:15:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:15:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:15:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.274 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.274 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:15:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:15:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:15:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:58.465 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:15:58 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:15:58.466 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.474 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.484 232432 DEBUG nova.network.neutron [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.507 232432 INFO nova.compute.manager [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Took 0.71 seconds to deallocate network for instance.
Nov 29 09:15:58 compute-2 ceph-mon[77138]: pgmap v4231: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 50 KiB/s wr, 4 op/s
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.568 232432 DEBUG nova.compute.manager [req-91afdf8e-5551-43bb-bc72-a0f52761da55 req-eb099dbc-0656-4309-856d-f89642d22ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-deleted-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.665 232432 DEBUG nova.network.neutron [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updated VIF entry in instance network info cache for port 10ee3c8d-ae72-4761-9f39-b637dc5e841a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.665 232432 DEBUG nova.network.neutron [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Updating instance_info_cache with network_info: [{"id": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "address": "fa:16:3e:27:cd:95", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ee3c8d-ae", "ovs_interfaceid": "10ee3c8d-ae72-4761-9f39-b637dc5e841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.688 232432 DEBUG oslo_concurrency.lockutils [req-755286ea-3b76-401f-938e-1ca87e86a3bb req-a1a8bd45-a480-4c80-ae56-1fe30f42219b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-994f08a3-185a-4f37-a457-a99f66bba646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.732 232432 INFO nova.compute.manager [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Took 0.22 seconds to detach 1 volumes for instance.
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.778 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.779 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.814 232432 DEBUG oslo_concurrency.processutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:15:58 compute-2 nova_compute[232428]: 2025-11-29 09:15:58.865 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:15:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:15:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1163033203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.250 232432 DEBUG oslo_concurrency.processutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.258 232432 DEBUG nova.compute.provider_tree [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.277 232432 DEBUG nova.scheduler.client.report [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.299 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.326 232432 INFO nova.scheduler.client.report [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance 994f08a3-185a-4f37-a457-a99f66bba646
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.392 232432 DEBUG oslo_concurrency.lockutils [None req-d312f3f7-dbf9-40b3-91c9-fba2e636602e 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.498 232432 DEBUG nova.compute.manager [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.499 232432 DEBUG oslo_concurrency.lockutils [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "994f08a3-185a-4f37-a457-a99f66bba646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.499 232432 DEBUG oslo_concurrency.lockutils [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.499 232432 DEBUG oslo_concurrency.lockutils [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "994f08a3-185a-4f37-a457-a99f66bba646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.499 232432 DEBUG nova.compute.manager [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] No waiting events found dispatching network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 09:15:59 compute-2 nova_compute[232428]: 2025-11-29 09:15:59.500 232432 WARNING nova.compute.manager [req-5df79b90-20f6-49e8-b103-f66babe19c85 req-dfbe4fe2-20c5-4d4e-9b75-ef58db13dc3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Received unexpected event network-vif-plugged-10ee3c8d-ae72-4761-9f39-b637dc5e841a for instance with vm_state deleted and task_state None.
Nov 29 09:15:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1163033203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:00.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:16:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2799124077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:16:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:16:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2799124077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:16:00 compute-2 ceph-mon[77138]: pgmap v4232: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 188 KiB/s rd, 50 KiB/s wr, 19 op/s
Nov 29 09:16:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2799124077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:16:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2799124077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:16:01 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 09:16:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:02 compute-2 nova_compute[232428]: 2025-11-29 09:16:02.305 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:02.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:02 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:16:02.468 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:16:02 compute-2 ceph-mon[77138]: osdmap e438: 3 total, 3 up, 3 in
Nov 29 09:16:02 compute-2 ceph-mon[77138]: pgmap v4234: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 60 KiB/s wr, 27 op/s
Nov 29 09:16:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3940384204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:16:03.384 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:16:03.385 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:16:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:16:03.385 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:16:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1495066546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1677203671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2643356207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:03 compute-2 nova_compute[232428]: 2025-11-29 09:16:03.868 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:04.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:04 compute-2 ceph-mon[77138]: pgmap v4235: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 278 KiB/s rd, 14 KiB/s wr, 43 op/s
Nov 29 09:16:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1278760975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:06.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:07 compute-2 ceph-mon[77138]: pgmap v4236: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Nov 29 09:16:07 compute-2 nova_compute[232428]: 2025-11-29 09:16:07.307 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 e439: 3 total, 3 up, 3 in
Nov 29 09:16:07 compute-2 podman[347534]: 2025-11-29 09:16:07.701183463 +0000 UTC m=+0.098172110 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:16:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:08.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:08 compute-2 sudo[347557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:08 compute-2 sudo[347557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 sudo[347557]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:08 compute-2 sudo[347580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:08 compute-2 sudo[347580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 sudo[347580]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:08 compute-2 auditd[705]: Audit daemon rotating log files
Nov 29 09:16:08 compute-2 sudo[347595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:16:08 compute-2 sudo[347595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 sudo[347595]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:08 compute-2 ceph-mon[77138]: osdmap e439: 3 total, 3 up, 3 in
Nov 29 09:16:08 compute-2 ceph-mon[77138]: pgmap v4238: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 151 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 09:16:08 compute-2 sudo[347630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:08 compute-2 sudo[347630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 sudo[347630]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:08 compute-2 sudo[347650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:08 compute-2 sudo[347650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 sudo[347650]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:08 compute-2 sudo[347682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 09:16:08 compute-2 sudo[347682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:08 compute-2 nova_compute[232428]: 2025-11-29 09:16:08.870 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:16:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728612272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:16:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:16:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728612272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:16:09 compute-2 nova_compute[232428]: 2025-11-29 09:16:09.191 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:09 compute-2 podman[347781]: 2025-11-29 09:16:09.384636563 +0000 UTC m=+0.070136217 container exec 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 09:16:09 compute-2 podman[347781]: 2025-11-29 09:16:09.480198021 +0000 UTC m=+0.165697665 container exec_died 295c934bfc1c6c545777789a406cdd29926d81c8c5f4e3bca652b40f7c8119fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2728612272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2728612272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:09 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:10 compute-2 podman[347936]: 2025-11-29 09:16:10.099269895 +0000 UTC m=+0.067708143 container exec 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 09:16:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:10.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:10 compute-2 podman[347936]: 2025-11-29 09:16:10.133290662 +0000 UTC m=+0.101728880 container exec_died 9f7088711f84b3d8fb8bed90176942fbb36e4103bb013b251fa531c9bc6dd89d (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-2-goeiuk)
Nov 29 09:16:10 compute-2 podman[347998]: 2025-11-29 09:16:10.353470581 +0000 UTC m=+0.056423705 container exec bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, description=keepalived for Ceph, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 09:16:10 compute-2 podman[347998]: 2025-11-29 09:16:10.369750242 +0000 UTC m=+0.072703356 container exec_died bba1b0d057415db42acbb6367f5b6d068b0b964d3703c5458ee73f6dd81636ed (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-2-gecapa, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, version=2.2.4, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 09:16:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:10 compute-2 sudo[347682]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:10 compute-2 sudo[348031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:10 compute-2 sudo[348031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:10 compute-2 sudo[348031]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:10 compute-2 sudo[348056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:16:10 compute-2 sudo[348056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:10 compute-2 sudo[348056]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:10 compute-2 sudo[348081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:10 compute-2 sudo[348081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:10 compute-2 sudo[348081]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:10 compute-2 sudo[348106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:16:10 compute-2 sudo[348106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:10 compute-2 ceph-mon[77138]: pgmap v4239: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 09:16:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:10 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:11 compute-2 sudo[348106]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:16:11 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:16:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:12 compute-2 nova_compute[232428]: 2025-11-29 09:16:12.213 232432 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407757.2122934, 994f08a3-185a-4f37-a457-a99f66bba646 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 09:16:12 compute-2 nova_compute[232428]: 2025-11-29 09:16:12.214 232432 INFO nova.compute.manager [-] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] VM Stopped (Lifecycle Event)
Nov 29 09:16:12 compute-2 nova_compute[232428]: 2025-11-29 09:16:12.234 232432 DEBUG nova.compute.manager [None req-b1255dfd-e9ce-4f02-8bf3-7bcd1c12236e - - - - - -] [instance: 994f08a3-185a-4f37-a457-a99f66bba646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 09:16:12 compute-2 nova_compute[232428]: 2025-11-29 09:16:12.309 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:12 compute-2 sshd-session[348163]: Invalid user ubuntu from 45.148.10.240 port 39256
Nov 29 09:16:12 compute-2 sshd-session[348163]: Connection closed by invalid user ubuntu 45.148.10.240 port 39256 [preauth]
Nov 29 09:16:12 compute-2 ceph-mon[77138]: pgmap v4240: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 60 op/s
Nov 29 09:16:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:13 compute-2 nova_compute[232428]: 2025-11-29 09:16:13.872 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:15 compute-2 ceph-mon[77138]: pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 KiB/s wr, 53 op/s
Nov 29 09:16:15 compute-2 nova_compute[232428]: 2025-11-29 09:16:15.234 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:15 compute-2 nova_compute[232428]: 2025-11-29 09:16:15.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:17 compute-2 ceph-mon[77138]: pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 716 B/s wr, 22 op/s
Nov 29 09:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:17 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:16:17 compute-2 sudo[348168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:17 compute-2 sudo[348168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:17 compute-2 sudo[348168]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:17 compute-2 sudo[348193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:16:17 compute-2 sudo[348193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:17 compute-2 sudo[348193]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:17 compute-2 nova_compute[232428]: 2025-11-29 09:16:17.311 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:18.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:18 compute-2 nova_compute[232428]: 2025-11-29 09:16:18.873 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:19 compute-2 ceph-mon[77138]: pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 696 B/s wr, 21 op/s
Nov 29 09:16:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:20 compute-2 podman[348220]: 2025-11-29 09:16:20.763360429 +0000 UTC m=+0.155479381 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:16:21 compute-2 ceph-mon[77138]: pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 09:16:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:22 compute-2 nova_compute[232428]: 2025-11-29 09:16:22.312 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:22.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:23 compute-2 ceph-mon[77138]: pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 14 op/s
Nov 29 09:16:23 compute-2 nova_compute[232428]: 2025-11-29 09:16:23.876 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:24.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:25 compute-2 ceph-mon[77138]: pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 09:16:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:27 compute-2 ceph-mon[77138]: pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:27 compute-2 nova_compute[232428]: 2025-11-29 09:16:27.315 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:27 compute-2 podman[348249]: 2025-11-29 09:16:27.674716239 +0000 UTC m=+0.079228156 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 09:16:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 09:16:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 09:16:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:28 compute-2 sudo[348269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:28 compute-2 sudo[348269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:28 compute-2 sudo[348269]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:28 compute-2 nova_compute[232428]: 2025-11-29 09:16:28.877 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:28 compute-2 sudo[348294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:28 compute-2 sudo[348294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:28 compute-2 sudo[348294]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:29 compute-2 ceph-mon[77138]: pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3062304403' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:16:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3062304403' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:16:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:31 compute-2 ceph-mon[77138]: pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:32 compute-2 nova_compute[232428]: 2025-11-29 09:16:32.319 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:33 compute-2 ceph-mon[77138]: pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:33 compute-2 nova_compute[232428]: 2025-11-29 09:16:33.879 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:34 compute-2 ceph-mon[77138]: pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:34.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:37 compute-2 ceph-mon[77138]: pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:37 compute-2 nova_compute[232428]: 2025-11-29 09:16:37.323 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:16:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:16:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:38.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:38 compute-2 podman[348325]: 2025-11-29 09:16:38.663079254 +0000 UTC m=+0.071267842 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 09:16:38 compute-2 nova_compute[232428]: 2025-11-29 09:16:38.880 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:39 compute-2 ceph-mon[77138]: pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:39 compute-2 nova_compute[232428]: 2025-11-29 09:16:39.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:41 compute-2 ceph-mon[77138]: pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:42 compute-2 nova_compute[232428]: 2025-11-29 09:16:42.327 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:43 compute-2 ceph-mon[77138]: pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:43 compute-2 nova_compute[232428]: 2025-11-29 09:16:43.883 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:45 compute-2 ceph-mon[77138]: pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:16:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:16:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:47 compute-2 ceph-mon[77138]: pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:47 compute-2 nova_compute[232428]: 2025-11-29 09:16:47.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:47 compute-2 nova_compute[232428]: 2025-11-29 09:16:47.332 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:48 compute-2 nova_compute[232428]: 2025-11-29 09:16:48.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:48 compute-2 nova_compute[232428]: 2025-11-29 09:16:48.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:48 compute-2 sudo[348350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:48 compute-2 sudo[348350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:48 compute-2 sudo[348350]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:49 compute-2 sudo[348375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:16:49 compute-2 sudo[348375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:16:49 compute-2 sudo[348375]: pam_unix(sudo:session): session closed for user root
Nov 29 09:16:49 compute-2 ceph-mon[77138]: pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:50 compute-2 ceph-mon[77138]: pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:51 compute-2 podman[348401]: 2025-11-29 09:16:51.6884549 +0000 UTC m=+0.090571636 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 09:16:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:52 compute-2 nova_compute[232428]: 2025-11-29 09:16:52.334 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:52.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:53 compute-2 ceph-mon[77138]: pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.232 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.232 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.293 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.294 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.294 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.294 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.294 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:16:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:16:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3311436576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.725 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.886 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.932 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.934 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4147MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.934 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.934 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.987 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:16:53 compute-2 nova_compute[232428]: 2025-11-29 09:16:53.988 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.007 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:16:54 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3311436576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:54 compute-2 ovn_controller[134375]: 2025-11-29T09:16:54Z|01025|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 09:16:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:16:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/136202890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.435 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.445 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:16:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.516 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.546 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:16:54 compute-2 nova_compute[232428]: 2025-11-29 09:16:54.547 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:16:55 compute-2 ceph-mon[77138]: pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:55 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/136202890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:16:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:16:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:56.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:16:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:56 compute-2 nova_compute[232428]: 2025-11-29 09:16:56.537 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:56 compute-2 nova_compute[232428]: 2025-11-29 09:16:56.539 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:56 compute-2 nova_compute[232428]: 2025-11-29 09:16:56.539 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:16:57 compute-2 ceph-mon[77138]: pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:16:57 compute-2 nova_compute[232428]: 2025-11-29 09:16:57.338 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:16:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:58.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:16:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:16:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:58.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:16:58 compute-2 podman[348474]: 2025-11-29 09:16:58.664778038 +0000 UTC m=+0.074696108 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:16:58 compute-2 nova_compute[232428]: 2025-11-29 09:16:58.889 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:16:59 compute-2 ceph-mon[77138]: pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:17:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:00.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:17:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:01 compute-2 nova_compute[232428]: 2025-11-29 09:17:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:01 compute-2 nova_compute[232428]: 2025-11-29 09:17:01.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:17:01 compute-2 ceph-mon[77138]: pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:02 compute-2 nova_compute[232428]: 2025-11-29 09:17:02.341 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:02 compute-2 ceph-mon[77138]: pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:17:03.386 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:17:03.386 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:17:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:17:03.386 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:17:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1974728699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:03 compute-2 nova_compute[232428]: 2025-11-29 09:17:03.891 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:17:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:17:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:04.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4053162053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:04 compute-2 ceph-mon[77138]: pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2176026491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1643408493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:06.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:06 compute-2 ceph-mon[77138]: pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:07 compute-2 nova_compute[232428]: 2025-11-29 09:17:07.344 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.711082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827711144, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 1706, "num_deletes": 252, "total_data_size": 3932775, "memory_usage": 3995096, "flush_reason": "Manual Compaction"}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827724269, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 1604665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93782, "largest_seqno": 95483, "table_properties": {"data_size": 1599186, "index_size": 2682, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14502, "raw_average_key_size": 21, "raw_value_size": 1587073, "raw_average_value_size": 2310, "num_data_blocks": 119, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407692, "oldest_key_time": 1764407692, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 13265 microseconds, and 4881 cpu microseconds.
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.724333) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 1604665 bytes OK
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.724368) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.726613) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.726710) EVENT_LOG_v1 {"time_micros": 1764407827726691, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.726760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3925045, prev total WAL file size 3925045, number of live WAL files 2.
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.729188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323638' seq:72057594037927935, type:22 .. '6D6772737461740033353230' seq:0, type:0; will stop at (end)
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(1567KB)], [192(13MB)]
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827729392, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 15700752, "oldest_snapshot_seqno": -1}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11767 keys, 12760500 bytes, temperature: kUnknown
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827845207, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 12760500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12688562, "index_size": 41425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29445, "raw_key_size": 311525, "raw_average_key_size": 26, "raw_value_size": 12486787, "raw_average_value_size": 1061, "num_data_blocks": 1562, "num_entries": 11767, "num_filter_entries": 11767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.845686) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12760500 bytes
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.848569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.4 rd, 110.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(17.7) write-amplify(8.0) OK, records in: 12230, records dropped: 463 output_compression: NoCompression
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.848601) EVENT_LOG_v1 {"time_micros": 1764407827848586, "job": 124, "event": "compaction_finished", "compaction_time_micros": 115985, "compaction_time_cpu_micros": 60652, "output_level": 6, "num_output_files": 1, "total_output_size": 12760500, "num_input_records": 12230, "num_output_records": 11767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827850202, "job": 124, "event": "table_file_deletion", "file_number": 194}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827855598, "job": 124, "event": "table_file_deletion", "file_number": 192}
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.729022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.855719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.855727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.855730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.855733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:07 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:17:07.855736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:17:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:08 compute-2 ceph-mon[77138]: pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:08 compute-2 nova_compute[232428]: 2025-11-29 09:17:08.895 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:09 compute-2 sudo[348499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:09 compute-2 sudo[348499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:09 compute-2 sudo[348499]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:09 compute-2 sudo[348530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:09 compute-2 sudo[348530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:09 compute-2 podman[348523]: 2025-11-29 09:17:09.236684809 +0000 UTC m=+0.069862979 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 09:17:09 compute-2 sudo[348530]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:11 compute-2 ceph-mon[77138]: pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:12.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:12 compute-2 nova_compute[232428]: 2025-11-29 09:17:12.348 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:13 compute-2 ceph-mon[77138]: pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:13 compute-2 nova_compute[232428]: 2025-11-29 09:17:13.898 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:15 compute-2 ceph-mon[77138]: pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:17 compute-2 sudo[348571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:17 compute-2 sudo[348571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:17 compute-2 sudo[348571]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:17 compute-2 sudo[348596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:17:17 compute-2 sudo[348596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:17 compute-2 sudo[348596]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:17 compute-2 nova_compute[232428]: 2025-11-29 09:17:17.352 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:17 compute-2 ceph-mon[77138]: pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:17 compute-2 sudo[348621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:17 compute-2 sudo[348621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:17 compute-2 sudo[348621]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:17 compute-2 sudo[348646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:17:17 compute-2 sudo[348646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:17 compute-2 sudo[348646]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:18.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:17:18 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:18.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:18 compute-2 sshd-session[348703]: Accepted publickey for zuul from 192.168.122.10 port 56940 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 09:17:18 compute-2 systemd-logind[787]: New session 60 of user zuul.
Nov 29 09:17:18 compute-2 systemd[1]: Started Session 60 of User zuul.
Nov 29 09:17:18 compute-2 sshd-session[348703]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 09:17:18 compute-2 sudo[348707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 09:17:18 compute-2 sudo[348707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 09:17:18 compute-2 nova_compute[232428]: 2025-11-29 09:17:18.901 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:19 compute-2 ceph-mon[77138]: pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:20.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:22.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:22 compute-2 podman[348912]: 2025-11-29 09:17:22.338198875 +0000 UTC m=+0.108079964 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 09:17:22 compute-2 nova_compute[232428]: 2025-11-29 09:17:22.353 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:22 compute-2 ceph-mon[77138]: from='client.43683 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:22 compute-2 ceph-mon[77138]: pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:22 compute-2 ceph-mon[77138]: from='client.43689 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:22 compute-2 ceph-mon[77138]: from='client.50432 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 09:17:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2546249879' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.50438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.47428 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3523648440' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.47437 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1515366372' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2546249879' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:23 compute-2 nova_compute[232428]: 2025-11-29 09:17:23.903 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:24 compute-2 ceph-mon[77138]: pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:26 compute-2 ovs-vsctl[349021]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 09:17:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:26.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:26.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:27 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 09:17:27 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 09:17:27 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 09:17:27 compute-2 ceph-mon[77138]: pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:27 compute-2 nova_compute[232428]: 2025-11-29 09:17:27.355 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:27 compute-2 sudo[349259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:27 compute-2 sudo[349259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:27 compute-2 sudo[349259]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:27 compute-2 sudo[349307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:17:27 compute-2 sudo[349307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:27 compute-2 sudo[349307]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:27 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: cache status {prefix=cache status} (starting...)
Nov 29 09:17:27 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: client ls {prefix=client ls} (starting...)
Nov 29 09:17:27 compute-2 lvm[349395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 09:17:27 compute-2 lvm[349395]: VG ceph_vg0 finished
Nov 29 09:17:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:17:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951965200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:17:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:17:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951965200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:17:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:28.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:17:28 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:17:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 09:17:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 09:17:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3503979759' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 09:17:28 compute-2 nova_compute[232428]: 2025-11-29 09:17:28.905 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 09:17:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 09:17:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/57493066' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.50450 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.43710 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.50459 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1951965200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1951965200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/491600817' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3817662681' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.47449 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2218446617' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3321029869' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3503979759' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3264757028' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2372799268' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/57493066' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 09:17:29 compute-2 sudo[349626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:29 compute-2 sudo[349626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:29 compute-2 sudo[349626]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:29 compute-2 sudo[349659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:29 compute-2 sudo[349659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:29 compute-2 podman[349652]: 2025-11-29 09:17:29.409973548 +0000 UTC m=+0.072504551 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 09:17:29 compute-2 sudo[349659]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 09:17:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1415748282' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 09:17:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 09:17:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3359732391' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 09:17:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4114073559' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:17:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: ops {prefix=ops} (starting...)
Nov 29 09:17:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:30.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 09:17:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2901856085' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:30 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: session ls {prefix=session ls} (starting...)
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.43728 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.47479 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.50513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.43782 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2093559059' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1415748282' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3941305374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2589653559' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1528657275' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3795383077' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3359732391' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4114073559' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2709486746' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 09:17:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3582597324' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:30 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: status {prefix=status} (starting...)
Nov 29 09:17:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 09:17:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2704820618' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 09:17:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3378738535' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 09:17:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3395841291' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 09:17:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2165285435' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.47503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.43806 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.50567 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.43824 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.50579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.47539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2901856085' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.47545 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3001446612' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1292861131' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3582597324' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/466745283' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4114203625' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/548445841' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2704820618' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/454002165' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3378738535' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3416001044' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/470528491' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/57399009' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3395841291' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2364403902' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2165285435' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:17:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 09:17:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3335060551' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:17:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:32.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:32 compute-2 nova_compute[232428]: 2025-11-29 09:17:32.357 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 09:17:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/436718315' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:17:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 09:17:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1628480024' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 09:17:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2171717815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.50630 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.43884 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3589149424' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4252873416' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.47605 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3335060551' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1981851145' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3243181954' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3390438926' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/952766660' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/436718315' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1628480024' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1259470410' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/875275919' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 09:17:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2221737088' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 09:17:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1734908931' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:33 compute-2 nova_compute[232428]: 2025-11-29 09:17:33.908 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.43938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.50684 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.47635 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2171717815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.43947 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1506877392' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/862402459' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.50699 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.47647 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2221737088' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2273924913' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/938565693' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1734908931' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/131710880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:42.245026+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:43.245199+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:44.245678+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5267132 data_alloc: 234881024 data_used: 22622208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x55877e6c8800 session 0x558775cda960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558773017000 session 0x558774471c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:45.245842+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:46.246154+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:47.246381+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504471552 unmapped: 110788608 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:48.246565+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:49.246738+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5267000 data_alloc: 234881024 data_used: 22622208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:50.246921+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:51.247235+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:52.247382+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:53.247947+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dc800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.776094437s of 18.072147369s, submitted: 108
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:54.249075+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5266764 data_alloc: 234881024 data_used: 22626304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:55.249301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504479744 unmapped: 110780416 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:56.249623+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bb3c000/0x0/0x1bfc00000, data 0x2eacc01/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504487936 unmapped: 110772224 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:57.249875+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558776704c00 session 0x558775841860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558775776000 session 0x5587758df2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587768f4000 session 0x558775c625a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504487936 unmapped: 110772224 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587750c4000 session 0x558775841680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773016800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558773016800 session 0x558774500b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587750c4000 session 0x558775d5c1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558775776000 session 0x55877b4874a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558776704c00 session 0x5587740d50e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587768f4000 session 0x5587740763c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:58.249997+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 504651776 unmapped: 110608384 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558776fd5400 session 0x5587744394a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587770dc800 session 0x558774077860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:46:59.250175+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587750c4000 session 0x558775c634a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500973568 unmapped: 114286592 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5087497 data_alloc: 218103808 data_used: 10706944
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:00.250326+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500973568 unmapped: 114286592 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:01.250574+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500973568 unmapped: 114286592 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c818000/0x0/0x1bfc00000, data 0x21d1bf1/0x23f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777510800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558777510800 session 0x558775cda960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:02.250818+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c818000/0x0/0x1bfc00000, data 0x21d1bf1/0x23f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500973568 unmapped: 114286592 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558775131400 session 0x55877450f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:03.251014+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500973568 unmapped: 114286592 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558773699400 session 0x558775d090e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587750c4000 session 0x558775f9e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:04.251187+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500981760 unmapped: 114278400 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dc800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.465862274s of 10.652558327s, submitted: 55
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5091284 data_alloc: 218103808 data_used: 10715136
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:05.251395+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500989952 unmapped: 114270208 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c817000/0x0/0x1bfc00000, data 0x21d1c01/0x23f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:06.251650+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:07.251929+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:08.252095+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:09.252290+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5124404 data_alloc: 218103808 data_used: 15302656
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:10.252574+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:11.252798+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c817000/0x0/0x1bfc00000, data 0x21d1c01/0x23f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:12.253155+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:13.253399+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:14.253600+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5124404 data_alloc: 218103808 data_used: 15302656
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:15.253832+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501006336 unmapped: 114253824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c817000/0x0/0x1bfc00000, data 0x21d1c01/0x23f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:16.253995+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.484946251s of 11.490578651s, submitted: 1
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505135104 unmapped: 110125056 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:17.254256+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19be3a000/0x0/0x1bfc00000, data 0x2ba8c01/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506167296 unmapped: 109092864 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:18.254455+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506183680 unmapped: 109076480 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:19.254690+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506183680 unmapped: 109076480 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5223590 data_alloc: 218103808 data_used: 17235968
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:20.254896+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506183680 unmapped: 109076480 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19bdfe000/0x0/0x1bfc00000, data 0x2be1c01/0x2e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:21.255102+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506183680 unmapped: 109076480 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:22.255302+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506183680 unmapped: 109076480 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:23.255629+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506118144 unmapped: 109142016 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:24.255801+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506118144 unmapped: 109142016 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5217858 data_alloc: 218103808 data_used: 17235968
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:25.256013+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19be05000/0x0/0x1bfc00000, data 0x2be3c01/0x2e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506118144 unmapped: 109142016 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:26.256181+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.799043655s of 10.092299461s, submitted: 110
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19be03000/0x0/0x1bfc00000, data 0x2be5c01/0x2e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:27.256472+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:28.256665+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19be02000/0x0/0x1bfc00000, data 0x2be6c01/0x2e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:29.256853+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x558775131400 session 0x5587779570e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x5587770dc800 session 0x5587740d5680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5224202 data_alloc: 218103808 data_used: 17260544
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 heartbeat osd_stat(store_statfs(0x19be02000/0x0/0x1bfc00000, data 0x2be6c01/0x2e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:30.257010+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 ms_handle_reset con 0x55877db82000 session 0x558773229e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:31.257194+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f5800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506126336 unmapped: 109133824 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:32.257383+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x5587768f5800 session 0x558775db9a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x558775131400 session 0x558775f9f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x5587750c4000 session 0x55877b487a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dc800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 518062080 unmapped: 97198080 heap: 615260160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x5587770dc800 session 0x55877b487680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777a0d000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x558777a0d000 session 0x55877450ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a787400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x55877a787400 session 0x558773e421e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x5587750c4000 session 0x5587779572c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 ms_handle_reset con 0x558775131400 session 0x5587758def00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dc800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:33.257509+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _renew_subs
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 412 ms_handle_reset con 0x55877db82000 session 0x55877317e3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777a0d000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 412 ms_handle_reset con 0x5587770dc800 session 0x5587736365a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513122304 unmapped: 106864640 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:34.257709+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x558777a0d000 session 0x5587736172c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513130496 unmapped: 106856448 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5462750 data_alloc: 234881024 data_used: 27103232
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:35.257968+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6b8000/0x0/0x1bfc00000, data 0x432b17c/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513130496 unmapped: 106856448 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775777c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x558775777c00 session 0x5587732330e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x558776704800 session 0x5587736361e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x558775776800 session 0x558775db8780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:36.258153+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6b8000/0x0/0x1bfc00000, data 0x432b17c/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513130496 unmapped: 106856448 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6b8000/0x0/0x1bfc00000, data 0x432b17c/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:37.258393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6b8000/0x0/0x1bfc00000, data 0x432b17c/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513130496 unmapped: 106856448 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:38.258582+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513138688 unmapped: 106848256 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.347458839s of 12.694256783s, submitted: 82
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x5587731d8c00 session 0x55877317f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:39.258742+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d215400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510181376 unmapped: 109805568 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252780 data_alloc: 218103808 data_used: 10723328
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:40.258967+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a697000/0x0/0x1bfc00000, data 0x434f16c/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 ms_handle_reset con 0x55877d215400 session 0x558775c3e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 500146176 unmapped: 119840768 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:41.259142+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:42.259370+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:43.259529+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 heartbeat osd_stat(store_statfs(0x19b632000/0x0/0x1bfc00000, data 0x33b2c9b/0x35db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:44.259710+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5362065 data_alloc: 234881024 data_used: 25640960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 heartbeat osd_stat(store_statfs(0x19b632000/0x0/0x1bfc00000, data 0x33b2c9b/0x35db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:45.259855+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:46.260041+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:47.261079+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 heartbeat osd_stat(store_statfs(0x19b632000/0x0/0x1bfc00000, data 0x33b2c9b/0x35db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:48.261387+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:49.261838+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501784576 unmapped: 118202368 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5362225 data_alloc: 234881024 data_used: 25645056
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e45000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x558785e45000 session 0x5587779570e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587750c4000 session 0x558775f9e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1ab800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x55878c1ab800 session 0x558775d090e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:50.262016+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dd400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587770dd400 session 0x55877450f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.846543312s of 11.679408073s, submitted: 45
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x558775890c00 session 0x558775c634a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x55877db82000 session 0x558775cda960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587750c4000 session 0x5587740d50e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 501792768 unmapped: 118194176 heap: 619986944 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x558775890c00 session 0x558774500b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dd400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587770dd400 session 0x5587758df2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e45000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x558785e45000 session 0x558779c430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:51.262351+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587750c4000 session 0x558773ede780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 507265024 unmapped: 116498432 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x558775890c00 session 0x558775841e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:52.262485+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dd400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x5587770dd400 session 0x558775cda5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 heartbeat osd_stat(store_statfs(0x199f40000/0x0/0x1bfc00000, data 0x4aa3d0d/0x4cce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 507641856 unmapped: 116121600 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:53.262687+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x55877db82000 session 0x55877450eb40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1ab800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 ms_handle_reset con 0x55878c1ab800 session 0x5587758df0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509173760 unmapped: 114589696 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:54.262921+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dd400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509173760 unmapped: 114589696 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5670212 data_alloc: 234881024 data_used: 35618816
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:55.263069+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515383296 unmapped: 108380160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 415 ms_handle_reset con 0x5587770dd400 session 0x5587735cfe00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:56.263255+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515383296 unmapped: 108380160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:57.264236+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515383296 unmapped: 108380160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 415 heartbeat osd_stat(store_statfs(0x199e1e000/0x0/0x1bfc00000, data 0x4bc39ca/0x4df0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:58.264426+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515383296 unmapped: 108380160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:47:59.265212+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515391488 unmapped: 108371968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5641570 data_alloc: 234881024 data_used: 36548608
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:00.265411+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 415 heartbeat osd_stat(store_statfs(0x199dff000/0x0/0x1bfc00000, data 0x4be29ca/0x4e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515391488 unmapped: 108371968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:01.265863+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515391488 unmapped: 108371968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:02.266052+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515391488 unmapped: 108371968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:03.266237+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515391488 unmapped: 108371968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:04.266567+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.589804649s of 14.219210625s, submitted: 193
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515399680 unmapped: 108363776 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5641834 data_alloc: 234881024 data_used: 36548608
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:05.266767+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515399680 unmapped: 108363776 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 415 heartbeat osd_stat(store_statfs(0x199df1000/0x0/0x1bfc00000, data 0x4bf09ca/0x4e1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:06.267034+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515399680 unmapped: 108363776 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:07.267264+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515481600 unmapped: 108281856 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:08.267480+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515481600 unmapped: 108281856 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:09.267720+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515481600 unmapped: 108281856 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5680220 data_alloc: 251658240 data_used: 39473152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:10.267853+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199ded000/0x0/0x1bfc00000, data 0x4bf2509/0x4e20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515481600 unmapped: 108281856 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:11.268129+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515506176 unmapped: 108257280 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:12.268409+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515530752 unmapped: 108232704 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:13.268642+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199de6000/0x0/0x1bfc00000, data 0x4bfa509/0x4e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515530752 unmapped: 108232704 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:14.268817+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75cc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877a75cc00 session 0x55877353cd20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774355c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558774355c00 session 0x558775cdba40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878d63d800 session 0x558775cdb860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558772b9e400 session 0x558775d5cd20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774355c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558774355c00 session 0x55877b4870e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515579904 unmapped: 108183552 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5779855 data_alloc: 251658240 data_used: 41201664
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:15.269149+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.761178970s of 10.962375641s, submitted: 62
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587750c4000 session 0x558774501c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775890c00 session 0x55877353d2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199219000/0x0/0x1bfc00000, data 0x57c7509/0x59f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734ec800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:16.269355+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587734ec800 session 0x558775c632c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:17.269588+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:18.269792+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a18b000/0x0/0x1bfc00000, data 0x40ad487/0x42d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:19.270190+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a18b000/0x0/0x1bfc00000, data 0x40ad487/0x42d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5489606 data_alloc: 234881024 data_used: 26165248
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:20.270396+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:21.270544+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:22.270831+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877db82000 session 0x558774470f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734ec800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a18b000/0x0/0x1bfc00000, data 0x40ad487/0x42d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:23.271174+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 512122880 unmapped: 111640576 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:24.271381+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5576938 data_alloc: 234881024 data_used: 37822464
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:25.271521+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:26.271801+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776fd5800 session 0x558775c3e5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777510800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:27.272002+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558777510800 session 0x558774500960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:28.272220+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a936000/0x0/0x1bfc00000, data 0x40ad487/0x42d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.952484131s of 13.239359856s, submitted: 52
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558778312800 session 0x5587779565a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:29.272436+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5579684 data_alloc: 251658240 data_used: 39657472
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:30.272627+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:31.272834+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a8000 session 0x5587740763c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fdc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587754fdc00 session 0x558779dfde00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fdc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514277376 unmapped: 109486080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:32.273000+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587754fdc00 session 0x558778aa21e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776fd5800 session 0x55877450fe00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514285568 unmapped: 109477888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:33.273177+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514285568 unmapped: 109477888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a49e000/0x0/0x1bfc00000, data 0x4544496/0x4770000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:34.273464+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514285568 unmapped: 109477888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5615470 data_alloc: 251658240 data_used: 39673856
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:35.273653+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a49e000/0x0/0x1bfc00000, data 0x4544496/0x4770000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515629056 unmapped: 108134400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:36.273777+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199ebf000/0x0/0x1bfc00000, data 0x4b23496/0x4d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515637248 unmapped: 108126208 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:37.274010+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:38.274214+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:39.274439+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:40.274610+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5666048 data_alloc: 251658240 data_used: 39817216
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199ebc000/0x0/0x1bfc00000, data 0x4b26496/0x4d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:41.274798+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:42.275022+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199ebc000/0x0/0x1bfc00000, data 0x4b26496/0x4d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515678208 unmapped: 108085248 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:43.275179+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.455636024s of 14.711808205s, submitted: 50
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515694592 unmapped: 108068864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:44.275360+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587734ec800 session 0x558775cdad20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x199eb7000/0x0/0x1bfc00000, data 0x4b2b496/0x4d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75c800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d6c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587731d6c00 session 0x558775841e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877a75c800 session 0x558775cdb860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b03e000/0x0/0x1bfc00000, data 0x39a34b9/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510263296 unmapped: 113500160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db83c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63dc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:45.275501+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5451587 data_alloc: 234881024 data_used: 27303936
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b03e000/0x0/0x1bfc00000, data 0x39a34b9/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510263296 unmapped: 113500160 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:46.275621+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510271488 unmapped: 113491968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:47.275849+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510271488 unmapped: 113491968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:48.276051+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510271488 unmapped: 113491968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:49.276262+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510271488 unmapped: 113491968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b03e000/0x0/0x1bfc00000, data 0x39a34b9/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:50.276437+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5477667 data_alloc: 234881024 data_used: 30978048
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877435a800 session 0x558773204000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587770d9400 session 0x558775c62f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:51.276616+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877e6ca800 session 0x55877450e780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b03e000/0x0/0x1bfc00000, data 0x39a34b9/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:52.276832+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:53.277070+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:54.277284+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:55.277476+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5184959 data_alloc: 218103808 data_used: 16277504
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c8d6000/0x0/0x1bfc00000, data 0x210b4b9/0x2338000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:56.277692+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 505954304 unmapped: 117809152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:57.277971+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 506175488 unmapped: 117587968 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.179559708s of 14.330222130s, submitted: 50
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:58.278204+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509173760 unmapped: 114589696 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:48:59.278418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:00.278626+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5262755 data_alloc: 218103808 data_used: 16470016
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c012000/0x0/0x1bfc00000, data 0x29cf4b9/0x2bfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:01.278871+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:02.279089+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:03.279278+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c012000/0x0/0x1bfc00000, data 0x29cf4b9/0x2bfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:04.279513+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:05.279733+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5261907 data_alloc: 218103808 data_used: 16470016
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:06.279898+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:07.280100+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:08.280237+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877db83c00 session 0x558775d090e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878d63dc00 session 0x558775f9e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:09.280428+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c00d000/0x0/0x1bfc00000, data 0x29d44b9/0x2c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c00d000/0x0/0x1bfc00000, data 0x29d44b9/0x2c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:10.280619+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5262171 data_alloc: 218103808 data_used: 16470016
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c00d000/0x0/0x1bfc00000, data 0x29d44b9/0x2c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:11.280852+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:12.280984+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:13.281109+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c00d000/0x0/0x1bfc00000, data 0x29d44b9/0x2c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.299812317s of 15.500900269s, submitted: 57
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587750c4000 session 0x55877b487e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509140992 unmapped: 114622464 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:14.281288+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c031000/0x0/0x1bfc00000, data 0x29b0496/0x2bdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509140992 unmapped: 114622464 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:15.281517+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5254599 data_alloc: 218103808 data_used: 16355328
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c031000/0x0/0x1bfc00000, data 0x29b0496/0x2bdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509140992 unmapped: 114622464 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:16.281741+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509140992 unmapped: 114622464 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:17.281959+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878d63d800 session 0x5587736ca1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131000 session 0x558779c430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db83800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877db83800 session 0x558775cda960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dcc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587770dcc00 session 0x55877b4865a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509140992 unmapped: 114622464 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:18.282117+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587750c4000 session 0x558773205a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c031000/0x0/0x1bfc00000, data 0x29b0496/0x2bdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131000 session 0x558775f9f4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd84000/0x0/0x1bfc00000, data 0x2c5d4f8/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509157376 unmapped: 114606080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:19.282393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509157376 unmapped: 114606080 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:20.282588+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5284920 data_alloc: 218103808 data_used: 16355328
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:21.282809+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:22.283043+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:23.283303+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd84000/0x0/0x1bfc00000, data 0x2c5d4f8/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:24.283569+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:25.283762+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5284920 data_alloc: 218103808 data_used: 16355328
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877e6ca400 session 0x558775f861e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:26.283937+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a8400 session 0x5587735ce960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd84000/0x0/0x1bfc00000, data 0x2c5d4f8/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:27.284183+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 114597888 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775130400 session 0x5587744381e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.095935822s of 14.240704536s, submitted: 44
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587750c4000 session 0x558775f9f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:28.284376+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509100032 unmapped: 114663424 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:29.284564+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509100032 unmapped: 114663424 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131000 session 0x558775d09860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd5f000/0x0/0x1bfc00000, data 0x2c8151b/0x2eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:30.284731+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509116416 unmapped: 114647040 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5295854 data_alloc: 218103808 data_used: 17084416
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd5f000/0x0/0x1bfc00000, data 0x2c8151b/0x2eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:31.285046+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d216800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:32.285198+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:33.285392+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:34.285636+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:35.285764+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5309294 data_alloc: 234881024 data_used: 18944000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd5f000/0x0/0x1bfc00000, data 0x2c8151b/0x2eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:36.285887+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:37.286034+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:38.286141+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:39.286259+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd5f000/0x0/0x1bfc00000, data 0x2c8151b/0x2eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:40.286364+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5309294 data_alloc: 234881024 data_used: 18944000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19bd5f000/0x0/0x1bfc00000, data 0x2c8151b/0x2eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:41.286528+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 509124608 unmapped: 114638848 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.548545837s of 13.601076126s, submitted: 13
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:42.286701+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513908736 unmapped: 109854720 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b17b000/0x0/0x1bfc00000, data 0x385f51b/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:43.286902+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:44.287077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:45.287196+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5440068 data_alloc: 234881024 data_used: 22319104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:46.287410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0e7000/0x0/0x1bfc00000, data 0x38f351b/0x3b21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:47.287612+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0e7000/0x0/0x1bfc00000, data 0x38f351b/0x3b21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:48.287775+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:49.287971+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:50.288161+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5438672 data_alloc: 234881024 data_used: 22343680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:51.288385+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515055616 unmapped: 108707840 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0ce000/0x0/0x1bfc00000, data 0x391251b/0x3b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:52.288579+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:53.288770+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.050540924s of 12.413598061s, submitted: 144
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:54.288945+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:55.289141+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5438932 data_alloc: 234881024 data_used: 22343680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:56.289354+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0c1000/0x0/0x1bfc00000, data 0x391f51b/0x3b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:57.289610+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:58.289803+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515063808 unmapped: 108699648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:49:59.289978+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:00.290172+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0c1000/0x0/0x1bfc00000, data 0x391f51b/0x3b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5440068 data_alloc: 234881024 data_used: 22368256
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:01.290383+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:02.290775+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0b3000/0x0/0x1bfc00000, data 0x392d51b/0x3b5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:03.290931+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:04.291118+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:05.291355+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587731d7800 session 0x558777956960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877d216800 session 0x558775c62b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0b3000/0x0/0x1bfc00000, data 0x392d51b/0x3b5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5448812 data_alloc: 234881024 data_used: 23760896
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.740989685s of 11.769769669s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a4000 session 0x558773204d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a8400 session 0x558774500f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877e6ca400 session 0x558779c43680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:06.291533+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515072000 unmapped: 108691456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b0d7000/0x0/0x1bfc00000, data 0x390951b/0x3b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587731d7800 session 0x558775cdb4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:07.291757+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 515088384 unmapped: 108675072 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x5587750c4000 session 0x558774470b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:08.291916+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131000 session 0x558775db9680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:09.292077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:10.292275+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141785 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:11.292444+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:12.292661+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:13.292812+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:14.292962+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:15.293164+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141785 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:16.293404+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:17.293640+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:18.293841+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:19.294019+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:20.294223+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141785 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:21.294375+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:22.294533+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:23.294730+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510574592 unmapped: 113188864 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:24.294890+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:25.295071+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141785 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:26.295232+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:27.295444+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:28.296022+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:29.296195+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:30.296389+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510582784 unmapped: 113180672 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141785 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:31.296562+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510590976 unmapped: 113172480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:32.296739+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510590976 unmapped: 113172480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:33.297370+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510590976 unmapped: 113172480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:34.297553+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510590976 unmapped: 113172480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x1c53487/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d216800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877d216800 session 0x558775cdb4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776994800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776994800 session 0x558774500f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877d217800 session 0x558773204d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:35.297717+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a5800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a5800 session 0x558775c62b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e45c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.586284637s of 29.787176132s, submitted: 59
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510590976 unmapped: 113172480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5250275 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558785e45c00 session 0x558775d09860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776994800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776994800 session 0x558775f861e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d216800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877d216800 session 0x558773205a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877d217800 session 0x55877b4865a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a5800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a5800 session 0x558775cda960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:36.297880+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 113156096 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:37.298095+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 113156096 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:38.298266+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 113156096 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c222000/0x0/0x1bfc00000, data 0x27c0497/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777a0d400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558777a0d400 session 0x5587736ca1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:39.298465+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 113156096 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776994800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776994800 session 0x55877b487e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:40.298619+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877586a400 session 0x558775f9e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 113156096 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5236519 data_alloc: 218103808 data_used: 12591104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776992400 session 0x55877450e780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:41.298767+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510763008 unmapped: 113000448 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776995c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:42.298904+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c1fc000/0x0/0x1bfc00000, data 0x27e44ca/0x2a12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510722048 unmapped: 113041408 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:43.299057+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:44.299462+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:45.299995+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5324602 data_alloc: 234881024 data_used: 24567808
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:46.300175+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c1fc000/0x0/0x1bfc00000, data 0x27e44ca/0x2a12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:47.300438+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:48.300633+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:49.300886+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:50.301075+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5324602 data_alloc: 234881024 data_used: 24567808
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:51.301245+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510689280 unmapped: 113074176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131800 session 0x55877353de00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877577bc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877577bc00 session 0x5587779565a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131800 session 0x558775d083c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877586a400 session 0x55877c2c7a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.312112808s of 16.479454041s, submitted: 23
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:52.301418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19c1fc000/0x0/0x1bfc00000, data 0x27e44ca/0x2a12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [0,1,2])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776992400 session 0x558775c3f4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510705664 unmapped: 113057792 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:53.301575+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 510705664 unmapped: 113057792 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:54.301746+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513015808 unmapped: 110747648 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b094000/0x0/0x1bfc00000, data 0x39464ca/0x3b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:55.301897+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513024000 unmapped: 110739456 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5476364 data_alloc: 234881024 data_used: 25473024
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:56.302073+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513032192 unmapped: 110731264 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:57.302253+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513032192 unmapped: 110731264 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:58.302404+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513032192 unmapped: 110731264 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b060000/0x0/0x1bfc00000, data 0x39784ca/0x3ba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:50:59.302614+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513032192 unmapped: 110731264 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877db82400 session 0x558773205680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:00.302777+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b068000/0x0/0x1bfc00000, data 0x39784ca/0x3ba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513040384 unmapped: 110723072 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5479521 data_alloc: 234881024 data_used: 25440256
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777510800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:01.302963+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 513048576 unmapped: 110714880 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:02.303146+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514588672 unmapped: 109174784 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:03.303366+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514588672 unmapped: 109174784 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b065000/0x0/0x1bfc00000, data 0x397b4ca/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558776995c00 session 0x558775cdb860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877e6c8c00 session 0x558775db8b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:04.303520+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.491988182s of 12.615092278s, submitted: 105
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514588672 unmapped: 109174784 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:05.303687+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131800 session 0x5587758dfc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5295506 data_alloc: 234881024 data_used: 21913600
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:06.303871+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:07.304109+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:08.304288+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b2f5000/0x0/0x1bfc00000, data 0x254d487/0x2778000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19b2f5000/0x0/0x1bfc00000, data 0x254d487/0x2778000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:09.304471+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:10.304660+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5295506 data_alloc: 234881024 data_used: 21913600
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:11.304884+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:12.305040+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 514605056 unmapped: 109158400 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:13.305208+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 519266304 unmapped: 104497152 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:14.305409+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19adb5000/0x0/0x1bfc00000, data 0x2a8e487/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 519446528 unmapped: 104316928 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:15.306489+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.136985779s of 10.987410545s, submitted: 111
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 519446528 unmapped: 104316928 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407668 data_alloc: 234881024 data_used: 22421504
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a54b000/0x0/0x1bfc00000, data 0x32ea487/0x3515000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:16.306661+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 521576448 unmapped: 102187008 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:17.306903+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 521576448 unmapped: 102187008 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19938d000/0x0/0x1bfc00000, data 0x3300487/0x352b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:18.307065+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x19938d000/0x0/0x1bfc00000, data 0x3300487/0x352b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 521584640 unmapped: 102178816 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:19.307276+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 521584640 unmapped: 102178816 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:20.307448+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 521584640 unmapped: 102178816 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418404 data_alloc: 234881024 data_used: 22380544
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:21.307619+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:22.307826+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:23.308026+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f90000/0x0/0x1bfc00000, data 0x3303487/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:24.308192+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:25.308369+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407060 data_alloc: 234881024 data_used: 22380544
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:26.308578+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f90000/0x0/0x1bfc00000, data 0x3303487/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520921088 unmapped: 102842368 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:27.308814+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f90000/0x0/0x1bfc00000, data 0x3303487/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:28.308920+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:29.309095+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:30.309300+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407060 data_alloc: 234881024 data_used: 22380544
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:31.309529+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:32.309701+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f90000/0x0/0x1bfc00000, data 0x3303487/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:33.309884+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520929280 unmapped: 102834176 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558785e47800 session 0x558775f86f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a9000 session 0x558775db94a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558773017000 session 0x5587758dfa40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558773017000 session 0x558775d5d2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:34.310079+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.546197891s of 18.807968140s, submitted: 36
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558775131800 session 0x558779c434a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55877e6c8c00 session 0x558774470780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x558785e47800 session 0x558773f2d0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520830976 unmapped: 102932480 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a9000 session 0x558775c3e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 ms_handle_reset con 0x55878c1a9000 session 0x558775c3ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198a6a000/0x0/0x1bfc00000, data 0x38284e9/0x3a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:35.310222+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520568832 unmapped: 103194624 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5458978 data_alloc: 234881024 data_used: 22380544
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:36.310415+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520577024 unmapped: 103186432 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:37.310585+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520609792 unmapped: 103153664 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:38.310747+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 heartbeat osd_stat(store_statfs(0x198a67000/0x0/0x1bfc00000, data 0x382a4e9/0x3a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520609792 unmapped: 103153664 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:39.310930+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 520609792 unmapped: 103153664 heap: 623763456 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775777c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d6c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 417 ms_handle_reset con 0x5587731d6c00 session 0x558775c63860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 417 ms_handle_reset con 0x558775777c00 session 0x558775cdaf00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db83000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:40.311105+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 417 ms_handle_reset con 0x55877db83000 session 0x5587740d50e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529309696 unmapped: 102154240 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 417 ms_handle_reset con 0x55878c1a8000 session 0x5587744381e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 418 ms_handle_reset con 0x558772b9fc00 session 0x558775db81e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5622047 data_alloc: 234881024 data_used: 33218560
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d6c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:41.311530+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775777c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529342464 unmapped: 102121472 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x5587731d6c00 session 0x5587779572c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:42.311686+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530685952 unmapped: 100777984 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db83000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55877db83000 session 0x558779dfde00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55878c1a9000 session 0x55877b487c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55878d63d400 session 0x5587740d5860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:43.311835+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 heartbeat osd_stat(store_statfs(0x197ab2000/0x0/0x1bfc00000, data 0x47d9a87/0x4a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 heartbeat osd_stat(store_statfs(0x197ab2000/0x0/0x1bfc00000, data 0x47d9a87/0x4a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:44.312003+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:45.312206+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5657797 data_alloc: 234881024 data_used: 35184640
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:46.312363+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.155641556s of 12.725331306s, submitted: 134
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:47.312576+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530702336 unmapped: 100761600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:48.312721+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 73K writes, 289K keys, 73K commit groups, 1.0 writes per commit group, ingest: 0.29 GB, 0.05 MB/s
                                           Cumulative WAL: 73K writes, 27K syncs, 2.69 writes per sync, written: 0.29 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5186 writes, 21K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 26.25 MB, 0.04 MB/s
                                           Interval WAL: 5186 writes, 1999 syncs, 2.59 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 100728832 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x197ab0000/0x0/0x1bfc00000, data 0x47db5c6/0x4a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:49.312910+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558773616b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770d6400 session 0x558778aa3680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558778312400 session 0x558774471c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 100728832 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558779334c00 session 0x558775841680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d6400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587731d6400 session 0x558775f9f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x5587735ac5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:50.313059+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558778aa23c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770d6400 session 0x558774471a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558778312400 session 0x558778aa2960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5662233 data_alloc: 234881024 data_used: 35192832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:51.313234+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x197aaf000/0x0/0x1bfc00000, data 0x47db638/0x4a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:52.313394+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:53.313590+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535273472 unmapped: 96190464 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:54.313743+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196e3c000/0x0/0x1bfc00000, data 0x5448638/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 95690752 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558775d09e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:55.313902+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558775c3fa40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 95690752 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5771441 data_alloc: 251658240 data_used: 36446208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:56.314034+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x558775f9f860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558779c432c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d8a000/0x0/0x1bfc00000, data 0x54fa638/0x572e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 95543296 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d66000/0x0/0x1bfc00000, data 0x551e638/0x5752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:57.314182+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.980627060s of 10.456892014s, submitted: 174
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535937024 unmapped: 95526912 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:58.314403+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535953408 unmapped: 95510528 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:59.314583+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d66000/0x0/0x1bfc00000, data 0x551e638/0x5752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:00.314725+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5838057 data_alloc: 251658240 data_used: 45514752
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:01.314877+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d48000/0x0/0x1bfc00000, data 0x5542638/0x5776000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:02.315048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:03.315253+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:04.315421+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:05.315553+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d48000/0x0/0x1bfc00000, data 0x5542638/0x5776000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5838057 data_alloc: 251658240 data_used: 45514752
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:06.315694+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:07.315931+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:08.316124+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:09.316281+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d3e000/0x0/0x1bfc00000, data 0x554c638/0x5780000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:10.316438+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.751883507s of 12.793128967s, submitted: 10
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 88932352 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5897192 data_alloc: 251658240 data_used: 47095808
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:11.316611+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542752768 unmapped: 88711168 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:12.316761+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542752768 unmapped: 88711168 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:13.316952+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x55878c1a9000 session 0x5587779574a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dbc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770dbc00 session 0x558775d5c780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x5587745003c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196978000/0x0/0x1bfc00000, data 0x5912638/0x5b46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558773229680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 554729472 unmapped: 80936960 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558773233a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:14.317116+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540835840 unmapped: 94830592 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:15.317262+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540835840 unmapped: 94830592 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6023022 data_alloc: 251658240 data_used: 47476736
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:16.317421+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:17.317669+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x55878c1a4400 session 0x558773205860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:18.317829+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fd800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587754fd800 session 0x55877450e5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195892000/0x0/0x1bfc00000, data 0x69f8638/0x6c2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:19.318021+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x558778aa32c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x55877353de00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:20.318247+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6025777 data_alloc: 251658240 data_used: 47501312
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:21.318438+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547012608 unmapped: 88653824 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.476983070s of 11.754078865s, submitted: 40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:22.318597+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 87195648 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:23.318757+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195890000/0x0/0x1bfc00000, data 0x69f866b/0x6c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548544512 unmapped: 87121920 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:24.318919+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 87040000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:25.319146+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6121297 data_alloc: 268435456 data_used: 57442304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:26.319352+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x19588f000/0x0/0x1bfc00000, data 0x69f866b/0x6c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:27.319585+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:28.319818+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:29.320048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 86990848 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:30.320212+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 86990848 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6121405 data_alloc: 268435456 data_used: 57438208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:31.320365+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195888000/0x0/0x1bfc00000, data 0x69fd66b/0x6c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 86982656 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:32.320569+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.082427025s of 10.630439758s, submitted: 254
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549421056 unmapped: 86245376 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:33.320787+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 86016000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:34.320972+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:35.321132+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x194974000/0x0/0x1bfc00000, data 0x790c66b/0x7b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6242787 data_alloc: 268435456 data_used: 57643008
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:36.321296+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:37.321572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551149568 unmapped: 84516864 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:38.321750+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dac00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551149568 unmapped: 84516864 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770dac00 session 0x5587758ded20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:39.321962+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x194974000/0x0/0x1bfc00000, data 0x790c66b/0x7b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558775890c00 session 0x55877b486d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558785e47000 session 0x558775f9e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x5587770da800 session 0x558775f865a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558833664 unmapped: 76832768 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558772b9e800 session 0x558775d08960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:40.322218+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 76800000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6401963 data_alloc: 268435456 data_used: 62681088
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 422 ms_handle_reset con 0x5587750d4400 session 0x5587758de1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:41.322418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556187648 unmapped: 79478784 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772a0e000 session 0x55877c2c7a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 heartbeat osd_stat(store_statfs(0x19361f000/0x0/0x1bfc00000, data 0x8c62bf6/0x8e9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:42.322581+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558779337800 session 0x558773f2d0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558776fd5c00 session 0x558779c43c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.595304489s of 10.217897415s, submitted: 124
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550486016 unmapped: 85180416 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558779337800 session 0x558775d08b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:43.322719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772a0e000 session 0x5587736ca1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772b9e800 session 0x5587744394a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x5587750d4400 session 0x558773f2cb40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:44.322884+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:45.323109+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x558772a0e000 session 0x558775d090e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5913003 data_alloc: 251658240 data_used: 49061888
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:46.323285+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x5587770d6400 session 0x558774471e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x558778312400 session 0x55877450e3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x55877435a400 session 0x55877317e3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x558775777c00 session 0x558775db9c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:47.323526+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 heartbeat osd_stat(store_statfs(0x196960000/0x0/0x1bfc00000, data 0x59233d7/0x5b5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x558772a0e000 session 0x558775c621e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x55877435a400 session 0x558775d5cd20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:48.323689+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 heartbeat osd_stat(store_statfs(0x197c18000/0x0/0x1bfc00000, data 0x429b2e0/0x44d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:49.323845+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:50.323969+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5694943 data_alloc: 251658240 data_used: 41340928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:51.324182+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:52.324338+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.754790306s of 10.184646606s, submitted: 160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:53.324506+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558775131c00 session 0x558775f86d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541745152 unmapped: 93921280 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:54.324680+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x198f6b000/0x0/0x1bfc00000, data 0x3318acc/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x198f6b000/0x0/0x1bfc00000, data 0x3318acc/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541745152 unmapped: 93921280 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558777510800 session 0x558775c62000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x5587770d6c00 session 0x558773ede1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:55.324858+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533651456 unmapped: 102014976 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5239563 data_alloc: 218103808 data_used: 10526720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:56.325011+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x1c66acc/0x1e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 102055936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:57.325249+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558772a0e000 session 0x558775c62b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:58.325432+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:59.325622+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x1c66acc/0x1e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:00.325786+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5238971 data_alloc: 218103808 data_used: 10526720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:01.325954+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:02.326249+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:03.326421+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:04.326594+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:05.326765+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:06.326945+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:07.327144+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:08.327353+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:09.327496+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:10.327666+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:11.327860+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:12.328056+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:13.328263+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:14.328562+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:15.328793+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:16.328972+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:17.329198+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:18.329393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:19.329589+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:20.329775+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:21.329942+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:22.330106+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558775f9ef00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775bd0c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775bd0c00 session 0x558775d5c3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877b4874a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e47c00 session 0x5587744b7a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.615688324s of 29.984188080s, submitted: 76
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:23.330276+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:24.330420+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772a0e000 session 0x5587736814a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:25.330588+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:26.330805+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5339749 data_alloc: 218103808 data_used: 10534912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:27.331033+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7b000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7b000 session 0x55877c2c70e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:28.331197+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750c5c00 session 0x5587744b7c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:29.331416+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db80c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877db80c00 session 0x55877450e780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775bd1c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775bd1c00 session 0x558775c3e3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534093824 unmapped: 101572608 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:30.331530+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534118400 unmapped: 101548032 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:31.331668+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348962 data_alloc: 218103808 data_used: 11583488
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:32.331898+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:33.332149+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:34.332391+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:35.332555+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:36.332734+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5422402 data_alloc: 234881024 data_used: 21983232
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:37.332987+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:38.333213+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:39.333372+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:40.333510+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:41.333692+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5422402 data_alloc: 234881024 data_used: 21983232
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:42.333825+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.275669098s of 19.426235199s, submitted: 29
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 92340224 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:43.333958+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543334400 unmapped: 92332032 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:44.334128+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19872a000/0x0/0x1bfc00000, data 0x3b5160b/0x3d8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:45.334275+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:46.334367+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5597166 data_alloc: 234881024 data_used: 24129536
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:47.334539+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:48.334687+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:49.334822+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:50.334984+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:51.335142+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5587890 data_alloc: 234881024 data_used: 24129536
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:52.335302+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:53.335472+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868e000/0x0/0x1bfc00000, data 0x3bf660b/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:54.335644+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868e000/0x0/0x1bfc00000, data 0x3bf660b/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:55.335776+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:56.335942+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588210 data_alloc: 234881024 data_used: 24137728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.573825836s of 14.371501923s, submitted: 167
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:57.336177+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:58.336378+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:59.336553+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:00.336710+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:01.336902+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588266 data_alloc: 234881024 data_used: 24137728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:02.337075+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:03.337233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:04.337428+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:05.337703+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:06.337879+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588342 data_alloc: 234881024 data_used: 24137728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:07.338116+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198685000/0x0/0x1bfc00000, data 0x3bff60b/0x3e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:08.338451+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:09.338635+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:10.338830+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:11.338969+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5589462 data_alloc: 234881024 data_used: 24166400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:12.339138+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:13.339291+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.385713577s of 16.471719742s, submitted: 4
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19867b000/0x0/0x1bfc00000, data 0x3c0960b/0x3e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:14.339472+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:15.339846+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543178752 unmapped: 92487680 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198679000/0x0/0x1bfc00000, data 0x3c0a60b/0x3e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:16.340665+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5590818 data_alloc: 234881024 data_used: 24166400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:17.341865+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:18.342060+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:19.342251+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x558775c3f4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:20.342424+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ece000/0x0/0x1bfc00000, data 0x43b660b/0x45f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:21.342993+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5648916 data_alloc: 234881024 data_used: 24166400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec5000/0x0/0x1bfc00000, data 0x43bf60b/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:22.343407+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec5000/0x0/0x1bfc00000, data 0x43bf60b/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:23.344137+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:24.344369+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:25.344527+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:26.344810+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5648916 data_alloc: 234881024 data_used: 24166400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775841c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:27.345206+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.738831520s of 14.080610275s, submitted: 18
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558778312400 session 0x558775cdba40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 96018432 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:28.345540+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 96018432 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775db9a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec2000/0x0/0x1bfc00000, data 0x43c260b/0x45fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130c00 session 0x5587735ced20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:29.345815+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 95870976 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:30.345950+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 95870976 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:31.346263+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5706948 data_alloc: 234881024 data_used: 31490048
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:32.346479+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:33.346738+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:34.347019+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:35.347227+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:36.347402+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5707748 data_alloc: 234881024 data_used: 31555584
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:37.347645+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:38.347844+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359608650s of 11.381764412s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:39.348006+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:40.348233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e98000/0x0/0x1bfc00000, data 0x43ec60b/0x4626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:41.348538+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5707980 data_alloc: 234881024 data_used: 31555584
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:42.348735+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 91185152 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197acf000/0x0/0x1bfc00000, data 0x47b560b/0x49ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,10])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:43.348951+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546734080 unmapped: 92610560 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:44.349164+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546742272 unmapped: 92602368 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:45.349396+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546750464 unmapped: 92594176 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:46.349522+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5776212 data_alloc: 234881024 data_used: 31690752
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:47.349757+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:48.349932+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197798000/0x0/0x1bfc00000, data 0x4aec60b/0x4d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:49.350096+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:50.350231+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:51.350452+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197798000/0x0/0x1bfc00000, data 0x4aec60b/0x4d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.695519447s of 12.682845116s, submitted: 75
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775048 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:52.350635+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776f000/0x0/0x1bfc00000, data 0x4b1560b/0x4d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:53.350786+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:54.350919+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:55.351050+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776f000/0x0/0x1bfc00000, data 0x4b1560b/0x4d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:56.351454+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774748 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:57.351792+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:58.352012+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:59.352256+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776c000/0x0/0x1bfc00000, data 0x4b1860b/0x4d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:00.352418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:01.352625+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774280 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:02.352817+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.590047836s of 11.008296967s, submitted: 8
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547012608 unmapped: 92332032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:03.353031+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197767000/0x0/0x1bfc00000, data 0x4b1d60b/0x4d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:04.353284+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:05.353484+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197764000/0x0/0x1bfc00000, data 0x4b2060b/0x4d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:06.353630+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197764000/0x0/0x1bfc00000, data 0x4b2060b/0x4d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547028992 unmapped: 92315648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774760 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:07.353820+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547028992 unmapped: 92315648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:08.354010+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:09.354214+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:10.354416+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197761000/0x0/0x1bfc00000, data 0x4b2360b/0x4d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:11.354584+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774420 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:12.354824+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267741203s of 10.673305511s, submitted: 6
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:13.355041+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:14.355258+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19775e000/0x0/0x1bfc00000, data 0x4b2660b/0x4d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:15.355436+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:16.355656+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775044 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:17.355866+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:18.356076+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:19.356274+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:20.356448+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:21.356646+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775192 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:22.356827+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:23.356988+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:24.357136+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522049904s of 11.567011833s, submitted: 8
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:25.357305+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197755000/0x0/0x1bfc00000, data 0x4b2f60b/0x4d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:26.357555+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197755000/0x0/0x1bfc00000, data 0x4b2f60b/0x4d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775060 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:27.357843+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:28.358245+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:29.358418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:30.358534+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:31.358745+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197752000/0x0/0x1bfc00000, data 0x4b3260b/0x4d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775136 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:32.358919+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:33.359154+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:34.359392+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:35.359603+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.531800270s of 11.552363396s, submitted: 4
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:36.359832+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19774d000/0x0/0x1bfc00000, data 0x4b3760b/0x4d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775156 data_alloc: 234881024 data_used: 31694848
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:37.360048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5400 session 0x558773204960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558773205c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:38.360230+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19774d000/0x0/0x1bfc00000, data 0x4b3760b/0x4d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650400 session 0x558775db81e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:39.360442+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:40.360618+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:41.360817+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5602266 data_alloc: 234881024 data_used: 24010752
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:42.361001+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:43.361245+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19863f000/0x0/0x1bfc00000, data 0x3c4560b/0x3e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:44.361453+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:45.361628+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:46.361806+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5603238 data_alloc: 234881024 data_used: 24010752
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:47.362031+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:48.362399+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19863f000/0x0/0x1bfc00000, data 0x3c4560b/0x3e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5000 session 0x55877450e5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.228904724s of 12.636835098s, submitted: 17
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776704000 session 0x55877c2c65a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547102720 unmapped: 92241920 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:49.362554+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775f86d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543916032 unmapped: 95428608 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:50.362970+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:51.363593+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:52.364233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:53.364572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:54.365263+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:55.365688+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:56.366078+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:57.366554+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:58.366901+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:59.367065+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:00.367351+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:01.367550+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:02.367934+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:03.368228+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:04.368548+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:05.368822+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:06.369077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:07.369303+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:08.369598+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:09.369844+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:10.370068+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:11.370287+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:12.370621+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:13.370848+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:14.371041+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:15.371395+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:16.371681+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:17.371894+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:18.372120+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:19.372355+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:20.372538+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:21.372697+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:22.372830+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:23.372992+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:24.373151+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:25.373267+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:26.373485+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:27.373710+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:28.373941+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:29.374154+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:30.374361+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:31.374583+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:32.374835+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:33.375006+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:34.375213+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:35.375454+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:36.375666+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:37.375960+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:38.376186+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:39.376381+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:40.376582+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:41.376822+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:42.377021+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:43.377194+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:44.377372+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:45.377534+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:46.377760+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543997952 unmapped: 95346688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a786c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a786c00 session 0x558775d08960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130800 session 0x558775f865a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x558775f9e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d9000 session 0x55877353de00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.806312561s of 57.890869141s, submitted: 32
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775d09860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x55877317e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130800 session 0x558775d5c5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a786c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a786c00 session 0x558774501c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:47.377949+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770db000 session 0x558773616b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369337 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:48.378136+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:49.378290+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:50.378509+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199957000/0x0/0x1bfc00000, data 0x292d60b/0x2b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:51.378721+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:52.378961+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369337 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:53.379167+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877586a400 session 0x5587735ce960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199957000/0x0/0x1bfc00000, data 0x292d60b/0x2b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:54.379386+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776704c00 session 0x558777957860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:55.379527+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x558775d5c000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:56.379658+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c9000 session 0x55877317fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878874779s of 10.056379318s, submitted: 37
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:57.379853+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372351 data_alloc: 218103808 data_used: 10342400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:58.380024+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:59.380153+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199956000/0x0/0x1bfc00000, data 0x292d62e/0x2b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:00.380291+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:01.380458+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:02.380614+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465151 data_alloc: 234881024 data_used: 23355392
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d6000 session 0x5587735ade00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558775d092c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:03.380740+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533118976 unmapped: 110428160 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x55877450f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:04.380941+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:05.381094+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:06.381296+0000)
Nov 29 09:17:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:07.381525+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5278201 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:08.381740+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:09.381905+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:10.382081+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:11.382304+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:12.382550+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5278201 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:13.382729+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:14.382987+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877c2c74a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d4800 session 0x5587758df0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5000 session 0x558778aa3a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:15.383148+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558774438b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.406515121s of 18.591299057s, submitted: 50
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533094400 unmapped: 110452736 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x558775c62780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558773e430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a1be000/0x0/0x1bfc00000, data 0x20c561b/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d4800 session 0x558777957a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774354000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774354000 session 0x5587732043c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558775cdb680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:16.383370+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:17.383575+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399349 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:18.383746+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:19.383969+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:20.384169+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199767000/0x0/0x1bfc00000, data 0x2b1c61b/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877b486b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:21.384372+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6ca400 session 0x558775d085a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199767000/0x0/0x1bfc00000, data 0x2b1c61b/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:22.384553+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399349 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6cb800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6cb800 session 0x558775f9f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:23.384706+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778313000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558778313000 session 0x558773205860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:24.384895+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 109813760 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:25.385059+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:26.385202+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199741000/0x0/0x1bfc00000, data 0x2b4064e/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:27.385525+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5516089 data_alloc: 234881024 data_used: 25350144
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199741000/0x0/0x1bfc00000, data 0x2b4064e/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:28.385675+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558773e430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.425266266s of 13.606207848s, submitted: 36
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877450fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:29.386628+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d4400 session 0x5587745005a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:30.387257+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:31.387572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:32.388108+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:33.388568+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:34.388776+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:35.388959+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:36.389298+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:37.389683+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:38.389868+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:39.390043+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:40.390338+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:41.390571+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:42.390809+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:43.391000+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:44.391195+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:45.391368+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:46.391579+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:47.391790+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:48.391991+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:49.392157+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:50.392408+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:51.392630+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:52.392805+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:53.393003+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:54.393164+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:55.393505+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:56.393691+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:57.393887+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:58.394118+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:59.394278+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:00.394447+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:01.394616+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:02.394797+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:03.394980+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524877824 unmapped: 118669312 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:04.395180+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777510800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777510800 session 0x558773637c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7a000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7a000 session 0x558775d5d0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877b4861e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524877824 unmapped: 118669312 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d4400 session 0x558775f863c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.514713287s of 35.804637909s, submitted: 38
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x558775d090e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:05.395364+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:06.395512+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:07.395718+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5350972 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:08.395886+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1aa400 session 0x558775d5c960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:09.396004+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c4000 session 0x5587735acf00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:10.396202+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8800 session 0x558775d5de00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775841e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:11.396393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:12.396561+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5351292 data_alloc: 218103808 data_used: 10371072
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:13.396705+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:14.396876+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:15.397029+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:16.397217+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:17.398104+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5395772 data_alloc: 234881024 data_used: 16703488
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:18.398290+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:19.398552+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:20.398783+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:21.398996+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:22.399195+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5395772 data_alloc: 234881024 data_used: 16703488
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:23.399409+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:24.399517+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.604757309s of 19.673570633s, submitted: 13
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528121856 unmapped: 115425280 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199a4a000/0x0/0x1bfc00000, data 0x283260b/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:25.440504+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528293888 unmapped: 115253248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:26.440644+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:27.440847+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469034 data_alloc: 234881024 data_used: 17444864
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:28.441003+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:29.441200+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:30.441539+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:31.443964+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:32.444446+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469050 data_alloc: 234881024 data_used: 17444864
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:33.444640+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:34.444798+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:35.445224+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:36.445423+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:37.445709+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470010 data_alloc: 234881024 data_used: 17469440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:38.445961+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:39.446198+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:40.446442+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:41.446676+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x5587732332c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:42.446904+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.278804779s of 18.051580429s, submitted: 51
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x5587735125a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:43.447068+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:44.447233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:45.447431+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:46.447628+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:47.447832+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:48.447978+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:49.448137+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:50.448363+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:51.448512+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:52.448673+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:53.448865+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:54.449003+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:55.449138+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:56.449295+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:57.449527+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:58.449764+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:59.449942+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:00.450107+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:01.450306+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:02.450511+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:03.450723+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:04.450906+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:05.451069+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:06.451199+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:07.451366+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:08.451545+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:09.451751+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:10.451914+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:11.452127+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:12.452270+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2873816643' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:13.452492+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:14.452702+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:15.452868+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:16.453008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:17.453451+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:18.453603+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:19.453856+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6cb800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6cb800 session 0x5587735ac960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:20.454059+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558775db94a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x55877317e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775db81e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.154705048s of 38.187271118s, submitted: 12
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527671296 unmapped: 115875840 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:21.454265+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20b000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8800 session 0x55877c2c65a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779337000 session 0x5587735ade00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527695872 unmapped: 115851264 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558773232960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558779c42000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:22.454486+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775d08960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5368479 data_alloc: 218103808 data_used: 10342400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:23.454684+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:24.454900+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:25.455066+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:26.455278+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e8000/0x0/0x1bfc00000, data 0x248b61b/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:27.455557+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776fd5000 session 0x5587779570e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5368479 data_alloc: 218103808 data_used: 10342400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:28.456010+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775d5dc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x5587779565a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x5587758df680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:29.456141+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527728640 unmapped: 115818496 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:30.456305+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528351232 unmapped: 115195904 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:31.456462+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529309696 unmapped: 114237440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:32.456612+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431734 data_alloc: 234881024 data_used: 18862080
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529309696 unmapped: 114237440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e6000/0x0/0x1bfc00000, data 0x248b64e/0x26c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:33.456780+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:34.456979+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:35.457135+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:36.457732+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:37.458014+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431734 data_alloc: 234881024 data_used: 18862080
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:38.459015+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e6000/0x0/0x1bfc00000, data 0x248b64e/0x26c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:39.459390+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:40.459944+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.894926071s of 20.206439972s, submitted: 27
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x55877450e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529416192 unmapped: 114130944 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x5587740d43c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a6c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a6c00 session 0x558774471a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558773513680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558774470780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:41.460483+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 531234816 unmapped: 112312320 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:42.460656+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5637342 data_alloc: 234881024 data_used: 19996672
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 111411200 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:43.461000+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:44.461411+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198035000/0x0/0x1bfc00000, data 0x3e2d6b0/0x406b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x558773edf860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:45.461792+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x55877b486000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:46.462140+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db80c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877db80c00 session 0x5587758df860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587758df4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198035000/0x0/0x1bfc00000, data 0x3e2d6b0/0x406b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 111050752 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:47.462512+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650905 data_alloc: 234881024 data_used: 19963904
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 111050752 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:48.462729+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536240128 unmapped: 107307008 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:49.462954+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:50.463178+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:51.463400+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:52.463997+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801e000/0x0/0x1bfc00000, data 0x3e516c0/0x4090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741305 data_alloc: 234881024 data_used: 32706560
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:53.464246+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:54.464479+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:55.464660+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:56.465115+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801e000/0x0/0x1bfc00000, data 0x3e516c0/0x4090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:57.465511+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741305 data_alloc: 234881024 data_used: 32706560
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:58.465875+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:59.466167+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:00.466400+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.909603119s of 19.408382416s, submitted: 140
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 104980480 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:01.466740+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197895000/0x0/0x1bfc00000, data 0x45da6c0/0x4819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538615808 unmapped: 104931328 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:02.466932+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822965 data_alloc: 234881024 data_used: 32972800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:03.467235+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:04.467386+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:05.467629+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:06.467818+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197808000/0x0/0x1bfc00000, data 0x46676c0/0x48a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:07.468092+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197808000/0x0/0x1bfc00000, data 0x46676c0/0x48a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822965 data_alloc: 234881024 data_used: 32972800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:08.468202+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:09.468294+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1977e7000/0x0/0x1bfc00000, data 0x46886c0/0x48c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:10.468457+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:11.468588+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:12.468760+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1977e7000/0x0/0x1bfc00000, data 0x46886c0/0x48c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5818249 data_alloc: 234881024 data_used: 32985088
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:13.468945+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.360217094s of 13.718045235s, submitted: 81
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:14.469149+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:15.469446+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558774501c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x558775c632c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x558773f2cf00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:16.469592+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19993e000/0x0/0x1bfc00000, data 0x31d664e/0x3413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:17.469749+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569054 data_alloc: 234881024 data_used: 19968000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:18.469949+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:19.470127+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19993e000/0x0/0x1bfc00000, data 0x31d664e/0x3413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:20.470269+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:21.470359+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:22.470471+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776fd5000 session 0x55877317f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775d092c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775cdad20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:23.470572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:24.470724+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:25.470882+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:26.471065+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:27.471195+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:28.471408+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:29.471607+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:30.471736+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:31.471915+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:32.472079+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:33.472233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:34.472392+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:35.472562+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:36.472662+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:37.472802+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:38.472970+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:39.473166+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:40.473410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:41.473532+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:42.473765+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:43.473948+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:44.474129+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:45.474354+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:46.474499+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:47.474774+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:48.474900+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558775f9f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334400 session 0x55877317ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75d800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a75d800 session 0x558775d5c5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587740d50e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.351760864s of 34.652317047s, submitted: 103
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558778aa2960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x55877450e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334400 session 0x55877450f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877353dc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775c63a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:49.475066+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:50.475279+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:51.475518+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:52.475719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x5587779570e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a971000/0x0/0x1bfc00000, data 0x254261b/0x277d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:53.475870+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5400387 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75d000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a75d000 session 0x558773637c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a7c00 session 0x558775f863c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:54.476086+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a5c00 session 0x558775f9f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534552576 unmapped: 112672768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:55.476267+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534552576 unmapped: 112672768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:56.476451+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:57.476719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:58.476917+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470660 data_alloc: 218103808 data_used: 19628032
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:59.477048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:00.477214+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:01.477392+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:02.477596+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:03.477849+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470660 data_alloc: 218103808 data_used: 19628032
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:04.478013+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:05.478198+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:06.478353+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:07.478546+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.669998169s of 18.780950546s, submitted: 15
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:08.478715+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a280000/0x0/0x1bfc00000, data 0x2c3361b/0x2e6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5530192 data_alloc: 218103808 data_used: 19800064
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535822336 unmapped: 111403008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:09.479015+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:10.479210+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a24f000/0x0/0x1bfc00000, data 0x2c6461b/0x2e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:11.479410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:12.479620+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:13.479790+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543540 data_alloc: 218103808 data_used: 19800064
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:14.479951+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:15.480282+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:16.480374+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a24c000/0x0/0x1bfc00000, data 0x2c6761b/0x2ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:17.480608+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:18.480754+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540404 data_alloc: 218103808 data_used: 19800064
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:19.480914+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587732332c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.474609375s of 12.674955368s, submitted: 53
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558773e42000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:20.481095+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535838720 unmapped: 111386624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d5c00 session 0x5587758412c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:21.481588+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:22.481758+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:23.481944+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:24.482096+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:25.482379+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:26.482590+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:27.482747+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:28.482885+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:29.483001+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:30.483138+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:31.483363+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:32.483632+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:33.483762+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:34.483888+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:35.484094+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:36.484251+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:37.484484+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:38.484672+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:39.484820+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:40.485011+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:41.485218+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:42.485364+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:43.485461+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:44.485611+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:45.485764+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:46.485938+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:47.486204+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:48.486380+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 302K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 28K syncs, 2.68 writes per sync, written: 0.30 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3139 writes, 12K keys, 3139 commit groups, 1.0 writes per commit group, ingest: 12.34 MB, 0.02 MB/s
                                           Interval WAL: 3139 writes, 1258 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:49.486532+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:50.486690+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x558775cdb0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d174000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878d174000 session 0x5587740770e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775d5d860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d5c00 session 0x558773f2c3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.497575760s of 30.588603973s, submitted: 30
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775d5c780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x55877317f680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734da800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587734da800 session 0x558775d5c960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775db8d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734da800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587734da800 session 0x558775c3fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:51.486872+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:52.487006+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:53.487188+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462668 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:54.487398+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:55.487563+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:56.487757+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:57.487995+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441f400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441f400 session 0x55877450e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:58.488176+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462668 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775d5da40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:59.488433+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779336400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779336400 session 0x558779dfde00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558773e430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:00.488673+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:01.488840+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533430272 unmapped: 113795072 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:02.488985+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:03.489160+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x558775f9ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:04.489348+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:05.489612+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:06.489811+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:07.490043+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:08.490265+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:09.490475+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:10.490653+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:11.490788+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:12.490952+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:13.491145+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:14.491301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750c5c00 session 0x558773e42000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:15.491478+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:16.491634+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:17.491831+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:18.492034+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:19.492236+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:20.492552+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:21.492809+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x55877317fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:22.493000+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.803211212s of 31.928565979s, submitted: 23
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533561344 unmapped: 113664000 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:23.493162+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533561344 unmapped: 113664000 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770db800 session 0x558778aa3c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:24.493368+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533626880 unmapped: 113598464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:25.493540+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533692416 unmapped: 113532928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:26.493783+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:27.494063+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:28.494345+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:29.494557+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:30.494804+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:31.495016+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:32.495234+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:33.495562+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:34.495708+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:35.495888+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:36.496062+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:37.496247+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:38.496398+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:39.496573+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:40.496743+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:41.496932+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:42.497092+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:43.497233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:44.497389+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:45.497562+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:46.497734+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:47.497958+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:48.498158+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:49.498333+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:50.498494+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:51.498683+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:52.498874+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:53.499062+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:54.499230+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 113508352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:55.499394+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 113508352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:56.499530+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:57.499720+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:58.499903+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:59.500091+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:00.500234+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:01.500380+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:02.500542+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:03.500685+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:04.500831+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:05.501000+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:06.501227+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:07.501499+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:08.501704+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:09.501909+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:10.502092+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.346660614s of 48.235324860s, submitted: 257
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c9000 session 0x558775c62d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779337c00 session 0x5587736361e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8400 session 0x558778aa2960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:11.502359+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131800 session 0x5587744394a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131400 session 0x558778aa34a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:12.502537+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:13.502762+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406356 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:14.502986+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:15.503150+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:16.503390+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878d63d000 session 0x5587779572c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:17.503630+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x558773ede780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534134784 unmapped: 113090560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:18.503777+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406356 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534134784 unmapped: 113090560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587768f4000 session 0x55877b486780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:19.503988+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x558775db9860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:20.504192+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:21.504416+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779335000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.984196663s of 10.797243118s, submitted: 52
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:22.504856+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 112967680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:23.505056+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:24.505827+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:25.506951+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:26.507127+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:27.507782+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:28.508179+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:29.508413+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:30.508602+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:31.509038+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:32.509595+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:33.509816+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:34.510006+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.596837044s of 12.842131615s, submitted: 2
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535461888 unmapped: 111763456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a7e4000/0x0/0x1bfc00000, data 0x26ce67d/0x290a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:35.510236+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536133632 unmapped: 111091712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:36.510448+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 539967488 unmapped: 107257856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:37.510738+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 109469696 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a1e5000/0x0/0x1bfc00000, data 0x2ccd67d/0x2f09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:38.510967+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543217 data_alloc: 218103808 data_used: 16642048
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 109469696 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:39.511108+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537821184 unmapped: 109404160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:40.511238+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537821184 unmapped: 109404160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:41.511403+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a13b000/0x0/0x1bfc00000, data 0x2d7767d/0x2fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:42.511578+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:43.511749+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5560905 data_alloc: 218103808 data_used: 16719872
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:44.512048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.149740696s of 10.578299522s, submitted: 130
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a13b000/0x0/0x1bfc00000, data 0x2d7767d/0x2fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:45.512239+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:46.512444+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:47.512720+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:48.512937+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554473 data_alloc: 218103808 data_used: 16723968
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:49.513176+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:50.513407+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:51.513610+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:52.513849+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:53.514027+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554473 data_alloc: 218103808 data_used: 16723968
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:54.514157+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:55.514418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:56.514794+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.785726547s of 12.032221794s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:57.515020+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:58.515219+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554341 data_alloc: 218103808 data_used: 16723968
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:59.515747+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:00.515882+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:01.516041+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:02.516179+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779335000 session 0x558773204d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x558775cda780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:03.516346+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554265 data_alloc: 218103808 data_used: 16736256
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:04.516524+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:05.516732+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:06.516951+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.316456318s of 10.089038849s, submitted: 18
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:07.517145+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:08.517901+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554265 data_alloc: 218103808 data_used: 16736256
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:09.518111+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:10.518269+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:11.518441+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:12.518551+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b220000/0x0/0x1bfc00000, data 0x1c9261b/0x1ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:13.518786+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373704 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:14.519019+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877450f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:15.519225+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:16.519449+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:17.519687+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:18.519861+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:19.520134+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:20.520359+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:21.520558+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:22.520743+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:23.520964+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:24.521167+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:25.521383+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:26.521581+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:27.521767+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:28.521930+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:29.522084+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:30.522261+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:31.522439+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:32.522600+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:33.522751+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:34.522879+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:35.523001+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:36.523147+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:37.523447+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:38.523690+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:39.523862+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:40.524073+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:41.524220+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:42.524417+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:43.524567+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:44.524751+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:45.524959+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:46.525285+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:47.525552+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:48.525730+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:49.525867+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:50.526037+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:51.526187+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:52.526434+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:53.526589+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:54.526741+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:55.526946+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:56.527149+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:57.527455+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:58.527736+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:59.529007+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:00.529205+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:01.529906+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:02.530091+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:03.530667+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:04.530897+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:05.531928+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:06.532476+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:07.532972+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:08.533143+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:09.533351+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:10.534445+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x558775d08f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775cda3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614b800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877614b800 session 0x5587744383c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:11.534888+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x5587758de3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.188396454s of 64.272262573s, submitted: 40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538058752 unmapped: 109166592 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775f86d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x5587744714a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x55877b486960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754ffc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754ffc00 session 0x558778aa30e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877450f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:12.535184+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:13.535385+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:14.535507+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5448467 data_alloc: 218103808 data_used: 10338304
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775db9860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x55877b486780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x558773ede780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7b800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7b800 session 0x558778aa34a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a9a8000/0x0/0x1bfc00000, data 0x250b61b/0x2746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,1,2])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x5587744394a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558773e42000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x558773e430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558779dfde00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e46400 session 0x55877450e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:15.538056+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558775db8d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538238976 unmapped: 108986368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x558775d5c960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:16.538285+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538238976 unmapped: 108986368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x55877317f680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775d5c780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:17.538584+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541425664 unmapped: 105799680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x5587736165a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558775f9f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558775db8960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:18.538720+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x558777957a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538312704 unmapped: 108912640 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877353cf00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x558773513680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:19.538843+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5735766 data_alloc: 218103808 data_used: 16179200
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541073408 unmapped: 106151936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0f000/0x0/0x1bfc00000, data 0x44a06ff/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0f000/0x0/0x1bfc00000, data 0x44a06ff/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:20.539065+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541237248 unmapped: 105988096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6ca800 session 0x55877b486b40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:21.539391+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541237248 unmapped: 105988096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44c00 session 0x558774470780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775d092c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.374307632s of 10.853147507s, submitted: 104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:22.539713+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545087488 unmapped: 102137856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x55877317ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:23.539928+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550985728 unmapped: 96239616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:24.540220+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5876919 data_alloc: 234881024 data_used: 33325056
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550985728 unmapped: 96239616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586b000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:25.540385+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556343296 unmapped: 90882048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:26.540597+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 557203456 unmapped: 90021888 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877586b000 session 0x558774471e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8c00 session 0x558775f9e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:27.540845+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558260224 unmapped: 88965120 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:28.541008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x55877450f2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558268416 unmapped: 88956928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:29.541257+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5746145 data_alloc: 234881024 data_used: 29536256
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558268416 unmapped: 88956928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:30.541560+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 561299456 unmapped: 85925888 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:31.541698+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 87154688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19922c000/0x0/0x1bfc00000, data 0x3c7c69d/0x3eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x5587758de960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x558773ede1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:32.541876+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.121753693s of 10.037456512s, submitted: 94
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a800 session 0x558775f86f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:33.542099+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1995f0000/0x0/0x1bfc00000, data 0x2b2861b/0x2d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:34.542352+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5595568 data_alloc: 218103808 data_used: 19271680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1995f0000/0x0/0x1bfc00000, data 0x2b2861b/0x2d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7400 session 0x5587758412c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7000 session 0x558773204960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:35.542586+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x5587758def00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:36.542793+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:37.543081+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:38.543291+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:39.543520+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:40.543713+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:41.543936+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:42.544107+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:43.544354+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:44.544569+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:45.544788+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:46.544991+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:47.545194+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:48.545362+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:49.545554+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:50.545767+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:51.545975+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:52.546151+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:53.546432+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:54.546631+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:55.546819+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:56.547010+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:57.547239+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:58.547422+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:59.547611+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:00.547737+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:01.547944+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:02.548140+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:03.548350+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:04.548509+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:05.548708+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:06.548920+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:07.549170+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:08.549421+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:09.549622+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:10.549872+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:11.550029+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:12.550203+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:13.550419+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:14.550675+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:15.550934+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:16.551069+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:17.551304+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:18.551498+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:19.551659+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:20.551820+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:21.551990+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:22.552135+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:23.552345+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:24.552493+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:25.552747+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:26.552920+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:27.553098+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:28.553302+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:29.553554+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:30.553688+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:31.553811+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:32.554013+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:33.554147+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:34.554294+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:35.554463+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:36.554628+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:37.554804+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:38.555011+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:39.555161+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:40.555400+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:41.555580+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:42.555754+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:43.555927+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:44.556094+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:45.556399+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:46.556549+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:47.556756+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:48.556984+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:49.557171+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:50.557400+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:51.557549+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:52.557723+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:53.557908+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130000 session 0x558773f2cb40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:54.558163+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:55.558384+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:56.558598+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:57.559059+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:58.559292+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:59.559547+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:00.559680+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:01.559929+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:02.560196+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:03.560413+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:04.560561+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:05.560727+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:06.560913+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:07.561367+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:08.561685+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:09.561832+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:10.562511+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:11.563290+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:12.563650+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:13.564257+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:14.564452+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fc800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 101.868850708s of 102.070037842s, submitted: 66
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409627 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fc800 session 0x5587736cb680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:15.564790+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:16.565056+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:17.565256+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:18.565575+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:19.565740+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409555 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:20.565918+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:21.566162+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:22.566443+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:23.566656+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7000 session 0x55877b4872c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775841a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:24.566928+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:25.567080+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:26.567295+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:27.567528+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:28.567759+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:29.567964+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:30.568128+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:31.568349+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7c00 session 0x55877450e1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe800 session 0x558775d5dc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:32.568486+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:33.568685+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:34.568895+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:35.569035+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:36.569176+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:37.569373+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:38.569567+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:39.569768+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:40.570040+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:41.570287+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:42.570450+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:43.570652+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:44.570848+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:45.571091+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:46.571380+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:47.571619+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:48.571957+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:49.572235+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:50.572563+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:51.572799+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:52.572937+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548143104 unmapped: 99082240 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:53.573140+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:54.573377+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:55.573595+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:56.573804+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:57.574015+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c4800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:58.574253+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c4800 session 0x558775f9ed20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 99065856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:59.574418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775c3e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5412011 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:00.574572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:01.574750+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:02.574938+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:03.575262+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:04.575545+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5412011 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:05.575789+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:06.576008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:07.576303+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:08.576593+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d8400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.116096497s of 54.726608276s, submitted: 15
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:09.576800+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:10.576976+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d8400 session 0x558775f863c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:11.577167+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:12.578569+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:13.578994+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-11-29T09:08:14.579601+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _finish_auth 0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:14.581048+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:15.580109+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:16.581067+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548397056 unmapped: 98828288 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:17.581423+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:18.581777+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:19.582093+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:20.582999+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:21.583461+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:22.583700+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:23.583896+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:24.584126+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:25.584272+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:26.584492+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:27.584758+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:28.585037+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:29.585301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:30.585487+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:31.585642+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:32.585956+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587768f4c00 session 0x558775f9f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:33.586118+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.762420654s of 24.256208420s, submitted: 1
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x5587745005a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:34.586279+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5487063 data_alloc: 218103808 data_used: 11325440
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:35.586516+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556703744 unmapped: 90521600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:36.586719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555196416 unmapped: 92028928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e47400 session 0x5587735ced20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x5587740d43c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:37.586931+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 98320384 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:38.587209+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 98320384 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:39.587467+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5475495 data_alloc: 218103808 data_used: 11329536
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:40.587744+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:41.587973+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:42.588242+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:43.588418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:44.588607+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5475495 data_alloc: 218103808 data_used: 11329536
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:45.588794+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:46.588932+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.426338196s of 13.454877853s, submitted: 12
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d7f5400 session 0x558775f9e960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:47.589112+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:48.589383+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:49.589628+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:50.589783+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:51.589958+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:52.590122+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:53.590302+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:54.590532+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548937728 unmapped: 98287616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:55.590686+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:56.590883+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:57.591073+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:58.591273+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:59.591459+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:00.591676+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:01.591871+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:02.592077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:03.592456+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:04.592646+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:05.592871+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:06.593076+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:07.593998+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:08.594207+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:09.594403+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:10.594607+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548970496 unmapped: 98254848 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.371900558s of 24.390153885s, submitted: 2
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:11.594768+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558773699c00 session 0x558775f9fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c8c00 session 0x558778aa32c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:12.594960+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:13.595181+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:14.595413+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:15.595575+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:16.595776+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:17.596166+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:18.596378+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:19.597768+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:20.598025+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548995072 unmapped: 98230272 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:21.598225+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:22.598604+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:23.598958+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:24.600102+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:25.600393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:26.600783+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:27.601686+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:28.601847+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c9000 session 0x5587758405a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558773699c00 session 0x558775841680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:29.602008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558774516400 session 0x558775db8780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.156299591s of 19.043857574s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d7f5400 session 0x55877b486000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:30.602176+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5488154 data_alloc: 218103808 data_used: 11337728
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549052416 unmapped: 98172928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778e68400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:31.602379+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549052416 unmapped: 98172928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:32.602565+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:33.603028+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:34.603499+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:35.603944+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547138 data_alloc: 234881024 data_used: 18997248
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c8c00 session 0x558775d085a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558778e68400 session 0x558775d5d860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:36.604251+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774517400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:37.604462+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549068800 unmapped: 98156544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:38.604869+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549068800 unmapped: 98156544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:39.605039+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:40.605337+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.602934361s of 10.160605431s, submitted: 26
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543817 data_alloc: 234881024 data_used: 18993152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558774517400 session 0x558779c42f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:41.605652+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:42.605826+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:43.606081+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:44.606280+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:45.606418+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543225 data_alloc: 234881024 data_used: 18993152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:46.606579+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:47.606742+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:48.606981+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:49.607205+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:50.607513+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543357 data_alloc: 234881024 data_used: 18993152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.982504845s of 10.244502068s, submitted: 4
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d217800 session 0x558773205860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:51.607764+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:52.607928+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x5587770db400 session 0x558775d5c3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:53.608164+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x1994a4000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:54.608431+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55878c1aa000 session 0x558775d08d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614a800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:55.608589+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5541782 data_alloc: 234881024 data_used: 18989056
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 98131968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55877614a800 session 0x558775d09c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:56.608765+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 98131968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55877d217800 session 0x558777957c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:57.608983+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 heartbeat osd_stat(store_statfs(0x1994a1000/0x0/0x1bfc00000, data 0x245ef21/0x269c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55878c1aa000 session 0x558778aa3680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:58.609291+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 heartbeat osd_stat(store_statfs(0x1994c1000/0x0/0x1bfc00000, data 0x1c6bf11/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:59.609669+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:00.609856+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428284 data_alloc: 218103808 data_used: 10838016
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:01.610078+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:02.610253+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.973116875s of 11.827588081s, submitted: 32
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7a800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558775b7a800 session 0x55877450f860
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c96000/0x0/0x1bfc00000, data 0x1c6bf11/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587750d7000 session 0x5587732330e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:03.610393+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:04.610560+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558774516c00 session 0x558775cdb0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:05.610721+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5432458 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:06.610925+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:07.611286+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x1c6da50/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x1c6da50/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558772a0fc00 session 0x5587740d54a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fc000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:08.611448+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587754fc000 session 0x558778aa3c20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558775130000 session 0x558775d5c3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586ac00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877586ac00 session 0x558775db8780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:09.611593+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:10.611719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:11.611896+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:12.612191+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:13.612399+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:14.612551+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:15.612716+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:16.612847+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:17.613005+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:18.613174+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:19.613305+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:20.613489+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:21.613607+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:22.613720+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:23.613878+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:24.614172+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:25.614486+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:26.614611+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:27.614762+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:28.614911+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:29.615044+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:30.615223+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:31.615386+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:32.615598+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:33.615742+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:34.615899+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:35.616056+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877441e400 session 0x558775f9fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:36.616227+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:37.616457+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587750c5000 session 0x5587740d43c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:38.616611+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558774516400 session 0x5587735ced20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d216800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.154342651s of 36.324752808s, submitted: 40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877d216800 session 0x5587745005a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:39.616796+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540295168 unmapped: 106930176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:40.616940+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773651400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5488634 data_alloc: 218103808 data_used: 10846208
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540295168 unmapped: 106930176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:41.617064+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:42.617219+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:43.617415+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:44.617560+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:45.617686+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5535834 data_alloc: 234881024 data_used: 17489920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:46.617828+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:47.618155+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:48.618385+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:49.618630+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:50.618845+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5535834 data_alloc: 234881024 data_used: 17489920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:51.619077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:52.619305+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211997032s of 13.701082230s, submitted: 8
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545931264 unmapped: 101294080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:53.619491+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545980416 unmapped: 101244928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,7])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:54.619614+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:55.619763+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5626294 data_alloc: 234881024 data_used: 18419712
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0d000/0x0/0x1bfc00000, data 0x2cf1ac2/0x2f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:56.619901+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:57.620105+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:58.620279+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:59.620406+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545734656 unmapped: 101490688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:00.620521+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5625966 data_alloc: 234881024 data_used: 18423808
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545734656 unmapped: 101490688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:01.620654+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0a000/0x0/0x1bfc00000, data 0x2cf4ac2/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:02.620783+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:03.620926+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:04.621151+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:05.621354+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0a000/0x0/0x1bfc00000, data 0x2cf4ac2/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5626286 data_alloc: 234881024 data_used: 18432000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:06.621544+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.477091789s of 14.712428093s, submitted: 85
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:07.621751+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877d7f5400 session 0x558779dfde00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877e6c8c00 session 0x5587758def00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:08.621952+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:09.622134+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:10.622296+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5633786 data_alloc: 234881024 data_used: 18436096
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:11.622503+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:12.622697+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:13.622853+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:14.623008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:15.623187+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5633786 data_alloc: 234881024 data_used: 18436096
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:16.623359+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:17.623575+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:18.623727+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:19.623928+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.506603241s of 12.525759697s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:20.624108+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634112 data_alloc: 234881024 data_used: 18440192
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:21.624301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545792000 unmapped: 101433344 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558773653c00 session 0x558775f86f00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877d7f5000 session 0x558773e430e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:22.625430+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545792000 unmapped: 101433344 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:23.625590+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c05000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545800192 unmapped: 101425152 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:24.625790+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:25.625949+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652598 data_alloc: 234881024 data_used: 18436096
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:26.626174+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:27.626374+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:28.626544+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:29.626719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:30.626874+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652758 data_alloc: 234881024 data_used: 18440192
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:31.627480+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:32.627693+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:33.627863+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:34.628020+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.607929230s of 15.318544388s, submitted: 9
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:35.628237+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545824768 unmapped: 101400576 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558772a0e000 session 0x558778aa34a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652758 data_alloc: 234881024 data_used: 18440192
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:36.628451+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545824768 unmapped: 101400576 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:37.629032+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:38.629461+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:39.629680+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545701888 unmapped: 101523456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:40.629906+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:41.630126+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:42.630347+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:43.630464+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:44.630646+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:45.630803+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:46.631036+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:47.631301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:48.631616+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 78K writes, 308K keys, 78K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s
                                           Cumulative WAL: 78K writes, 29K syncs, 2.67 writes per sync, written: 0.31 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1807 writes, 6006 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 5.18 MB, 0.01 MB/s
                                           Interval WAL: 1807 writes, 769 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:49.631767+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:50.631922+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.178545952s of 16.156717300s, submitted: 3
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:51.632157+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:52.632432+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:53.632629+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:54.632836+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:55.633045+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5694251 data_alloc: 234881024 data_used: 19750912
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:56.633245+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:57.633534+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:58.633693+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:59.633894+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:00.634052+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:01.634180+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:02.634427+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:03.634600+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:04.634809+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: mgrc ms_handle_reset ms_handle_reset con 0x558773651c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 09:17:34 compute-2 ceph-osd[79833]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: get_auth_request con 0x55877441e400 auth_method 0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: mgrc handle_mgr_configure stats_period=5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:05.635126+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:06.635352+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:07.635522+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:08.635742+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:09.635976+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:10.636114+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:11.636289+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:12.636491+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:13.636781+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:14.636984+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:15.637192+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:16.637378+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:17.637572+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:18.639795+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.341226578s of 27.352630615s, submitted: 3
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558779337800 session 0x55877b486780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:19.639957+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:20.640119+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877441e000 session 0x55877353cf00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5686995 data_alloc: 234881024 data_used: 20164608
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:21.640355+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:22.640565+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:23.640818+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 103841792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:24.640999+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 103784448 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:25.641408+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 103784448 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5687083 data_alloc: 234881024 data_used: 20176896
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:26.641618+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543457280 unmapped: 103768064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x5587770d6800 session 0x558773204960
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877610b000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877610b000 session 0x5587735125a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:27.641913+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877e6ca800 session 0x558775d09a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543547392 unmapped: 103677952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:28.642134+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1.947998643s of 10.021273613s, submitted: 337
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773650400 session 0x55877353dc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198c05000/0x0/0x1bfc00000, data 0x2f2187d/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:29.642307+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773651400 session 0x558775d5dc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x5587770d4800 session 0x558775c3e780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:30.642462+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773650400 session 0x5587735cfa40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5655846 data_alloc: 234881024 data_used: 20176896
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:31.642592+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:32.642800+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773651400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773651400 session 0x558775cdbe00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877610b000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:33.642956+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198c03000/0x0/0x1bfc00000, data 0x2cf851a/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:34.643133+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198e84000/0x0/0x1bfc00000, data 0x1c7151a/0x1eb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:35.643389+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x55877610b000 session 0x55877b487680
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468058 data_alloc: 218103808 data_used: 11431936
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x199a60000/0x0/0x1bfc00000, data 0x1c7151a/0x1eb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:36.643642+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:37.643943+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 434 heartbeat osd_stat(store_statfs(0x199a60000/0x0/0x1bfc00000, data 0x1c714b8/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:38.644099+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.245321274s of 10.457572937s, submitted: 58
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: get_auth_request con 0x558772983c00 auth_method 0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 435 ms_handle_reset con 0x55877db82800 session 0x558773229e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:39.644287+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:40.644514+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 435 heartbeat osd_stat(store_statfs(0x199c85000/0x0/0x1bfc00000, data 0x1c74c94/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462573 data_alloc: 218103808 data_used: 10899456
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:41.644777+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:42.645009+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:43.645190+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:44.645399+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:45.645545+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 435 heartbeat osd_stat(store_statfs(0x199c85000/0x0/0x1bfc00000, data 0x1c74c94/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462573 data_alloc: 218103808 data_used: 10899456
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:46.645789+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:47.645960+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:48.646201+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:49.646445+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:50.646658+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464875 data_alloc: 218103808 data_used: 10899456
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:51.646822+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:52.647063+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614a800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877614a800 session 0x55877317eb40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587757c5400 session 0x558775d08000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:53.647479+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f4400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:54.647660+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.175668716s of 15.365993500s, submitted: 22
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877d7f4400 session 0x55877c2c65a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:55.647882+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5466349 data_alloc: 218103808 data_used: 10899456
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:56.648108+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:57.648432+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587770d4800 session 0x5587736cba40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:58.648671+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c84000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 103727104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558778312c00 session 0x55877c2c7a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:59.648842+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767fc/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,1,3])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548593664 unmapped: 98631680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:00.649021+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587757c5400 session 0x558775c63e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544653312 unmapped: 102572032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877441fc00 session 0x558777956780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:01.649520+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:02.649660+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:03.649836+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:04.650016+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:05.650191+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:06.650397+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:07.650644+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:08.650815+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:09.651028+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:10.651223+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:11.651413+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:12.651581+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:13.651790+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775777c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.897251129s of 19.785919189s, submitted: 52
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558775777c00 session 0x558777957e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:14.652259+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544563200 unmapped: 102662144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fd400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:15.652438+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544604160 unmapped: 102621184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573111 data_alloc: 218103808 data_used: 14704640
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:16.652589+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:17.652797+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:18.652996+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:19.653233+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:20.653410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608631 data_alloc: 234881024 data_used: 18300928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:21.653569+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:22.653729+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:23.653903+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:24.654054+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:25.654250+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608631 data_alloc: 234881024 data_used: 18300928
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:26.702116+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.996846199s of 13.037012100s, submitted: 11
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:27.702419+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 100532224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:28.702579+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:29.702800+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:30.702935+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653735 data_alloc: 234881024 data_used: 18366464
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:31.703122+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:32.703260+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:33.703401+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:34.703563+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548036608 unmapped: 99188736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587770db800 session 0x5587758df2c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587754fd400 session 0x558775c3e780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:35.703722+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877441fc00 session 0x558775f9fe00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:36.703891+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:37.704143+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:38.704379+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:39.704583+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:40.704732+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:41.704865+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:42.705027+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:43.705222+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:44.705341+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:45.705537+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:46.705705+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:47.705924+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:48.706116+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558775130c00 session 0x558779dfc1e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779335000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558779335000 session 0x558779dfcb40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:49.706410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776994000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558776994000 session 0x558779dfdc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d174000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.644924164s of 22.960281372s, submitted: 104
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55878d174000 session 0x558779dfc5a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:50.706580+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:51.706708+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656519 data_alloc: 234881024 data_used: 18468864
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:52.706883+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:53.707034+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:54.707262+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:55.707484+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:56.707705+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651079 data_alloc: 234881024 data_used: 18993152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:57.707924+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:58.708074+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:59.708288+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:00.708475+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:01.708666+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651079 data_alloc: 234881024 data_used: 18993152
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:02.708803+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.647368431s of 12.696481705s, submitted: 13
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:03.708960+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:04.709095+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:05.709219+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:06.709340+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5663855 data_alloc: 234881024 data_used: 19542016
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:07.709558+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:08.709700+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:09.709947+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:10.710138+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:11.710341+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661727 data_alloc: 234881024 data_used: 19537920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:12.710499+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:13.710711+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:14.710909+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.475411415s of 12.517531395s, submitted: 22
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:15.711066+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:16.711225+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661375 data_alloc: 234881024 data_used: 19537920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:17.711392+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:18.711551+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:19.711726+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:20.711900+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:21.712063+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661199 data_alloc: 234881024 data_used: 19537920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:22.712225+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:23.712384+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:24.712533+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:25.712744+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:26.712922+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661199 data_alloc: 234881024 data_used: 19537920
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:27.713158+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:28.713297+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:29.713495+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:30.713652+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.725869179s of 15.739192009s, submitted: 4
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770da000 session 0x5587744b7e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:31.713820+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5671973 data_alloc: 234881024 data_used: 20385792
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548249600 unmapped: 98975744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558779334c00 session 0x55877450f4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773017800 session 0x55877317fc20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:32.713975+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c65000/0x0/0x1bfc00000, data 0x2aef523/0x2d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:33.714134+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:34.714281+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:35.714504+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:36.714662+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:37.714947+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:38.715097+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:39.715254+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:40.715389+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:41.715573+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:42.715759+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:43.716008+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:44.716171+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:45.716401+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699c00 session 0x558779c42d20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:46.716535+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699400 session 0x558773233a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:47.716749+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773017800 session 0x558773e42780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d8400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.519697189s of 17.087022781s, submitted: 10
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770d8400 session 0x558775f9ef00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:48.716935+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699400
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:49.717122+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:50.717275+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:51.717454+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681799 data_alloc: 234881024 data_used: 20459520
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:52.717659+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:53.717806+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:54.717966+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:55.718120+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:56.718263+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681799 data_alloc: 234881024 data_used: 20459520
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:57.718464+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:58.718614+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:59.718729+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:00.718875+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:01.719032+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.313117027s of 13.386991501s, submitted: 2
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5692281 data_alloc: 234881024 data_used: 21245952
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:02.719159+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:03.719413+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:04.719560+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c57000/0x0/0x1bfc00000, data 0x2baf533/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:05.719759+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:06.719962+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5703035 data_alloc: 234881024 data_used: 21241856
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:07.720130+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:08.720302+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:09.720484+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b78000/0x0/0x1bfc00000, data 0x2cc8533/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:10.720619+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:11.720763+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713751 data_alloc: 234881024 data_used: 21389312
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:12.720911+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:13.721054+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:14.721265+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:15.721451+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b78000/0x0/0x1bfc00000, data 0x2cc8533/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:16.721587+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713751 data_alloc: 234881024 data_used: 21389312
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:17.721771+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:18.721894+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.206661224s of 17.654935837s, submitted: 16
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:19.722029+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:20.722188+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b77000/0x0/0x1bfc00000, data 0x2cc9533/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:21.722330+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713979 data_alloc: 234881024 data_used: 21389312
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:22.722483+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:23.722639+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:24.722806+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:25.723007+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:26.723148+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699400 session 0x558773f2d4a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699c00 session 0x558777956000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5711339 data_alloc: 234881024 data_used: 21495808
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dac00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b6f000/0x0/0x1bfc00000, data 0x2cc9533/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:27.723301+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770dac00 session 0x5587744383c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:28.723447+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:29.723601+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558785e46800 session 0x55877450f0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.847147942s of 10.864791870s, submitted: 5
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x55878c1a9000 session 0x558775f874a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:30.723735+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47000
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558785e47000 session 0x558775d5d0e0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1ab800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x55878c1ab800 session 0x558773681e00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:31.723838+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5696562 data_alloc: 234881024 data_used: 21389312
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:32.723977+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2af116e/0x2d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x55877441fc00 session 0x558779dfcd20
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:33.724141+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x558775130c00 session 0x5587740d54a0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x558785e46800 session 0x558773ede3c0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:34.724286+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:35.724419+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:36.724581+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5688029 data_alloc: 234881024 data_used: 21274624
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:37.724792+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548765696 unmapped: 98459648 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 ms_handle_reset con 0x55877e6c8800 session 0x558778aa3a40
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1abc00
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:38.724941+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x197c86000/0x0/0x1bfc00000, data 0x2acec7a/0x2d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 ms_handle_reset con 0x55878c1abc00 session 0x55877b486780
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:39.725073+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:40.725211+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:41.725388+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:42.725510+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:43.725702+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:44.725906+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:45.726109+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:46.726259+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:47.726526+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:48.726669+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:49.726876+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:50.727060+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:51.727221+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:52.727355+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:53.727530+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:54.727872+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:55.728059+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:56.728198+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:57.728420+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:58.728584+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:59.728759+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:00.728873+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:01.729075+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:02.729178+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:03.729480+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:04.729689+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:05.729857+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:06.730076+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:07.730431+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:08.730608+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:09.730759+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:10.730915+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:11.731052+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:12.731250+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:13.731396+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:14.731584+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:15.731766+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:16.731989+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:17.732163+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:18.732336+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:19.732474+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:20.732692+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:21.732841+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:22.733007+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:23.733294+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:24.733525+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:25.733719+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:26.733925+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:27.734230+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:28.734435+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:29.734714+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548659200 unmapped: 98566144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:30.734895+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548659200 unmapped: 98566144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:31.735055+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:32.735228+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:33.735394+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:34.735524+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:35.735866+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:36.736067+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:37.736305+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:38.736643+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:39.736829+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:40.737014+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:41.737213+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:42.737386+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:43.737559+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:44.737709+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:45.737854+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 98541568 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:46.738026+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 98541568 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:47.738192+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548691968 unmapped: 98533376 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:48.738410+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548691968 unmapped: 98533376 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:49.738599+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548700160 unmapped: 98525184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:50.738748+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:51.738905+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:52.739077+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:53.739299+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:54.739512+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548716544 unmapped: 98508800 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:55.739710+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548716544 unmapped: 98508800 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:56.739853+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:57.740385+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:58.740514+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:59.740656+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:00.740813+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}'
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:01.740934+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548798464 unmapped: 98426880 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}'
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:17:34 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:17:34 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:02.741108+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548438016 unmapped: 98787328 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:17:34 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:03.741274+0000)
Nov 29 09:17:34 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:17:34 compute-2 ceph-osd[79833]: do_command 'log dump' '{prefix=log dump}'
Nov 29 09:17:34 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:17:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:34.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 09:17:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3727489054' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 09:17:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/397781663' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.43968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.50720 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.43983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.43995 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.47686 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.50747 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2700122359' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.44016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2873816643' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2873818589' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1639215689' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3727489054' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3228355095' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1611074507' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 09:17:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2440997494' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:17:35 compute-2 crontab[350793]: (root) LIST (root)
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.47704 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.50759 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.44037 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.47725 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.50771 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.44055 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/397781663' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.47740 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.50783 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/101951362' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/750011019' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:17:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2440997494' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:17:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 09:17:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2865687822' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:17:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 09:17:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3361912093' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 09:17:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3496834171' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.44070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.47758 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.50807 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.44091 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.50822 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.47773 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/674154820' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2845288947' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2865687822' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3784398411' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/201732818' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3361912093' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1722044676' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2031095561' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 09:17:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/295366142' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 nova_compute[232428]: 2025-11-29 09:17:37.360 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 09:17:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/581378899' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 09:17:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3798038641' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 09:17:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/295512601' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 09:17:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1271020278' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:38.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.44118 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.47794 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.50864 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/956463298' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3496834171' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/295366142' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/61628711' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1701076098' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2677743725' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1609830633' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/581378899' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3798038641' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1272515252' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3657199559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1453176421' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1816638030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/295512601' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1271020278' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 09:17:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3561092254' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:17:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 09:17:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3824388452' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 09:17:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2209770410' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:17:38 compute-2 nova_compute[232428]: 2025-11-29 09:17:38.910 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 09:17:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1878520248' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 09:17:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/59928691' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 09:17:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2123546082' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 systemd[1]: Starting Hostname Service...
Nov 29 09:17:39 compute-2 ceph-mon[77138]: pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2109516250' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3375629223' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/892480265' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3561092254' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1073022535' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3824388452' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/38409193' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/42238607' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1337990301' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2209770410' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/291863460' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3308801323' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1878520248' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3765251769' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2953156169' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:17:39 compute-2 podman[351278]: 2025-11-29 09:17:39.656178614 +0000 UTC m=+0.072283835 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 09:17:39 compute-2 systemd[1]: Started Hostname Service.
Nov 29 09:17:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 09:17:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3450822020' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 09:17:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3429702625' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:40.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:17:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/59928691' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1685737166' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1167337033' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1980321960' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2123546082' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3653193851' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3450822020' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1848883205' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.44277 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3429702625' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.51017 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.47926 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3920128576' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:17:41 compute-2 nova_compute[232428]: 2025-11-29 09:17:41.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 09:17:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3161367851' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:17:41 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 09:17:41 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3676024629' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.44301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.44295 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.47935 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.51029 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.47941 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.44310 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.51038 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.47950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.51056 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2315486681' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.47965 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2006648945' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3161367851' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2496590787' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/56438089' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:17:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3676024629' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:17:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 09:17:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1707779337' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:42 compute-2 nova_compute[232428]: 2025-11-29 09:17:42.363 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 09:17:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/429345428' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.51068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.44352 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.47986 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.51083 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.44382 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.48001 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1707779337' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2730671749' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4248766985' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3801093389' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/429345428' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/506427084' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:43 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 09:17:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4593179' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:17:43 compute-2 nova_compute[232428]: 2025-11-29 09:17:43.912 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:17:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:17:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 09:17:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2453392932' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.51098 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.48019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.44403 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.51119 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.48034 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.51134 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4593179' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2504882129' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/562493909' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.48088 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.51191 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: from='client.44490 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:44 compute-2 ceph-mon[77138]: pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 09:17:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/968665399' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 09:17:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3070862264' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2453392932' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3402498730' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1865210687' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/968665399' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1719245278' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3920761496' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2897488850' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/146904133' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3070862264' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:17:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 09:17:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1739009266' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:17:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:46.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 09:17:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2565094884' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2823251554' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1849550459' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1739009266' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.51230 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.44541 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.48133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1432428896' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:17:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3886331604' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 09:17:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4160117189' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:17:47 compute-2 nova_compute[232428]: 2025-11-29 09:17:47.367 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 09:17:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2201412700' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2565094884' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4060416169' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1043708518' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4160117189' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.51257 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:47 compute-2 ceph-mon[77138]: from='client.44574 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:48 compute-2 nova_compute[232428]: 2025-11-29 09:17:48.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:48 compute-2 nova_compute[232428]: 2025-11-29 09:17:48.203 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:48.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:17:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:17:48 compute-2 nova_compute[232428]: 2025-11-29 09:17:48.913 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 09:17:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3679001251' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.48163 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/498813386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/328211359' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2201412700' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.44592 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:49 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3959855484' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:17:49 compute-2 nova_compute[232428]: 2025-11-29 09:17:49.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:49 compute-2 sudo[352622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:49 compute-2 sudo[352622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:49 compute-2 sudo[352622]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:49 compute-2 sudo[352659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:17:49 compute-2 sudo[352659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:17:49 compute-2 sudo[352659]: pam_unix(sudo:session): session closed for user root
Nov 29 09:17:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 09:17:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1864522322' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 09:17:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:50.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.51275 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.48175 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.44598 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.48181 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.51284 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3679001251' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4139312844' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/930279285' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1864522322' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4216702883' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 09:17:51 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 09:17:51 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1674329803' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.44622 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.44634 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.48214 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.51323 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.48220 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.51329 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/963325396' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3984160017' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1674329803' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2164508129' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/780921308' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 09:17:52 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2964408225' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 09:17:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:52 compute-2 nova_compute[232428]: 2025-11-29 09:17:52.424 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 09:17:52 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3089853072' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:52 compute-2 ovs-appctl[353530]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 09:17:52 compute-2 ovs-appctl[353543]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 09:17:52 compute-2 ovs-appctl[353551]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 09:17:52 compute-2 podman[353541]: 2025-11-29 09:17:52.699902223 +0000 UTC m=+0.098655224 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 09:17:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 09:17:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/12511313' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.51356 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.44664 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.48250 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.44670 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.51374 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3089853072' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/494945659' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1024385563' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/12511313' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3032159976' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 09:17:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 09:17:53 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2063273800' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:53 compute-2 nova_compute[232428]: 2025-11-29 09:17:53.916 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 09:17:54 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2697728515' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:54 compute-2 nova_compute[232428]: 2025-11-29 09:17:54.373 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:54 compute-2 nova_compute[232428]: 2025-11-29 09:17:54.374 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:17:54 compute-2 nova_compute[232428]: 2025-11-29 09:17:54.374 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:17:54 compute-2 nova_compute[232428]: 2025-11-29 09:17:54.401 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:17:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:17:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:54.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.311 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.313 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.313 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.313 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:17:55 compute-2 nova_compute[232428]: 2025-11-29 09:17:55.314 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:17:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 09:17:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2040316573' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:17:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2006863930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:56 compute-2 nova_compute[232428]: 2025-11-29 09:17:56.221 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.908s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:17:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:56 compute-2 nova_compute[232428]: 2025-11-29 09:17:56.378 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:17:56 compute-2 nova_compute[232428]: 2025-11-29 09:17:56.380 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4022MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:17:56 compute-2 nova_compute[232428]: 2025-11-29 09:17:56.380 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:17:56 compute-2 nova_compute[232428]: 2025-11-29 09:17:56.381 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:17:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:56 compute-2 ceph-mon[77138]: from='client.44679 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:17:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3476151050' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 09:17:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2063273800' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4165202095' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:56 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1821901786' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 09:17:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2031960112' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.377 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.377 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.423 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.456 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:17:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1471762219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.871 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.877 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.900 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.901 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:17:57 compute-2 nova_compute[232428]: 2025-11-29 09:17:57.901 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:17:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:17:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 09:17:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1170506955' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.48283 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.51422 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.51428 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2697728515' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1723737267' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1601568759' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3682814035' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3645652316' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1108924346' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4152834789' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3006706261' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1854898143' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.44757 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2040316573' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2006863930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3753974095' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1232979909' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2031960112' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2541776718' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1953938530' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1471762219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:17:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:17:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:17:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:17:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 09:17:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3047559440' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:58 compute-2 nova_compute[232428]: 2025-11-29 09:17:58.903 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:58 compute-2 nova_compute[232428]: 2025-11-29 09:17:58.903 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:58 compute-2 nova_compute[232428]: 2025-11-29 09:17:58.903 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:17:58 compute-2 nova_compute[232428]: 2025-11-29 09:17:58.918 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.44784 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.51488 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.48328 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1170506955' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.44799 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2025109886' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.44805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3047559440' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1138314140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2248783452' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.48358 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: from='client.51512 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 09:17:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3658478532' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 09:17:59 compute-2 podman[354944]: 2025-11-29 09:17:59.66218416 +0000 UTC m=+0.064396841 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 09:18:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2211685371' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3658478532' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3906095358' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.44841 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.44844 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.44853 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.51539 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: from='client.48385 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 09:18:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3776834470' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 09:18:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 09:18:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1857471046' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/724362206' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.51548 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3776834470' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3933268894' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1888875257' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1857471046' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.44880 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1843533178' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:01 compute-2 ceph-mon[77138]: from='client.48403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 09:18:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1507967677' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 nova_compute[232428]: 2025-11-29 09:18:02.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:02 compute-2 nova_compute[232428]: 2025-11-29 09:18:02.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:18:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:02 compute-2 nova_compute[232428]: 2025-11-29 09:18:02.461 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 09:18:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2661000722' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.44886 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.48412 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.51578 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/985417287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1507967677' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:18:02 compute-2 ceph-mon[77138]: from='client.51584 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:18:03.388 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:18:03.388 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:18:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:18:03.389 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:18:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2818360036' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2135483182' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2661000722' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4166184950' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 ceph-mon[77138]: from='client.51602 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:03 compute-2 nova_compute[232428]: 2025-11-29 09:18:03.919 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 09:18:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2441586830' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:04.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:04 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 09:18:04 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3446892227' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:04 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.51608 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.48439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.51617 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2441586830' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4068616332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2795749512' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3446892227' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:04 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1920344964' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 09:18:05 compute-2 systemd[1]: Starting Time & Date Service...
Nov 29 09:18:05 compute-2 systemd[1]: Started Time & Date Service.
Nov 29 09:18:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3785548121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:05 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2000300130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:06.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/267296023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:06 compute-2 ceph-mon[77138]: pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:07 compute-2 nova_compute[232428]: 2025-11-29 09:18:07.463 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:08 compute-2 nova_compute[232428]: 2025-11-29 09:18:08.921 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:09 compute-2 ceph-mon[77138]: pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:09 compute-2 sudo[355735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:09 compute-2 sudo[355735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:09 compute-2 sudo[355735]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:09 compute-2 sudo[355766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:09 compute-2 sudo[355766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:09 compute-2 sudo[355766]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:09 compute-2 podman[355759]: 2025-11-29 09:18:09.792628586 +0000 UTC m=+0.086847151 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 09:18:10 compute-2 nova_compute[232428]: 2025-11-29 09:18:10.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:10.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.297831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890297955, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1274, "num_deletes": 251, "total_data_size": 2247664, "memory_usage": 2272280, "flush_reason": "Manual Compaction"}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890308074, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 1482108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95489, "largest_seqno": 96757, "table_properties": {"data_size": 1475611, "index_size": 3314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 19076, "raw_average_key_size": 22, "raw_value_size": 1461003, "raw_average_value_size": 1743, "num_data_blocks": 143, "num_entries": 838, "num_filter_entries": 838, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407828, "oldest_key_time": 1764407828, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 10338 microseconds, and 5076 cpu microseconds.
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.308187) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 1482108 bytes OK
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.308243) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.309862) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.309875) EVENT_LOG_v1 {"time_micros": 1764407890309871, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.309891) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 2240571, prev total WAL file size 2240571, number of live WAL files 2.
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.310918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(1447KB)], [195(12MB)]
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890311044, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14242608, "oldest_snapshot_seqno": -1}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 12090 keys, 12290167 bytes, temperature: kUnknown
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890397642, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 12290167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12216722, "index_size": 42100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 321139, "raw_average_key_size": 26, "raw_value_size": 12009979, "raw_average_value_size": 993, "num_data_blocks": 1580, "num_entries": 12090, "num_filter_entries": 12090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.398018) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 12290167 bytes
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.399062) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.2 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(17.9) write-amplify(8.3) OK, records in: 12605, records dropped: 515 output_compression: NoCompression
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.399080) EVENT_LOG_v1 {"time_micros": 1764407890399071, "job": 126, "event": "compaction_finished", "compaction_time_micros": 86733, "compaction_time_cpu_micros": 32385, "output_level": 6, "num_output_files": 1, "total_output_size": 12290167, "num_input_records": 12605, "num_output_records": 12090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890399435, "job": 126, "event": "table_file_deletion", "file_number": 197}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890401284, "job": 126, "event": "table_file_deletion", "file_number": 195}
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.310825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.401328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.401332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.401334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.401335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:18:10.401336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:18:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:18:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:18:11 compute-2 ceph-mon[77138]: pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:12.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:12 compute-2 nova_compute[232428]: 2025-11-29 09:18:12.465 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:12 compute-2 ceph-mon[77138]: pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:13 compute-2 nova_compute[232428]: 2025-11-29 09:18:13.924 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:14.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:15 compute-2 ceph-mon[77138]: pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:16.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:17 compute-2 ceph-mon[77138]: pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:17 compute-2 nova_compute[232428]: 2025-11-29 09:18:17.467 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:18 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:18:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:18:18 compute-2 ceph-mon[77138]: pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:18 compute-2 nova_compute[232428]: 2025-11-29 09:18:18.925 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:20.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:20 compute-2 sshd-session[355811]: Invalid user ubuntu from 45.148.10.240 port 60092
Nov 29 09:18:20 compute-2 sshd-session[355811]: Connection closed by invalid user ubuntu 45.148.10.240 port 60092 [preauth]
Nov 29 09:18:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:20.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:21 compute-2 ceph-mon[77138]: pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:22.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:22 compute-2 nova_compute[232428]: 2025-11-29 09:18:22.468 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:22 compute-2 ceph-mon[77138]: pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:22 compute-2 podman[355814]: 2025-11-29 09:18:22.876043746 +0000 UTC m=+0.117769922 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:18:23 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:23 compute-2 nova_compute[232428]: 2025-11-29 09:18:23.927 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:24.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:25 compute-2 ceph-mon[77138]: pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:18:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:26.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:18:26 compute-2 ceph-mon[77138]: pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:26.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:27 compute-2 nova_compute[232428]: 2025-11-29 09:18:27.471 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:27 compute-2 sudo[355845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:27 compute-2 sudo[355845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:27 compute-2 sudo[355845]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:27 compute-2 sudo[355870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:18:27 compute-2 sudo[355870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:27 compute-2 sudo[355870]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:27 compute-2 sudo[355895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:27 compute-2 sudo[355895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:27 compute-2 sudo[355895]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:27 compute-2 sudo[355920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 09:18:27 compute-2 sudo[355920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:28 compute-2 sudo[355920]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:28.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:28 compute-2 sudo[355965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:28 compute-2 sudo[355965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:28 compute-2 sudo[355965]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:28 compute-2 sudo[355990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:18:28 compute-2 sudo[355990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:28 compute-2 sudo[355990]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:28 compute-2 sudo[356015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:28 compute-2 sudo[356015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:28 compute-2 sudo[356015]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:28 compute-2 sudo[356040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:18:28 compute-2 sudo[356040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:28 compute-2 nova_compute[232428]: 2025-11-29 09:18:28.928 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:29 compute-2 sudo[356040]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:29 compute-2 ceph-mon[77138]: pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1875438823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/1875438823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:18:29 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:18:29 compute-2 sudo[356096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:29 compute-2 sudo[356096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:29 compute-2 sudo[356096]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:29 compute-2 podman[356120]: 2025-11-29 09:18:29.904690163 +0000 UTC m=+0.055340262 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 09:18:29 compute-2 sudo[356127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:29 compute-2 sudo[356127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:29 compute-2 sudo[356127]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:30.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:30.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:32.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:32 compute-2 nova_compute[232428]: 2025-11-29 09:18:32.473 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:18:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:18:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:18:32 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:18:32 compute-2 ceph-mon[77138]: pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:32.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:33 compute-2 ceph-mon[77138]: pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:33 compute-2 nova_compute[232428]: 2025-11-29 09:18:33.931 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:34.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:34 compute-2 ceph-mon[77138]: pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:34.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:35 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 09:18:35 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 09:18:35 compute-2 sudo[356172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:35 compute-2 sudo[356172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:35 compute-2 sudo[356172]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:35 compute-2 sudo[356197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:18:35 compute-2 sudo[356197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:35 compute-2 sudo[356197]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:36.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:18:36 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:18:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:36.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:37 compute-2 nova_compute[232428]: 2025-11-29 09:18:37.474 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:38 compute-2 ceph-mon[77138]: pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:38.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:38 compute-2 ceph-mon[77138]: pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:38 compute-2 nova_compute[232428]: 2025-11-29 09:18:38.932 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:40.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:40 compute-2 podman[356225]: 2025-11-29 09:18:40.686678763 +0000 UTC m=+0.083954173 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:18:40 compute-2 ceph-mon[77138]: pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:41 compute-2 nova_compute[232428]: 2025-11-29 09:18:41.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:42.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:42 compute-2 nova_compute[232428]: 2025-11-29 09:18:42.476 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:42.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:43 compute-2 ceph-mon[77138]: pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:43 compute-2 nova_compute[232428]: 2025-11-29 09:18:43.934 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000062s ======
Nov 29 09:18:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:44.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Nov 29 09:18:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:44.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:45 compute-2 ceph-mon[77138]: pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:47 compute-2 ceph-mon[77138]: pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:47 compute-2 nova_compute[232428]: 2025-11-29 09:18:47.478 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:48 compute-2 nova_compute[232428]: 2025-11-29 09:18:48.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:18:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:48.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:18:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:48.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:48 compute-2 nova_compute[232428]: 2025-11-29 09:18:48.935 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:49 compute-2 ceph-mon[77138]: pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:49 compute-2 sudo[356253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:49 compute-2 sudo[356253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:49 compute-2 sudo[356253]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:50 compute-2 sudo[356279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:18:50 compute-2 sudo[356279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:18:50 compute-2 sudo[356279]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:50 compute-2 nova_compute[232428]: 2025-11-29 09:18:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:50.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:50.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:51 compute-2 ceph-mon[77138]: pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:51 compute-2 sudo[348707]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:51 compute-2 sshd-session[348706]: Received disconnect from 192.168.122.10 port 56940:11: disconnected by user
Nov 29 09:18:51 compute-2 sshd-session[348706]: Disconnected from user zuul 192.168.122.10 port 56940
Nov 29 09:18:51 compute-2 sshd-session[348703]: pam_unix(sshd:session): session closed for user zuul
Nov 29 09:18:51 compute-2 systemd[1]: session-60.scope: Deactivated successfully.
Nov 29 09:18:51 compute-2 systemd[1]: session-60.scope: Consumed 2min 53.633s CPU time, 1.0G memory peak, read 448.4M from disk, written 346.2M to disk.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Session 60 logged out. Waiting for processes to exit.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Removed session 60.
Nov 29 09:18:51 compute-2 sshd-session[356304]: Accepted publickey for zuul from 192.168.122.10 port 41688 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 09:18:51 compute-2 systemd-logind[787]: New session 61 of user zuul.
Nov 29 09:18:51 compute-2 systemd[1]: Started Session 61 of User zuul.
Nov 29 09:18:51 compute-2 sshd-session[356304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 09:18:51 compute-2 sudo[356308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-2-2025-11-29-ebdzlgc.tar.xz
Nov 29 09:18:51 compute-2 sudo[356308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 09:18:51 compute-2 sudo[356308]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:51 compute-2 sshd-session[356307]: Received disconnect from 192.168.122.10 port 41688:11: disconnected by user
Nov 29 09:18:51 compute-2 sshd-session[356307]: Disconnected from user zuul 192.168.122.10 port 41688
Nov 29 09:18:51 compute-2 sshd-session[356304]: pam_unix(sshd:session): session closed for user zuul
Nov 29 09:18:51 compute-2 systemd[1]: session-61.scope: Deactivated successfully.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Session 61 logged out. Waiting for processes to exit.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Removed session 61.
Nov 29 09:18:51 compute-2 sshd-session[356333]: Accepted publickey for zuul from 192.168.122.10 port 41696 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 09:18:51 compute-2 systemd-logind[787]: New session 62 of user zuul.
Nov 29 09:18:51 compute-2 systemd[1]: Started Session 62 of User zuul.
Nov 29 09:18:51 compute-2 sshd-session[356333]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 09:18:51 compute-2 sudo[356337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 29 09:18:51 compute-2 sudo[356337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 09:18:51 compute-2 sudo[356337]: pam_unix(sudo:session): session closed for user root
Nov 29 09:18:51 compute-2 sshd-session[356336]: Received disconnect from 192.168.122.10 port 41696:11: disconnected by user
Nov 29 09:18:51 compute-2 sshd-session[356336]: Disconnected from user zuul 192.168.122.10 port 41696
Nov 29 09:18:51 compute-2 sshd-session[356333]: pam_unix(sshd:session): session closed for user zuul
Nov 29 09:18:51 compute-2 systemd[1]: session-62.scope: Deactivated successfully.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Session 62 logged out. Waiting for processes to exit.
Nov 29 09:18:51 compute-2 systemd-logind[787]: Removed session 62.
Nov 29 09:18:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:52.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:52 compute-2 nova_compute[232428]: 2025-11-29 09:18:52.479 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:18:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:18:53 compute-2 ceph-mon[77138]: pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:53 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:53 compute-2 podman[356363]: 2025-11-29 09:18:53.707274161 +0000 UTC m=+0.108052524 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:18:53 compute-2 nova_compute[232428]: 2025-11-29 09:18:53.938 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:54 compute-2 nova_compute[232428]: 2025-11-29 09:18:54.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:54 compute-2 nova_compute[232428]: 2025-11-29 09:18:54.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:18:54 compute-2 nova_compute[232428]: 2025-11-29 09:18:54.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:18:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:54.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:54 compute-2 nova_compute[232428]: 2025-11-29 09:18:54.821 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:18:55 compute-2 nova_compute[232428]: 2025-11-29 09:18:55.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:55 compute-2 ceph-mon[77138]: pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.027 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.028 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.028 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.028 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.029 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:18:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:56.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:56 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:18:56 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3474091318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.654 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:18:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.837 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.839 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4061MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.839 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:18:56 compute-2 nova_compute[232428]: 2025-11-29 09:18:56.839 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.313 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.314 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.356 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.482 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:57 compute-2 ceph-mon[77138]: pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:18:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:18:57 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4067226243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.859 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.864 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.941 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.943 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.944 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.944 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:57 compute-2 nova_compute[232428]: 2025-11-29 09:18:57.945 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 09:18:58 compute-2 nova_compute[232428]: 2025-11-29 09:18:58.300 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 09:18:58 compute-2 nova_compute[232428]: 2025-11-29 09:18:58.301 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:18:58 compute-2 nova_compute[232428]: 2025-11-29 09:18:58.301 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 09:18:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:18:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:18:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:18:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:18:58 compute-2 nova_compute[232428]: 2025-11-29 09:18:58.941 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:18:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3474091318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4067226243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:18:59 compute-2 ceph-mon[77138]: pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:00 compute-2 podman[356437]: 2025-11-29 09:19:00.658543679 +0000 UTC m=+0.062727600 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 09:19:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:00 compute-2 nova_compute[232428]: 2025-11-29 09:19:00.809 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:00 compute-2 nova_compute[232428]: 2025-11-29 09:19:00.810 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:00 compute-2 nova_compute[232428]: 2025-11-29 09:19:00.810 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:01 compute-2 ceph-mon[77138]: pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:02 compute-2 nova_compute[232428]: 2025-11-29 09:19:02.485 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:03 compute-2 nova_compute[232428]: 2025-11-29 09:19:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:03 compute-2 nova_compute[232428]: 2025-11-29 09:19:03.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:19:03.389 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:19:03.390 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:19:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:19:03.390 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:19:03 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:03 compute-2 ceph-mon[77138]: pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:03 compute-2 nova_compute[232428]: 2025-11-29 09:19:03.944 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:04.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:04 compute-2 nova_compute[232428]: 2025-11-29 09:19:04.579 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:05 compute-2 ceph-mon[77138]: pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:07 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4117766560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:19:07 compute-2 nova_compute[232428]: 2025-11-29 09:19:07.487 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:08 compute-2 ceph-mon[77138]: pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/607609603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:19:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3622949996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:19:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:08.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:08 compute-2 nova_compute[232428]: 2025-11-29 09:19:08.944 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:09 compute-2 ceph-mon[77138]: pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/968756917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:19:10 compute-2 sudo[356461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:10 compute-2 sudo[356461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:10 compute-2 sudo[356461]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:10 compute-2 sudo[356486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:10 compute-2 sudo[356486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:10 compute-2 sudo[356486]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:10.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:11 compute-2 podman[356511]: 2025-11-29 09:19:11.669545889 +0000 UTC m=+0.066864966 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 09:19:11 compute-2 ceph-mon[77138]: pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:12.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:12 compute-2 nova_compute[232428]: 2025-11-29 09:19:12.489 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:13 compute-2 ceph-mon[77138]: pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:13 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:13 compute-2 nova_compute[232428]: 2025-11-29 09:19:13.946 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:14.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:14 compute-2 ceph-mon[77138]: pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:16.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:16 compute-2 ceph-mon[77138]: pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:17 compute-2 nova_compute[232428]: 2025-11-29 09:19:17.490 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:19:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:18.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:19:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:18 compute-2 nova_compute[232428]: 2025-11-29 09:19:18.948 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:19 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:19 compute-2 ceph-mon[77138]: pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:20 compute-2 ceph-mon[77138]: pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:20.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:22.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:22 compute-2 nova_compute[232428]: 2025-11-29 09:19:22.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:23 compute-2 ceph-mon[77138]: pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:23 compute-2 nova_compute[232428]: 2025-11-29 09:19:23.949 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:24 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:24.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:24 compute-2 ceph-mon[77138]: pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:24 compute-2 podman[356538]: 2025-11-29 09:19:24.69116069 +0000 UTC m=+0.092218306 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 29 09:19:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:26.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:27 compute-2 ceph-mon[77138]: pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:27 compute-2 nova_compute[232428]: 2025-11-29 09:19:27.491 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:28.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/451159059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:19:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/451159059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:19:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:28 compute-2 nova_compute[232428]: 2025-11-29 09:19:28.952 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:29 compute-2 ceph-mon[77138]: pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:30 compute-2 sudo[356568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:30 compute-2 sudo[356568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:30 compute-2 sudo[356568]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:30.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:30 compute-2 sudo[356593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:30 compute-2 sudo[356593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:30 compute-2 sudo[356593]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:30 compute-2 ceph-mon[77138]: pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:31 compute-2 podman[356618]: 2025-11-29 09:19:31.645249304 +0000 UTC m=+0.053667860 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 09:19:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:32.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:32 compute-2 nova_compute[232428]: 2025-11-29 09:19:32.493 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:32.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:33 compute-2 ceph-mon[77138]: pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:33 compute-2 nova_compute[232428]: 2025-11-29 09:19:33.953 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:34 compute-2 ceph-mon[77138]: pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:34.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:36 compute-2 sudo[356641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:36 compute-2 sudo[356641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:36 compute-2 sudo[356641]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:36 compute-2 sudo[356666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:19:36 compute-2 sudo[356666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:36 compute-2 sudo[356666]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:36 compute-2 sudo[356691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:36 compute-2 sudo[356691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:36 compute-2 sudo[356691]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:36 compute-2 sudo[356716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:19:36 compute-2 sudo[356716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:36 compute-2 sudo[356716]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:36.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:37 compute-2 ceph-mon[77138]: pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:19:37 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:19:37 compute-2 nova_compute[232428]: 2025-11-29 09:19:37.495 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:38.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:38 compute-2 nova_compute[232428]: 2025-11-29 09:19:38.956 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:39 compute-2 ceph-mon[77138]: pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:40.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:40.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:41 compute-2 ceph-mon[77138]: pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:42 compute-2 nova_compute[232428]: 2025-11-29 09:19:42.498 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:42 compute-2 ceph-mon[77138]: pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:42 compute-2 podman[356776]: 2025-11-29 09:19:42.692627784 +0000 UTC m=+0.077664929 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:19:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:42.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:43 compute-2 nova_compute[232428]: 2025-11-29 09:19:43.435 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:43 compute-2 nova_compute[232428]: 2025-11-29 09:19:43.957 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:44 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:44.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:44.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:46 compute-2 ceph-mon[77138]: pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:46.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:46.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:47 compute-2 ceph-mon[77138]: pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:47 compute-2 nova_compute[232428]: 2025-11-29 09:19:47.499 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:48.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:48.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:48 compute-2 nova_compute[232428]: 2025-11-29 09:19:48.960 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:49 compute-2 nova_compute[232428]: 2025-11-29 09:19:49.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:49 compute-2 ceph-mon[77138]: pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:49 compute-2 sudo[356800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:49 compute-2 sudo[356800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:49 compute-2 sudo[356800]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:49 compute-2 sudo[356825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:19:49 compute-2 sudo[356825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:49 compute-2 sudo[356825]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:50 compute-2 nova_compute[232428]: 2025-11-29 09:19:50.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:19:50 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:19:50 compute-2 sudo[356851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:50 compute-2 sudo[356851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:50 compute-2 sudo[356851]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:50 compute-2 sudo[356876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:19:50 compute-2 sudo[356876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:19:50 compute-2 sudo[356876]: pam_unix(sudo:session): session closed for user root
Nov 29 09:19:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:50.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:52 compute-2 nova_compute[232428]: 2025-11-29 09:19:52.500 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:52 compute-2 ceph-mon[77138]: pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:52.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:53 compute-2 nova_compute[232428]: 2025-11-29 09:19:53.962 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:54 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:19:54 compute-2 nova_compute[232428]: 2025-11-29 09:19:54.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:54 compute-2 nova_compute[232428]: 2025-11-29 09:19:54.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:19:54 compute-2 nova_compute[232428]: 2025-11-29 09:19:54.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:19:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:19:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:54.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:19:54 compute-2 ceph-mon[77138]: pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:54 compute-2 nova_compute[232428]: 2025-11-29 09:19:54.482 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:19:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:55 compute-2 ceph-mon[77138]: pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:56 compute-2 podman[356903]: 2025-11-29 09:19:56.114303141 +0000 UTC m=+0.521350942 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:19:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:56.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:56.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:56 compute-2 ceph-mon[77138]: pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:19:57 compute-2 nova_compute[232428]: 2025-11-29 09:19:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:57 compute-2 nova_compute[232428]: 2025-11-29 09:19:57.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:19:57 compute-2 nova_compute[232428]: 2025-11-29 09:19:57.501 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:19:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:19:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:19:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:19:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:19:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:19:58 compute-2 nova_compute[232428]: 2025-11-29 09:19:58.964 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:00.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:00.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:02 compute-2 ceph-mon[77138]: pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.364 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.364 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.365 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.365 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.365 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:20:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.502 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:02 compute-2 podman[356954]: 2025-11-29 09:20:02.6652253 +0000 UTC m=+0.074532305 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 09:20:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:02.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:20:02 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/401087936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.820 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.986 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.988 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.988 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:20:02 compute-2 nova_compute[232428]: 2025-11-29 09:20:02.988 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:20:03 compute-2 ceph-mon[77138]: overall HEALTH_OK
Nov 29 09:20:03 compute-2 ceph-mon[77138]: pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:03 compute-2 ceph-mon[77138]: pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:03 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/401087936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:20:03.390 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:20:03.391 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:20:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:20:03.391 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:20:03 compute-2 nova_compute[232428]: 2025-11-29 09:20:03.965 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:04 compute-2 ceph-mon[77138]: pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.403 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.404 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.634 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing inventories for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.738 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating ProviderTree inventory for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.738 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Updating inventory in ProviderTree for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.759 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing aggregate associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.791 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Refreshing trait associations for resource provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2, traits: COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 09:20:05 compute-2 nova_compute[232428]: 2025-11-29 09:20:05.842 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:20:06 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:20:06 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2510269755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:06 compute-2 nova_compute[232428]: 2025-11-29 09:20:06.248 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:20:06 compute-2 nova_compute[232428]: 2025-11-29 09:20:06.255 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:20:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:06.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2812326397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:06 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2510269755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:06.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:07 compute-2 nova_compute[232428]: 2025-11-29 09:20:07.504 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:07 compute-2 ceph-mon[77138]: pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:08.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:08.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:08 compute-2 nova_compute[232428]: 2025-11-29 09:20:08.844 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:20:08 compute-2 nova_compute[232428]: 2025-11-29 09:20:08.845 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:20:08 compute-2 nova_compute[232428]: 2025-11-29 09:20:08.846 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:20:08 compute-2 ceph-mon[77138]: pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:08 compute-2 nova_compute[232428]: 2025-11-29 09:20:08.967 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:09 compute-2 nova_compute[232428]: 2025-11-29 09:20:09.846 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:09 compute-2 nova_compute[232428]: 2025-11-29 09:20:09.847 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:09 compute-2 nova_compute[232428]: 2025-11-29 09:20:09.847 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:09 compute-2 nova_compute[232428]: 2025-11-29 09:20:09.847 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:20:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1075469183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1008334526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:10.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:10 compute-2 sudo[357001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:10 compute-2 sudo[357001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:10 compute-2 sudo[357001]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:10 compute-2 sudo[357026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:10 compute-2 sudo[357026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:10 compute-2 sudo[357026]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:10.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:10 compute-2 ceph-mon[77138]: pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1317951223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:12 compute-2 nova_compute[232428]: 2025-11-29 09:20:12.507 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:12.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:13 compute-2 ceph-mon[77138]: pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:13 compute-2 podman[357052]: 2025-11-29 09:20:13.664050266 +0000 UTC m=+0.064492673 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 09:20:13 compute-2 nova_compute[232428]: 2025-11-29 09:20:13.970 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:14.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:14.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:15 compute-2 nova_compute[232428]: 2025-11-29 09:20:15.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:15 compute-2 ceph-mon[77138]: pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:16 compute-2 ceph-mon[77138]: pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:17 compute-2 nova_compute[232428]: 2025-11-29 09:20:17.508 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:18.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:18 compute-2 nova_compute[232428]: 2025-11-29 09:20:18.971 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:19 compute-2 ceph-mon[77138]: pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:20.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:21 compute-2 ceph-mon[77138]: pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:22 compute-2 nova_compute[232428]: 2025-11-29 09:20:22.511 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:22.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:23 compute-2 ceph-mon[77138]: pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:23 compute-2 nova_compute[232428]: 2025-11-29 09:20:23.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:25 compute-2 ceph-mon[77138]: pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:26.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:26 compute-2 ceph-mon[77138]: pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:26 compute-2 podman[357080]: 2025-11-29 09:20:26.725544249 +0000 UTC m=+0.136172628 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 09:20:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:27 compute-2 nova_compute[232428]: 2025-11-29 09:20:27.514 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:20:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430856772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:20:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:20:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430856772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:20:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:28 compute-2 nova_compute[232428]: 2025-11-29 09:20:28.976 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:29 compute-2 ceph-mon[77138]: pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2430856772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:20:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2430856772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:20:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:30 compute-2 sudo[357108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:30 compute-2 sudo[357108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:30 compute-2 sudo[357108]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:30 compute-2 sudo[357133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:30 compute-2 sudo[357133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:30 compute-2 sudo[357133]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:31 compute-2 ceph-mon[77138]: pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:32 compute-2 nova_compute[232428]: 2025-11-29 09:20:32.518 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:32.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:33 compute-2 podman[357159]: 2025-11-29 09:20:33.644059084 +0000 UTC m=+0.051260730 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 09:20:33 compute-2 ceph-mon[77138]: pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:33 compute-2 nova_compute[232428]: 2025-11-29 09:20:33.977 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:34.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:34 compute-2 ceph-mon[77138]: pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:34.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:36.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:36.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:37 compute-2 ceph-mon[77138]: pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:37 compute-2 nova_compute[232428]: 2025-11-29 09:20:37.521 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:37 compute-2 sshd-session[357182]: Invalid user ubuntu from 45.148.10.240 port 38528
Nov 29 09:20:38 compute-2 sshd-session[357182]: Connection closed by invalid user ubuntu 45.148.10.240 port 38528 [preauth]
Nov 29 09:20:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:38.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:38 compute-2 nova_compute[232428]: 2025-11-29 09:20:38.980 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:39 compute-2 ceph-mon[77138]: pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:40 compute-2 ceph-mon[77138]: pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:40.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:42.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:42 compute-2 nova_compute[232428]: 2025-11-29 09:20:42.524 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:43 compute-2 ceph-mon[77138]: pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:43 compute-2 nova_compute[232428]: 2025-11-29 09:20:43.981 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:44 compute-2 nova_compute[232428]: 2025-11-29 09:20:44.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:44 compute-2 podman[357188]: 2025-11-29 09:20:44.724642368 +0000 UTC m=+0.112676365 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 09:20:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:44.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:45 compute-2 ceph-mon[77138]: pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:46.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:46 compute-2 ceph-mon[77138]: pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:47 compute-2 nova_compute[232428]: 2025-11-29 09:20:47.527 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:47 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Nov 29 09:20:47 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:47.979062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:20:47 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Nov 29 09:20:47 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408047979213, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1675, "num_deletes": 256, "total_data_size": 4001094, "memory_usage": 4047712, "flush_reason": "Manual Compaction"}
Nov 29 09:20:47 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048003676, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 2629222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96762, "largest_seqno": 98432, "table_properties": {"data_size": 2622190, "index_size": 4102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14395, "raw_average_key_size": 19, "raw_value_size": 2608156, "raw_average_value_size": 3582, "num_data_blocks": 181, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407891, "oldest_key_time": 1764407891, "file_creation_time": 1764408047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 24835 microseconds, and 11767 cpu microseconds.
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.003904) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 2629222 bytes OK
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.004008) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.007275) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.007307) EVENT_LOG_v1 {"time_micros": 1764408048007298, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.007368) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3993529, prev total WAL file size 3993529, number of live WAL files 2.
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.009994) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373734' seq:72057594037927935, type:22 .. '6C6F676D0034303237' seq:0, type:0; will stop at (end)
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(2567KB)], [198(11MB)]
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048010056, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14919389, "oldest_snapshot_seqno": -1}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 12291 keys, 14789510 bytes, temperature: kUnknown
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048149417, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14789510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14711938, "index_size": 45715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30789, "raw_key_size": 326257, "raw_average_key_size": 26, "raw_value_size": 14499032, "raw_average_value_size": 1179, "num_data_blocks": 1733, "num_entries": 12291, "num_filter_entries": 12291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764408048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.149763) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14789510 bytes
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.150887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.0 rd, 106.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 12818, records dropped: 527 output_compression: NoCompression
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.150918) EVENT_LOG_v1 {"time_micros": 1764408048150903, "job": 128, "event": "compaction_finished", "compaction_time_micros": 139457, "compaction_time_cpu_micros": 68814, "output_level": 6, "num_output_files": 1, "total_output_size": 14789510, "num_input_records": 12818, "num_output_records": 12291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048151850, "job": 128, "event": "table_file_deletion", "file_number": 200}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048156550, "job": 128, "event": "table_file_deletion", "file_number": 198}
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.009878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.156674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.156683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.156687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.156689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:20:48.156692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:20:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:20:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:20:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:48 compute-2 nova_compute[232428]: 2025-11-29 09:20:48.983 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:48 compute-2 ceph-mon[77138]: pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:49 compute-2 sudo[357210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:49 compute-2 sudo[357210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:49 compute-2 sudo[357210]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:50 compute-2 sudo[357235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:20:50 compute-2 sudo[357235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:50 compute-2 sudo[357235]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:50 compute-2 sudo[357261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:50 compute-2 sudo[357261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:50 compute-2 sudo[357261]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:50 compute-2 sudo[357286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:20:50 compute-2 sudo[357286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:50 compute-2 nova_compute[232428]: 2025-11-29 09:20:50.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:50.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:50 compute-2 sudo[357286]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:50.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:50 compute-2 sudo[357343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:50 compute-2 sudo[357343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:50 compute-2 sudo[357343]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:51 compute-2 sudo[357368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:51 compute-2 sudo[357368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:51 compute-2 sudo[357368]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:51 compute-2 nova_compute[232428]: 2025-11-29 09:20:51.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:51 compute-2 ceph-mon[77138]: pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:20:51 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:20:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:52 compute-2 nova_compute[232428]: 2025-11-29 09:20:52.530 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:52 compute-2 ceph-mon[77138]: pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:52.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:53 compute-2 nova_compute[232428]: 2025-11-29 09:20:53.986 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:54.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:20:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:54.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:20:55 compute-2 ceph-mon[77138]: pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:56 compute-2 nova_compute[232428]: 2025-11-29 09:20:56.786 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:56 compute-2 nova_compute[232428]: 2025-11-29 09:20:56.786 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:20:56 compute-2 nova_compute[232428]: 2025-11-29 09:20:56.786 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:20:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:56.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:56 compute-2 ceph-mon[77138]: pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:20:57 compute-2 nova_compute[232428]: 2025-11-29 09:20:57.447 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:20:57 compute-2 nova_compute[232428]: 2025-11-29 09:20:57.533 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:57 compute-2 podman[357396]: 2025-11-29 09:20:57.697176985 +0000 UTC m=+0.101130686 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:20:57 compute-2 sudo[357422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:20:57 compute-2 sudo[357422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:57 compute-2 sudo[357422]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:58 compute-2 sudo[357447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:20:58 compute-2 sudo[357447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:20:58 compute-2 sudo[357447]: pam_unix(sudo:session): session closed for user root
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.228 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.229 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.229 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.230 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:20:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:20:58 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:20:58 compute-2 ceph-mon[77138]: pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:20:58 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:20:58 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/38362228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.663 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.840 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.841 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4096MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.841 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.842 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:20:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:20:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:20:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.958 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.959 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:20:58 compute-2 nova_compute[232428]: 2025-11-29 09:20:58.982 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:20:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:20:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1538552759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.400 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.407 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.422 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.424 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:20:59 compute-2 nova_compute[232428]: 2025-11-29 09:20:59.424 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:20:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/38362228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:20:59 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1538552759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:00 compute-2 nova_compute[232428]: 2025-11-29 09:21:00.414 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:00 compute-2 nova_compute[232428]: 2025-11-29 09:21:00.414 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:00 compute-2 nova_compute[232428]: 2025-11-29 09:21:00.415 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:00.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:00 compute-2 ceph-mon[77138]: pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:02 compute-2 nova_compute[232428]: 2025-11-29 09:21:02.536 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:03 compute-2 ceph-mon[77138]: pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:03 compute-2 nova_compute[232428]: 2025-11-29 09:21:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:03 compute-2 nova_compute[232428]: 2025-11-29 09:21:03.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:21:03.392 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:21:03.393 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:21:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:21:03.393 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:21:03 compute-2 nova_compute[232428]: 2025-11-29 09:21:03.989 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:04.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:04 compute-2 podman[357520]: 2025-11-29 09:21:04.66537982 +0000 UTC m=+0.064634507 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 09:21:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:05 compute-2 ceph-mon[77138]: pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:06.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:07 compute-2 ceph-mon[77138]: pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:07 compute-2 nova_compute[232428]: 2025-11-29 09:21:07.539 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/100528066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3738230363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:08.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:08 compute-2 nova_compute[232428]: 2025-11-29 09:21:08.991 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:09 compute-2 ceph-mon[77138]: pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3737279369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2992879480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:10.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:11 compute-2 sudo[357544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:11 compute-2 sudo[357544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:11 compute-2 sudo[357544]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:11 compute-2 sudo[357569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:11 compute-2 sudo[357569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:11 compute-2 sudo[357569]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:11 compute-2 ceph-mon[77138]: pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:12 compute-2 nova_compute[232428]: 2025-11-29 09:21:12.542 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:12.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:13 compute-2 ceph-mon[77138]: pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:13 compute-2 nova_compute[232428]: 2025-11-29 09:21:13.993 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:21:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:14.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:21:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:14.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:15 compute-2 ceph-mon[77138]: pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:15 compute-2 podman[357596]: 2025-11-29 09:21:15.671143782 +0000 UTC m=+0.067776915 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 09:21:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:21:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:16.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:21:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:17 compute-2 ceph-mon[77138]: pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:17 compute-2 nova_compute[232428]: 2025-11-29 09:21:17.545 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.002000063s ======
Nov 29 09:21:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Nov 29 09:21:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:18 compute-2 nova_compute[232428]: 2025-11-29 09:21:18.996 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:19 compute-2 ceph-mon[77138]: pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:20.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:21 compute-2 ceph-mon[77138]: pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:22 compute-2 ceph-mon[77138]: pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:22 compute-2 nova_compute[232428]: 2025-11-29 09:21:22.549 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:23 compute-2 nova_compute[232428]: 2025-11-29 09:21:23.998 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:25 compute-2 ceph-mon[77138]: pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:26.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:27 compute-2 ceph-mon[77138]: pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:27 compute-2 nova_compute[232428]: 2025-11-29 09:21:27.551 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:21:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231426213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:21:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:21:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231426213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:21:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2231426213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:21:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/2231426213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:21:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:28.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:28 compute-2 podman[357623]: 2025-11-29 09:21:28.663107756 +0000 UTC m=+0.074577737 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 09:21:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:28.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:29 compute-2 nova_compute[232428]: 2025-11-29 09:21:29.000 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:29 compute-2 ceph-mon[77138]: pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:31 compute-2 sudo[357650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:31 compute-2 sudo[357650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:31 compute-2 sudo[357650]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:31 compute-2 sudo[357675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:31 compute-2 ceph-mon[77138]: pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:31 compute-2 sudo[357675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:31 compute-2 sudo[357675]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:21:32 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.3 total, 600.0 interval
                                           Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1540 writes, 7616 keys, 1540 commit groups, 1.0 writes per commit group, ingest: 16.23 MB, 0.03 MB/s
                                           Interval WAL: 1540 writes, 1540 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     36.2      3.41              0.56        64    0.053       0      0       0.0       0.0
                                             L6      1/0   14.10 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6     82.5     71.2      9.64              2.57        63    0.153    548K    33K       0.0       0.0
                                            Sum      1/0   14.10 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     61.0     62.1     13.05              3.13       127    0.103    548K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     98.9    102.1      0.73              0.31        10    0.073     61K   2557       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     82.5     71.2      9.64              2.57        63    0.153    548K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     37.2      3.32              0.56        63    0.053       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.085       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.121, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.79 GB write, 0.10 MB/s write, 0.78 GB read, 0.10 MB/s read, 13.1 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a34d5511f0#2 capacity: 304.00 MB usage: 89.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000607 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5325,85.68 MB,28.1833%) FilterBlock(127,1.47 MB,0.484923%) IndexBlock(127,2.36 MB,0.776577%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 09:21:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:32.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:32 compute-2 nova_compute[232428]: 2025-11-29 09:21:32.555 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:33 compute-2 ceph-mon[77138]: pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:34 compute-2 nova_compute[232428]: 2025-11-29 09:21:34.001 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:34.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:35 compute-2 ceph-mon[77138]: pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:35 compute-2 podman[357702]: 2025-11-29 09:21:35.649187859 +0000 UTC m=+0.054744040 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 09:21:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:36.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:36.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:37 compute-2 ceph-mon[77138]: pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:37 compute-2 nova_compute[232428]: 2025-11-29 09:21:37.558 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:38 compute-2 ceph-mon[77138]: pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:38.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:39 compute-2 nova_compute[232428]: 2025-11-29 09:21:39.004 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:40.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:41 compute-2 ceph-mon[77138]: pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:42 compute-2 nova_compute[232428]: 2025-11-29 09:21:42.561 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:43 compute-2 ceph-mon[77138]: pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:44 compute-2 nova_compute[232428]: 2025-11-29 09:21:44.006 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:21:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:21:45 compute-2 nova_compute[232428]: 2025-11-29 09:21:45.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:45 compute-2 ceph-mon[77138]: pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:46 compute-2 podman[357727]: 2025-11-29 09:21:46.654393212 +0000 UTC m=+0.065019148 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 09:21:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:46.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:47 compute-2 ceph-mon[77138]: pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:47 compute-2 nova_compute[232428]: 2025-11-29 09:21:47.563 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:49 compute-2 nova_compute[232428]: 2025-11-29 09:21:49.009 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:49 compute-2 ceph-mon[77138]: pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:50 compute-2 nova_compute[232428]: 2025-11-29 09:21:50.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:21:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:50.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:21:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:50.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:51 compute-2 ceph-mon[77138]: pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:51 compute-2 sudo[357749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:51 compute-2 sudo[357749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:51 compute-2 sudo[357749]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:51 compute-2 sudo[357774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:51 compute-2 sudo[357774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:51 compute-2 sudo[357774]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:52 compute-2 nova_compute[232428]: 2025-11-29 09:21:52.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:52.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:52 compute-2 nova_compute[232428]: 2025-11-29 09:21:52.566 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:52.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:53 compute-2 ceph-mon[77138]: pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:54 compute-2 nova_compute[232428]: 2025-11-29 09:21:54.011 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:21:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:54.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:21:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:55 compute-2 ceph-mon[77138]: pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:56.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:56 compute-2 ceph-mon[77138]: pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:56.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:21:57 compute-2 nova_compute[232428]: 2025-11-29 09:21:57.569 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:58 compute-2 sudo[357803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:58 compute-2 sudo[357803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:58 compute-2 sudo[357803]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:58 compute-2 sudo[357828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:21:58 compute-2 sudo[357828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:58 compute-2 sudo[357828]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:58 compute-2 nova_compute[232428]: 2025-11-29 09:21:58.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:58 compute-2 nova_compute[232428]: 2025-11-29 09:21:58.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:21:58 compute-2 nova_compute[232428]: 2025-11-29 09:21:58.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:21:58 compute-2 sudo[357853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:21:58 compute-2 nova_compute[232428]: 2025-11-29 09:21:58.225 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:21:58 compute-2 sudo[357853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:58 compute-2 sudo[357853]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:58 compute-2 sudo[357878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:21:58 compute-2 sudo[357878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:21:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:58.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:58 compute-2 sudo[357878]: pam_unix(sudo:session): session closed for user root
Nov 29 09:21:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:21:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:21:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.013 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:21:59 compute-2 ceph-mon[77138]: pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:21:59 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.235 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.236 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.236 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.236 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:21:59 compute-2 podman[357955]: 2025-11-29 09:21:59.682614055 +0000 UTC m=+0.082029640 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 09:21:59 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:21:59 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1583657514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.702 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.848 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.849 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4105MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.850 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.850 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.988 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:21:59 compute-2 nova_compute[232428]: 2025-11-29 09:21:59.988 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.038 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:22:00 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1583657514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.243303) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120243376, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 942, "num_deletes": 251, "total_data_size": 1913914, "memory_usage": 1946880, "flush_reason": "Manual Compaction"}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120251681, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 1262501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98438, "largest_seqno": 99374, "table_properties": {"data_size": 1258121, "index_size": 2031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9610, "raw_average_key_size": 19, "raw_value_size": 1249427, "raw_average_value_size": 2560, "num_data_blocks": 90, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764408048, "oldest_key_time": 1764408048, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 8388 microseconds, and 4018 cpu microseconds.
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.251717) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 1262501 bytes OK
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.251736) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253280) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253295) EVENT_LOG_v1 {"time_micros": 1764408120253289, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253326) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1909217, prev total WAL file size 1909217, number of live WAL files 2.
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(1232KB)], [201(14MB)]
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120254008, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 16052011, "oldest_snapshot_seqno": -1}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 12264 keys, 14013960 bytes, temperature: kUnknown
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120342967, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 14013960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13937244, "index_size": 44923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 326405, "raw_average_key_size": 26, "raw_value_size": 13725244, "raw_average_value_size": 1119, "num_data_blocks": 1696, "num_entries": 12264, "num_filter_entries": 12264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400291, "oldest_key_time": 0, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62cfe9d7-b838-48ed-bc7b-9412d6dcca65", "db_session_id": "FV6EMGUAMR2UK13SF2XC", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.343235) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14013960 bytes
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.344456) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.3 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 12779, records dropped: 515 output_compression: NoCompression
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.344478) EVENT_LOG_v1 {"time_micros": 1764408120344468, "job": 130, "event": "compaction_finished", "compaction_time_micros": 89048, "compaction_time_cpu_micros": 32991, "output_level": 6, "num_output_files": 1, "total_output_size": 14013960, "num_input_records": 12779, "num_output_records": 12264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120344813, "job": 130, "event": "table_file_deletion", "file_number": 203}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120347783, "job": 130, "event": "table_file_deletion", "file_number": 201}
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.347851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.347855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.347857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.347859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: rocksdb: (Original Log Time 2025/11/29-09:22:00.347861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 09:22:00 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:22:00 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2914028186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.460 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.469 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.494 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.496 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:22:00 compute-2 nova_compute[232428]: 2025-11-29 09:22:00.497 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:22:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:00.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:00.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:01 compute-2 ceph-mon[77138]: pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:01 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2914028186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:01 compute-2 nova_compute[232428]: 2025-11-29 09:22:01.488 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:01 compute-2 nova_compute[232428]: 2025-11-29 09:22:01.489 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:02 compute-2 nova_compute[232428]: 2025-11-29 09:22:02.572 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:02.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:03 compute-2 nova_compute[232428]: 2025-11-29 09:22:03.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:03 compute-2 nova_compute[232428]: 2025-11-29 09:22:03.201 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:22:03 compute-2 ceph-mon[77138]: pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:22:03.392 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:22:03.393 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:22:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:22:03.393 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:22:04 compute-2 nova_compute[232428]: 2025-11-29 09:22:04.014 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:04.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:04.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:05 compute-2 ceph-mon[77138]: pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:22:05 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:22:05 compute-2 sudo[358009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:05 compute-2 sudo[358009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:05 compute-2 sudo[358009]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:05 compute-2 sudo[358034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:22:05 compute-2 sudo[358034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:05 compute-2 sudo[358034]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:06 compute-2 podman[358060]: 2025-11-29 09:22:06.638356641 +0000 UTC m=+0.048846534 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:22:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:06.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:07 compute-2 ceph-mon[77138]: pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:07 compute-2 nova_compute[232428]: 2025-11-29 09:22:07.575 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:09 compute-2 nova_compute[232428]: 2025-11-29 09:22:09.017 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:09 compute-2 ceph-mon[77138]: pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3008762521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:10 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1288147277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:10.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:11 compute-2 ceph-mon[77138]: pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/179083472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:11 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/723768468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:22:11 compute-2 sudo[358082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:11 compute-2 sudo[358082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:11 compute-2 sudo[358082]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:11 compute-2 sudo[358107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:11 compute-2 sudo[358107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:11 compute-2 sudo[358107]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:12 compute-2 nova_compute[232428]: 2025-11-29 09:22:12.578 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:13 compute-2 ceph-mon[77138]: pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:14 compute-2 nova_compute[232428]: 2025-11-29 09:22:14.018 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:14 compute-2 ceph-mon[77138]: pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:14.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:14.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:17 compute-2 ceph-mon[77138]: pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:17 compute-2 nova_compute[232428]: 2025-11-29 09:22:17.581 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:17 compute-2 podman[358135]: 2025-11-29 09:22:17.648885132 +0000 UTC m=+0.055446300 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:22:18 compute-2 nova_compute[232428]: 2025-11-29 09:22:18.192 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:18.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:22:18 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 79K writes, 311K keys, 79K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s
                                           Cumulative WAL: 79K writes, 29K syncs, 2.66 writes per sync, written: 0.31 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 3524 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 2.73 MB, 0.00 MB/s
                                           Interval WAL: 1285 writes, 551 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:22:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:18.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:19 compute-2 nova_compute[232428]: 2025-11-29 09:22:19.020 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:19 compute-2 ceph-mon[77138]: pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:20.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:21 compute-2 ceph-mon[77138]: pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:22 compute-2 nova_compute[232428]: 2025-11-29 09:22:22.584 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:22 compute-2 ceph-mon[77138]: pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:22.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:24 compute-2 nova_compute[232428]: 2025-11-29 09:22:24.023 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:24.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:24.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:25 compute-2 ceph-mon[77138]: pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:26.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:26.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:27 compute-2 ceph-mon[77138]: pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:27 compute-2 nova_compute[232428]: 2025-11-29 09:22:27.587 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 09:22:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3973164874' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:22:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 09:22:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3973164874' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:22:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:29 compute-2 nova_compute[232428]: 2025-11-29 09:22:29.025 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:29 compute-2 ceph-mon[77138]: pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3973164874' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:22:29 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3973164874' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:22:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:30.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:30 compute-2 podman[358163]: 2025-11-29 09:22:30.699173756 +0000 UTC m=+0.102914091 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:22:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:31 compute-2 ceph-mon[77138]: pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:31 compute-2 sudo[358190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:31 compute-2 sudo[358190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:31 compute-2 sudo[358190]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:31 compute-2 sudo[358215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:31 compute-2 sudo[358215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:31 compute-2 sudo[358215]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:32.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:32 compute-2 nova_compute[232428]: 2025-11-29 09:22:32.591 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:33 compute-2 ceph-mon[77138]: pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:34 compute-2 nova_compute[232428]: 2025-11-29 09:22:34.026 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:34 compute-2 ceph-mon[77138]: pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:34.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:35.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:37 compute-2 ceph-mon[77138]: pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:37 compute-2 nova_compute[232428]: 2025-11-29 09:22:37.592 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:37 compute-2 podman[358243]: 2025-11-29 09:22:37.635339094 +0000 UTC m=+0.046715988 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 09:22:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:39.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:39 compute-2 nova_compute[232428]: 2025-11-29 09:22:39.028 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:39 compute-2 ceph-mon[77138]: pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:40.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:41 compute-2 ceph-mon[77138]: pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:42.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:42 compute-2 nova_compute[232428]: 2025-11-29 09:22:42.596 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:43.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:43 compute-2 ceph-mon[77138]: pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:44 compute-2 nova_compute[232428]: 2025-11-29 09:22:44.030 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:44.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:44 compute-2 ceph-mon[77138]: pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:45.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:46.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:47.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:47 compute-2 nova_compute[232428]: 2025-11-29 09:22:47.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:47 compute-2 ceph-mon[77138]: pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:47 compute-2 nova_compute[232428]: 2025-11-29 09:22:47.599 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:48.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:48 compute-2 podman[358269]: 2025-11-29 09:22:48.647143805 +0000 UTC m=+0.054724748 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 29 09:22:48 compute-2 ceph-mon[77138]: pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:49.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:49 compute-2 nova_compute[232428]: 2025-11-29 09:22:49.032 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:49 compute-2 nova_compute[232428]: 2025-11-29 09:22:49.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:50 compute-2 sshd-session[358289]: Invalid user ubuntu from 45.148.10.240 port 57906
Nov 29 09:22:50 compute-2 sshd-session[358289]: Connection closed by invalid user ubuntu 45.148.10.240 port 57906 [preauth]
Nov 29 09:22:50 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:50 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:50 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:51 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:51 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:51 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:51 compute-2 ceph-mon[77138]: pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:51 compute-2 sudo[358292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:51 compute-2 sudo[358292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:51 compute-2 sudo[358292]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:51 compute-2 sudo[358317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:22:51 compute-2 sudo[358317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:22:51 compute-2 sudo[358317]: pam_unix(sudo:session): session closed for user root
Nov 29 09:22:52 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:52 compute-2 nova_compute[232428]: 2025-11-29 09:22:52.545 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:52 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:52 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:52 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:52 compute-2 nova_compute[232428]: 2025-11-29 09:22:52.602 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:52 compute-2 ceph-mon[77138]: pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:53 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:53 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:53 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:53.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:53 compute-2 nova_compute[232428]: 2025-11-29 09:22:53.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:54 compute-2 nova_compute[232428]: 2025-11-29 09:22:54.034 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:54 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:54 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:54 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:55 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:55 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:55 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:55 compute-2 ceph-mon[77138]: pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:22:56 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:56 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:56 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:56 compute-2 ceph-mon[77138]: pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 09:22:57 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:57 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:22:57 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:57.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:22:57 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:22:57 compute-2 nova_compute[232428]: 2025-11-29 09:22:57.605 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:58 compute-2 nova_compute[232428]: 2025-11-29 09:22:58.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:22:58 compute-2 nova_compute[232428]: 2025-11-29 09:22:58.202 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 09:22:58 compute-2 nova_compute[232428]: 2025-11-29 09:22:58.203 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 09:22:58 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:58 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:22:58 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:22:59 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:22:59 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:22:59 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:59.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:22:59 compute-2 nova_compute[232428]: 2025-11-29 09:22:59.036 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:22:59 compute-2 nova_compute[232428]: 2025-11-29 09:22:59.505 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 09:22:59 compute-2 nova_compute[232428]: 2025-11-29 09:22:59.505 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:00 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:00 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:00 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:01 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:01 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:01 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:01 compute-2 nova_compute[232428]: 2025-11-29 09:23:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:01 compute-2 nova_compute[232428]: 2025-11-29 09:23:01.201 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:01 compute-2 nova_compute[232428]: 2025-11-29 09:23:01.202 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:01 compute-2 podman[358348]: 2025-11-29 09:23:01.670179978 +0000 UTC m=+0.078638844 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 09:23:01 compute-2 ceph-mon[77138]: pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 09:23:02 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:02 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:02 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:02 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:02 compute-2 nova_compute[232428]: 2025-11-29 09:23:02.607 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:02 compute-2 ceph-mon[77138]: pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 09:23:02 compute-2 ceph-mon[77138]: pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 09:23:03 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:03 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:03 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:03.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:03.393 143801 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:03.394 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:23:03 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:03.394 143801 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:23:04 compute-2 nova_compute[232428]: 2025-11-29 09:23:04.039 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:04 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:04 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:04 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:05 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:05 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:05 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:05.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:05 compute-2 ceph-mon[77138]: pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 29 09:23:05 compute-2 sudo[358378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:05 compute-2 sudo[358378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:05 compute-2 sudo[358378]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:05 compute-2 sudo[358403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 09:23:05 compute-2 sudo[358403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:05 compute-2 sudo[358403]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:05 compute-2 sudo[358428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:05 compute-2 sudo[358428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:05 compute-2 sudo[358428]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:05 compute-2 sudo[358453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 09:23:05 compute-2 sudo[358453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:06 compute-2 sudo[358453]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:06 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:06 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:06 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:07 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:07 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:07 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:07 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:07 compute-2 ceph-mon[77138]: pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.595 232432 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.627 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.633 232432 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] CMD "env LANG=C uptime" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.702 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.702 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.703 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.703 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 09:23:07 compute-2 nova_compute[232428]: 2025-11-29 09:23:07.703 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:23:08 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:23:08 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3012031455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.133 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.341 232432 WARNING nova.virt.libvirt.driver [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.343 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.343 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.344 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3012031455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:08 compute-2 ceph-mon[77138]: pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 09:23:08 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.552 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.552 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 09:23:08 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:08 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:08 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:08 compute-2 nova_compute[232428]: 2025-11-29 09:23:08.597 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 09:23:08 compute-2 podman[358534]: 2025-11-29 09:23:08.649285253 +0000 UTC m=+0.053776929 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.041 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:09 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 09:23:09 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1721122749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:09 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:09 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:09 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:09.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.066 232432 DEBUG oslo_concurrency.processutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.072 232432 DEBUG nova.compute.provider_tree [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed in ProviderTree for provider: 77f31ad1-818f-4610-8dd1-3fbcd25133f2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.219 232432 DEBUG nova.scheduler.client.report [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Inventory has not changed for provider 77f31ad1-818f-4610-8dd1-3fbcd25133f2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.221 232432 DEBUG nova.compute.resource_tracker [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 09:23:09 compute-2 nova_compute[232428]: 2025-11-29 09:23:09.222 232432 DEBUG oslo_concurrency.lockutils [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 09:23:09 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1721122749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:10 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:10 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:10 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:10 compute-2 ceph-mon[77138]: pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Nov 29 09:23:11 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:11 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:11 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:11.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:11 compute-2 sudo[358579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:11 compute-2 sudo[358579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:11 compute-2 sudo[358579]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:12 compute-2 sudo[358604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:12 compute-2 sudo[358604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:12 compute-2 sudo[358604]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/786460659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:12 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/901617796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:12 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:12 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:12 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:12 compute-2 nova_compute[232428]: 2025-11-29 09:23:12.630 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:12 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:13 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:13 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:13 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:13.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:13 compute-2 nova_compute[232428]: 2025-11-29 09:23:13.222 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:13 compute-2 nova_compute[232428]: 2025-11-29 09:23:13.223 232432 DEBUG nova.compute.manager [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 09:23:13 compute-2 ceph-mon[77138]: pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s
Nov 29 09:23:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3685331155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:13 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1672143075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 09:23:14 compute-2 nova_compute[232428]: 2025-11-29 09:23:14.044 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:14 compute-2 sudo[358631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:14 compute-2 sudo[358631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:14 compute-2 sudo[358631]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:14 compute-2 sudo[358656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 09:23:14 compute-2 sudo[358656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:14 compute-2 sudo[358656]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:14 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:14 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:14 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:15 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:15 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:15 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:15.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:23:15 compute-2 ceph-mon[77138]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 09:23:15 compute-2 ceph-mon[77138]: pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 114 op/s
Nov 29 09:23:16 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:16 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:16 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:16 compute-2 ceph-mon[77138]: pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 125 op/s
Nov 29 09:23:17 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:17 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:17 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:17.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:17 compute-2 nova_compute[232428]: 2025-11-29 09:23:17.632 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:17 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:18 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:18 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:18 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:18.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:18 compute-2 ceph-mon[77138]: pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 09:23:18 compute-2 sshd-session[358683]: Accepted publickey for zuul from 192.168.122.10 port 41604 ssh2: ECDSA SHA256:wGP93gkN+kzmvKTvoR3y45htPaQhPvR9XiEuGcAv0l8
Nov 29 09:23:18 compute-2 systemd-logind[787]: New session 63 of user zuul.
Nov 29 09:23:18 compute-2 systemd[1]: Started Session 63 of User zuul.
Nov 29 09:23:18 compute-2 sshd-session[358683]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 09:23:18 compute-2 podman[358685]: 2025-11-29 09:23:18.869276427 +0000 UTC m=+0.061117668 container health_status 7d4a453e91d55df1d0b9514ba239f4af0a82da8be55df6428a895222150a2706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 09:23:18 compute-2 sudo[358707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 09:23:18 compute-2 sudo[358707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 09:23:19 compute-2 nova_compute[232428]: 2025-11-29 09:23:19.047 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:19 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:19 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:19 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:19.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:19.144 143801 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 09:23:19 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:19.145 143801 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 09:23:19 compute-2 nova_compute[232428]: 2025-11-29 09:23:19.145 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:19 compute-2 ceph-mon[77138]: from='client.51773 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:19 compute-2 ceph-mon[77138]: from='client.45093 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:20 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:20 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:20 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:20.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:20 compute-2 ceph-mon[77138]: from='client.51788 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:20 compute-2 ceph-mon[77138]: from='client.45102 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1018952352' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:23:20 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4180975379' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:23:20 compute-2 ceph-mon[77138]: pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 09:23:21 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:21 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:21 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:22 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:22 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:22 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:22.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:22 compute-2 nova_compute[232428]: 2025-11-29 09:23:22.633 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:22 compute-2 ceph-mon[77138]: from='client.48616 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:22 compute-2 ceph-mon[77138]: pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 09:23:22 compute-2 ceph-mon[77138]: from='client.48622 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 09:23:22 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3791633360' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:23:22 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:23 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:23 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:23 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:23.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:23 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3791633360' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 09:23:24 compute-2 nova_compute[232428]: 2025-11-29 09:23:24.050 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:24 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:24 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:24 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:25 compute-2 ceph-mon[77138]: pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 09:23:25 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:25 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:25 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:25.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:25 compute-2 ovs-vsctl[358992]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.51815 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.45123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2194017178' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3493943781' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3759730485' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:26 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/306902972' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:26 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:26 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:26 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:26.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:26 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 09:23:26 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 09:23:26 compute-2 virtqemud[231977]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 09:23:27 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:27 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:27 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:27.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:27 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: cache status {prefix=cache status} (starting...)
Nov 29 09:23:27 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: client ls {prefix=client ls} (starting...)
Nov 29 09:23:27 compute-2 lvm[359350]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 09:23:27 compute-2 lvm[359350]: VG ceph_vg0 finished
Nov 29 09:23:27 compute-2 nova_compute[232428]: 2025-11-29 09:23:27.635 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.51833 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.45138 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1263854760' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.51866 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2264060084' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.45168 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4114640580' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/111379511' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/213769437' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2037240265' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.45192 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1044149323' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:27 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 09:23:28 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 09:23:28 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1893329507' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 09:23:28 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:28 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:28 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:28.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.51908 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1831362939' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.45207 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4129492656' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.51932 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/815208781' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/735868659' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.48667 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/694171894' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1635987375' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3628530622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.10:0/3628530622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1893329507' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2105236450' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2532790915' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3480468546' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 09:23:28 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 09:23:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 09:23:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3401100162' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:29 compute-2 nova_compute[232428]: 2025-11-29 09:23:29.051 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:29 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:29 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:29 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:29.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 09:23:29 compute-2 ovn_metadata_agent[143796]: 2025-11-29 09:23:29.147 143801 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8abfd39-a629-4854-b6ed-e2d68f35f5fb, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 09:23:29 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: ops {prefix=ops} (starting...)
Nov 29 09:23:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 09:23:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/692117389' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:23:29 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 09:23:29 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1470502285' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.48694 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1013019184' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.51998 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1147489542' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3401100162' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.52010 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/830609869' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/493985352' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/692117389' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3724313417' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2502860335' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/913395531' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2248402355' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1922656788' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 09:23:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/111659149' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:23:30 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: session ls {prefix=session ls} (starting...)
Nov 29 09:23:30 compute-2 ceph-mds[83773]: mds.cephfs.compute-2.fwjrvc asok_command: status {prefix=status} (starting...)
Nov 29 09:23:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 09:23:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3012055867' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:30 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:30 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:30 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:30.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:30 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 09:23:30 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/990368843' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:31 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:31 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:31 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:31.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 09:23:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/520566761' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.48724 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1470502285' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.52049 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/111659149' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3906609404' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2598535447' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.45312 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.52067 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.48751 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3012055867' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2161853771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2352921293' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/990368843' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1495824373' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:31 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 09:23:31 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1681191078' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 09:23:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/521672976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:23:32 compute-2 sudo[359848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:32 compute-2 sudo[359848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:32 compute-2 sudo[359848]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:32 compute-2 sudo[359889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 09:23:32 compute-2 sudo[359889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 09:23:32 compute-2 sudo[359889]: pam_unix(sudo:session): session closed for user root
Nov 29 09:23:32 compute-2 podman[359886]: 2025-11-29 09:23:32.268102941 +0000 UTC m=+0.099812375 container health_status d4bf7c2c5844b7b8d705cabea054421953eb42882e44975b35f79f9128af3b09 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 09:23:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 09:23:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4294472269' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:32 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:32 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:32 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:32.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:32 compute-2 nova_compute[232428]: 2025-11-29 09:23:32.637 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 09:23:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3332098037' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.45321 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.52085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.48760 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.45342 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.52103 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1578149916' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.45357 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3416208665' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/520566761' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.52118 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3861266032' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.45375 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3439110020' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.52139 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1681191078' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/521672976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3913738379' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.45396 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/28272599' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.52157 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/4294472269' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3944346860' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:23:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:32 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 09:23:32 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2760050205' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:23:33 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:33 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:33 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 09:23:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3332299166' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 09:23:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3801949607' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x5587731d6c00 session 0x5587779572c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:42.311686+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530685952 unmapped: 100777984 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db83000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55877db83000 session 0x558779dfde00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55878c1a9000 session 0x55877b487c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 ms_handle_reset con 0x55878d63d400 session 0x5587740d5860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:43.311835+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 heartbeat osd_stat(store_statfs(0x197ab2000/0x0/0x1bfc00000, data 0x47d9a87/0x4a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 heartbeat osd_stat(store_statfs(0x197ab2000/0x0/0x1bfc00000, data 0x47d9a87/0x4a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:44.312003+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:45.312206+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5657797 data_alloc: 234881024 data_used: 35184640
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:46.312363+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530694144 unmapped: 100769792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.155641556s of 12.725331306s, submitted: 134
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:47.312576+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530702336 unmapped: 100761600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:48.312721+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 73K writes, 289K keys, 73K commit groups, 1.0 writes per commit group, ingest: 0.29 GB, 0.05 MB/s
                                           Cumulative WAL: 73K writes, 27K syncs, 2.69 writes per sync, written: 0.29 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5186 writes, 21K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 26.25 MB, 0.04 MB/s
                                           Interval WAL: 5186 writes, 1999 syncs, 2.59 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558771beb350#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 100728832 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x197ab0000/0x0/0x1bfc00000, data 0x47db5c6/0x4a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:49.312910+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558773616b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770d6400 session 0x558778aa3680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558778312400 session 0x558774471c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 100728832 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558779334c00 session 0x558775841680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d6400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587731d6400 session 0x558775f9f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x5587735ac5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:50.313059+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558778aa23c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770d6400 session 0x558774471a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558778312400 session 0x558778aa2960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5662233 data_alloc: 234881024 data_used: 35192832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:51.313234+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x197aaf000/0x0/0x1bfc00000, data 0x47db638/0x4a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:52.313394+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100720640 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:53.313590+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535273472 unmapped: 96190464 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:54.313743+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196e3c000/0x0/0x1bfc00000, data 0x5448638/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 95690752 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558775d09e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:55.313902+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558775c3fa40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 95690752 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5771441 data_alloc: 251658240 data_used: 36446208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:56.314034+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x558775f9f860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558779c432c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d8a000/0x0/0x1bfc00000, data 0x54fa638/0x572e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 95543296 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d66000/0x0/0x1bfc00000, data 0x551e638/0x5752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:57.314182+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.980627060s of 10.456892014s, submitted: 174
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535937024 unmapped: 95526912 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:58.314403+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535953408 unmapped: 95510528 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:51:59.314583+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d66000/0x0/0x1bfc00000, data 0x551e638/0x5752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:00.314725+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5838057 data_alloc: 251658240 data_used: 45514752
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:01.314877+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d48000/0x0/0x1bfc00000, data 0x5542638/0x5776000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:02.315048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:03.315253+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:04.315421+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:05.315553+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d48000/0x0/0x1bfc00000, data 0x5542638/0x5776000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 92577792 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5838057 data_alloc: 251658240 data_used: 45514752
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:06.315694+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:07.315931+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:08.316124+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:09.316281+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196d3e000/0x0/0x1bfc00000, data 0x554c638/0x5780000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 92569600 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:10.316438+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.751883507s of 12.793128967s, submitted: 10
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 88932352 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5897192 data_alloc: 251658240 data_used: 47095808
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:11.316611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542752768 unmapped: 88711168 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:12.316761+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 542752768 unmapped: 88711168 heap: 631463936 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:13.316952+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x55878c1a9000 session 0x5587779574a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dbc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770dbc00 session 0x558775d5c780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x5587745003c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x196978000/0x0/0x1bfc00000, data 0x5912638/0x5b46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x558773229680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 554729472 unmapped: 80936960 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558776fd5c00 session 0x558773233a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:14.317116+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540835840 unmapped: 94830592 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:15.317262+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540835840 unmapped: 94830592 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6023022 data_alloc: 251658240 data_used: 47476736
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:16.317421+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:17.317669+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x55878c1a4400 session 0x558773205860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:18.317829+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fd800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587754fd800 session 0x55877450e5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195892000/0x0/0x1bfc00000, data 0x69f8638/0x6c2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:19.318021+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x558772b9e800 session 0x558778aa32c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587750d4400 session 0x55877353de00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:20.318247+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540844032 unmapped: 94822400 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6025777 data_alloc: 251658240 data_used: 47501312
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:21.318438+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547012608 unmapped: 88653824 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.476983070s of 11.754078865s, submitted: 40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:22.318597+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 87195648 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:23.318757+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195890000/0x0/0x1bfc00000, data 0x69f866b/0x6c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548544512 unmapped: 87121920 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:24.318919+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 87040000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:25.319146+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6121297 data_alloc: 268435456 data_used: 57442304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:26.319352+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x19588f000/0x0/0x1bfc00000, data 0x69f866b/0x6c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:27.319585+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:28.319818+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 86999040 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:29.320048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 86990848 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:30.320212+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 86990848 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6121405 data_alloc: 268435456 data_used: 57438208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:31.320365+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x195888000/0x0/0x1bfc00000, data 0x69fd66b/0x6c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 86982656 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:32.320569+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.082427025s of 10.630439758s, submitted: 254
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549421056 unmapped: 86245376 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:33.320787+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 86016000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:34.320972+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:35.321132+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x194974000/0x0/0x1bfc00000, data 0x790c66b/0x7b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6242787 data_alloc: 268435456 data_used: 57643008
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:36.321296+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551018496 unmapped: 84647936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:37.321572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551149568 unmapped: 84516864 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:38.321750+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dac00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 551149568 unmapped: 84516864 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 ms_handle_reset con 0x5587770dac00 session 0x5587758ded20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:39.321962+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 heartbeat osd_stat(store_statfs(0x194974000/0x0/0x1bfc00000, data 0x790c66b/0x7b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558775890c00 session 0x55877b486d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558785e47000 session 0x558775f9e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x5587770da800 session 0x558775f865a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558833664 unmapped: 76832768 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 421 ms_handle_reset con 0x558772b9e800 session 0x558775d08960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:40.322218+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 76800000 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6401963 data_alloc: 268435456 data_used: 62681088
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 422 ms_handle_reset con 0x5587750d4400 session 0x5587758de1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:41.322418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556187648 unmapped: 79478784 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772a0e000 session 0x55877c2c7a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 heartbeat osd_stat(store_statfs(0x19361f000/0x0/0x1bfc00000, data 0x8c62bf6/0x8e9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:42.322581+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558779337800 session 0x558773f2d0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558776fd5c00 session 0x558779c43c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.595304489s of 10.217897415s, submitted: 124
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550486016 unmapped: 85180416 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558779337800 session 0x558775d08b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:43.322719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772a0e000 session 0x5587736ca1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x558772b9e800 session 0x5587744394a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 ms_handle_reset con 0x5587750d4400 session 0x558773f2cb40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:44.322884+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:45.323109+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x558772a0e000 session 0x558775d090e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5913003 data_alloc: 251658240 data_used: 49061888
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:46.323285+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x5587770d6400 session 0x558774471e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 424 ms_handle_reset con 0x558778312400 session 0x55877450e3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550551552 unmapped: 85114880 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x55877435a400 session 0x55877317e3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x558775777c00 session 0x558775db9c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:47.323526+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 heartbeat osd_stat(store_statfs(0x196960000/0x0/0x1bfc00000, data 0x59233d7/0x5b5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x558772a0e000 session 0x558775c621e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 ms_handle_reset con 0x55877435a400 session 0x558775d5cd20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:48.323689+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 heartbeat osd_stat(store_statfs(0x197c18000/0x0/0x1bfc00000, data 0x429b2e0/0x44d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:49.323845+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:50.323969+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5694943 data_alloc: 251658240 data_used: 41340928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:51.324182+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:52.324338+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550625280 unmapped: 85041152 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.754790306s of 10.184646606s, submitted: 160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:53.324506+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558775131c00 session 0x558775f86d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541745152 unmapped: 93921280 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:54.324680+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x198f6b000/0x0/0x1bfc00000, data 0x3318acc/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x198f6b000/0x0/0x1bfc00000, data 0x3318acc/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541745152 unmapped: 93921280 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558777510800 session 0x558775c62000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x5587770d6c00 session 0x558773ede1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:55.324858+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533651456 unmapped: 102014976 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5239563 data_alloc: 218103808 data_used: 10526720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:56.325011+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x1c66acc/0x1e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 102055936 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:57.325249+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 ms_handle_reset con 0x558772a0e000 session 0x558775c62b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:58.325432+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:52:59.325622+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x1c66acc/0x1e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:00.325786+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5238971 data_alloc: 218103808 data_used: 10526720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:01.325954+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:02.326249+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:03.326421+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:04.326594+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:05.326765+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:06.326945+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:07.327144+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:08.327353+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:09.327496+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:10.327666+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:11.327860+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:12.328056+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:13.328263+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:14.328562+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:15.328793+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:16.328972+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:17.329198+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:18.329393+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:19.329589+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:20.329775+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243145 data_alloc: 218103808 data_used: 10534912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:21.329942+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:22.330106+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558775f9ef00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775bd0c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775bd0c00 session 0x558775d5c3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877b4874a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e47c00 session 0x5587744b7a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.615688324s of 29.984188080s, submitted: 76
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:23.330276+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 102047744 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:24.330420+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772a0e000 session 0x5587736814a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:25.330588+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:26.330805+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5339749 data_alloc: 218103808 data_used: 10534912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:27.331033+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7b000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7b000 session 0x55877c2c70e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:28.331197+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750c5c00 session 0x5587744b7c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534151168 unmapped: 101515264 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:29.331416+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db80c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877db80c00 session 0x55877450e780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775bd1c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775bd1c00 session 0x558775c3e3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534093824 unmapped: 101572608 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:30.331530+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534118400 unmapped: 101548032 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:31.331668+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348962 data_alloc: 218103808 data_used: 11583488
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:32.331898+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:33.332149+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:34.332391+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:35.332555+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:36.332734+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5422402 data_alloc: 234881024 data_used: 21983232
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:37.332987+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:38.333213+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:39.333372+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199ad2000/0x0/0x1bfc00000, data 0x27b260b/0x29ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:40.333510+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:41.333692+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5422402 data_alloc: 234881024 data_used: 21983232
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 101539840 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:42.333825+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.275669098s of 19.426235199s, submitted: 29
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 92340224 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:43.333958+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543334400 unmapped: 92332032 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:44.334128+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19872a000/0x0/0x1bfc00000, data 0x3b5160b/0x3d8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:45.334275+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:46.334367+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5597166 data_alloc: 234881024 data_used: 24129536
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:47.334539+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:48.334687+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1986a6000/0x0/0x1bfc00000, data 0x3bd560b/0x3e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:49.334822+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543834112 unmapped: 91832320 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:50.334984+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:51.335142+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5587890 data_alloc: 234881024 data_used: 24129536
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:52.335302+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:53.335472+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868e000/0x0/0x1bfc00000, data 0x3bf660b/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:54.335644+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868e000/0x0/0x1bfc00000, data 0x3bf660b/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:55.335776+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:56.335942+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588210 data_alloc: 234881024 data_used: 24137728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.573825836s of 14.371501923s, submitted: 167
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:57.336177+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:58.336378+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:53:59.336553+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:00.336710+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:01.336902+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588266 data_alloc: 234881024 data_used: 24137728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:02.337075+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:03.337233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198688000/0x0/0x1bfc00000, data 0x3bfc60b/0x3e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:04.337428+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:05.337703+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:06.337879+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588342 data_alloc: 234881024 data_used: 24137728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:07.338116+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198685000/0x0/0x1bfc00000, data 0x3bff60b/0x3e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:08.338451+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:09.338635+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:10.338830+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:11.338969+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5589462 data_alloc: 234881024 data_used: 24166400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:12.339138+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:13.339291+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.385713577s of 16.471719742s, submitted: 4
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19867b000/0x0/0x1bfc00000, data 0x3c0960b/0x3e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:14.339472+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543170560 unmapped: 92495872 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:15.339846+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543178752 unmapped: 92487680 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198679000/0x0/0x1bfc00000, data 0x3c0a60b/0x3e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:16.340665+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5590818 data_alloc: 234881024 data_used: 24166400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:17.341865+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:18.342060+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:19.342251+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543186944 unmapped: 92479488 heap: 635666432 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x558775c3f4a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:20.342424+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ece000/0x0/0x1bfc00000, data 0x43b660b/0x45f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:21.342993+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5648916 data_alloc: 234881024 data_used: 24166400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec5000/0x0/0x1bfc00000, data 0x43bf60b/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:22.343407+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543285248 unmapped: 96059392 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec5000/0x0/0x1bfc00000, data 0x43bf60b/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:23.344137+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:24.344369+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:25.344527+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:26.344810+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543293440 unmapped: 96051200 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5648916 data_alloc: 234881024 data_used: 24166400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775841c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:27.345206+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.738831520s of 14.080610275s, submitted: 18
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558778312400 session 0x558775cdba40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 96018432 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:28.345540+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543326208 unmapped: 96018432 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775db9a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ec2000/0x0/0x1bfc00000, data 0x43c260b/0x45fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130c00 session 0x5587735ced20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:29.345815+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 95870976 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:30.345950+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 95870976 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:31.346263+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5706948 data_alloc: 234881024 data_used: 31490048
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:32.346479+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:33.346738+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:34.347019+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:35.347227+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e9e000/0x0/0x1bfc00000, data 0x43e660b/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:36.347402+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5707748 data_alloc: 234881024 data_used: 31555584
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:37.347645+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:38.347844+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359608650s of 11.381764412s, submitted: 5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:39.348006+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:40.348233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197e98000/0x0/0x1bfc00000, data 0x43ec60b/0x4626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:41.348538+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 95862784 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5707980 data_alloc: 234881024 data_used: 31555584
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:42.348735+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 91185152 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197acf000/0x0/0x1bfc00000, data 0x47b560b/0x49ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,10])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:43.348951+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546734080 unmapped: 92610560 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:44.349164+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546742272 unmapped: 92602368 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:45.349396+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546750464 unmapped: 92594176 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:46.349522+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5776212 data_alloc: 234881024 data_used: 31690752
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:47.349757+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:48.349932+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197798000/0x0/0x1bfc00000, data 0x4aec60b/0x4d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:49.350096+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:50.350231+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546971648 unmapped: 92372992 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:51.350452+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197798000/0x0/0x1bfc00000, data 0x4aec60b/0x4d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.695519447s of 12.682845116s, submitted: 75
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775048 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:52.350635+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776f000/0x0/0x1bfc00000, data 0x4b1560b/0x4d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:53.350786+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:54.350919+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:55.351050+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776f000/0x0/0x1bfc00000, data 0x4b1560b/0x4d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:56.351454+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 92348416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774748 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:57.351792+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:58.352012+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:54:59.352256+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19776c000/0x0/0x1bfc00000, data 0x4b1860b/0x4d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:00.352418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:01.352625+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547004416 unmapped: 92340224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774280 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:02.352817+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.590047836s of 11.008296967s, submitted: 8
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547012608 unmapped: 92332032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:03.353031+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197767000/0x0/0x1bfc00000, data 0x4b1d60b/0x4d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:04.353284+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:05.353484+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547020800 unmapped: 92323840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197764000/0x0/0x1bfc00000, data 0x4b2060b/0x4d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:06.353630+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197764000/0x0/0x1bfc00000, data 0x4b2060b/0x4d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547028992 unmapped: 92315648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774760 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:07.353820+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547028992 unmapped: 92315648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:08.354010+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:09.354214+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:10.354416+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197761000/0x0/0x1bfc00000, data 0x4b2360b/0x4d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:11.354584+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 92307456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774420 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:12.354824+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267741203s of 10.673305511s, submitted: 6
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:13.355041+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:14.355258+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19775e000/0x0/0x1bfc00000, data 0x4b2660b/0x4d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:15.355436+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:16.355656+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775044 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:17.355866+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:18.356076+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:19.356274+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547045376 unmapped: 92299264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:20.356448+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:21.356646+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775192 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:22.356827+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197758000/0x0/0x1bfc00000, data 0x4b2c60b/0x4d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:23.356988+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:24.357136+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522049904s of 11.567011833s, submitted: 8
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:25.357305+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197755000/0x0/0x1bfc00000, data 0x4b2f60b/0x4d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:26.357555+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197755000/0x0/0x1bfc00000, data 0x4b2f60b/0x4d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775060 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:27.357843+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 92291072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:28.358245+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:29.358418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:30.358534+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:31.358745+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197752000/0x0/0x1bfc00000, data 0x4b3260b/0x4d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775136 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:32.358919+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:33.359154+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 92282880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:34.359392+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:35.359603+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.531800270s of 11.552363396s, submitted: 4
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:36.359832+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19774d000/0x0/0x1bfc00000, data 0x4b3760b/0x4d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5775156 data_alloc: 234881024 data_used: 31694848
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:37.360048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5400 session 0x558773204960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558773205c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 92274688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:38.360230+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19774d000/0x0/0x1bfc00000, data 0x4b3760b/0x4d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650400 session 0x558775db81e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:39.360442+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:40.360618+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:41.360817+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5602266 data_alloc: 234881024 data_used: 24010752
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:42.361001+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547078144 unmapped: 92266496 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:43.361245+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19863f000/0x0/0x1bfc00000, data 0x3c4560b/0x3e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:44.361453+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:45.361628+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:46.361806+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5603238 data_alloc: 234881024 data_used: 24010752
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:47.362031+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 92258304 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:48.362399+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19863f000/0x0/0x1bfc00000, data 0x3c4560b/0x3e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5000 session 0x55877450e5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.228904724s of 12.636835098s, submitted: 17
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776704000 session 0x55877c2c65a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547102720 unmapped: 92241920 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:49.362554+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775f86d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543916032 unmapped: 95428608 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:50.362970+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:51.363593+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:52.364233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:53.364572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:54.365263+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:55.365688+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:56.366078+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:57.366554+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:58.366901+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:55:59.367065+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543924224 unmapped: 95420416 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:00.367351+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:01.367550+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:02.367934+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 95412224 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:03.368228+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:04.368548+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:05.368822+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:06.369077+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543940608 unmapped: 95404032 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:07.369303+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:08.369598+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:09.369844+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:10.370068+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:11.370287+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:12.370621+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:13.370848+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:14.371041+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543948800 unmapped: 95395840 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:15.371395+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:16.371681+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:17.371894+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:18.372120+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:19.372355+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:20.372538+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:21.372697+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:22.372830+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543956992 unmapped: 95387648 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:23.372992+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:24.373151+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:25.373267+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:26.373485+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:27.373710+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:28.373941+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:29.374154+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:30.374361+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543965184 unmapped: 95379456 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:31.374583+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:32.374835+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:33.375006+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543973376 unmapped: 95371264 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:34.375213+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:35.375454+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:36.375666+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:37.375960+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:38.376186+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543981568 unmapped: 95363072 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:39.376381+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:40.376582+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:41.376822+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:42.377021+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5268402 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:43.377194+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:44.377372+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:45.377534+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543989760 unmapped: 95354880 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:46.377760+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543997952 unmapped: 95346688 heap: 639344640 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a786c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a786c00 session 0x558775d08960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130800 session 0x558775f865a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x558775f9e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d9000 session 0x55877353de00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.806312561s of 57.890869141s, submitted: 32
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775d09860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773650000 session 0x55877317e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130800 session 0x558775d5c5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a786c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a786c00 session 0x558774501c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:47.377949+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770db000 session 0x558773616b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369337 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:48.378136+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:49.378290+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:50.378509+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199957000/0x0/0x1bfc00000, data 0x292d60b/0x2b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:51.378721+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:52.378961+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369337 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:53.379167+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877586a400 session 0x5587735ce960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199957000/0x0/0x1bfc00000, data 0x292d60b/0x2b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:54.379386+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776704c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776704c00 session 0x558777957860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:55.379527+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538533888 unmapped: 105013248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x558775d5c000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:56.379658+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c9000 session 0x55877317fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878874779s of 10.056379318s, submitted: 37
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:57.379853+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372351 data_alloc: 218103808 data_used: 10342400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:58.380024+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538517504 unmapped: 105029632 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:56:59.380153+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199956000/0x0/0x1bfc00000, data 0x292d62e/0x2b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:00.380291+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:01.380458+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:02.380614+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 105021440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465151 data_alloc: 234881024 data_used: 23355392
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d6000 session 0x5587735ade00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558775d092c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:03.380740+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533118976 unmapped: 110428160 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x55877450f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:04.380941+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:05.381094+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:06.381296+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:07.381525+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5278201 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:08.381740+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:09.381905+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:10.382081+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:11.382304+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:12.382550+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5278201 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:13.382729+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:14.382987+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533127168 unmapped: 110419968 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877c2c74a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d4800 session 0x5587758df0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c5000 session 0x558778aa3a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:15.383148+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558774438b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.406515121s of 18.591299057s, submitted: 50
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533094400 unmapped: 110452736 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441e400 session 0x558775c62780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558773e430e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a1be000/0x0/0x1bfc00000, data 0x20c561b/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d4800 session 0x558777957a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774354000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774354000 session 0x5587732043c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558775cdb680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:16.383370+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:17.383575+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399349 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:18.383746+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:19.383969+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:20.384169+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199767000/0x0/0x1bfc00000, data 0x2b1c61b/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877b486b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:21.384372+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6ca400 session 0x558775d085a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199767000/0x0/0x1bfc00000, data 0x2b1c61b/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:22.384553+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399349 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6cb800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6cb800 session 0x558775f9f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:23.384706+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533405696 unmapped: 110141440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778313000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558778313000 session 0x558773205860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:24.384895+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 109813760 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:25.385059+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:26.385202+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199741000/0x0/0x1bfc00000, data 0x2b4064e/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:27.385525+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5516089 data_alloc: 234881024 data_used: 25350144
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199741000/0x0/0x1bfc00000, data 0x2b4064e/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:28.385675+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x558773e430e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.425266266s of 13.606207848s, submitted: 36
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x55877450fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:29.386628+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 110125056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d4400 session 0x5587745005a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:30.387257+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:31.387572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:32.388108+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:33.388568+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:34.388776+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:35.388959+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:36.389298+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:37.389683+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:38.389868+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:39.390043+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:40.390338+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:41.390571+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:42.390809+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:43.391000+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:44.391195+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:45.391368+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:46.391579+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:47.391790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:48.391991+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:49.392157+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:50.392408+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:51.392630+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:52.392805+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:53.393003+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:54.393164+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:55.393505+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:56.393691+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:57.393887+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:58.394118+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:57:59.394278+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:00.394447+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:01.394616+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:02.394797+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524869632 unmapped: 118677504 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291610 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:03.394980+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a61b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524877824 unmapped: 118669312 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:04.395180+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777510800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777510800 session 0x558773637c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7a000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7a000 session 0x558775d5d0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877b4861e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 524877824 unmapped: 118669312 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d4400 session 0x558775f863c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.514713287s of 35.804637909s, submitted: 38
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131c00 session 0x558775d090e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:05.395364+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:06.395512+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:07.395718+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5350972 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:08.395886+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1aa400 session 0x558775d5c960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:09.396004+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c4000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c4000 session 0x5587735acf00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:10.396202+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8800 session 0x558775d5de00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775841e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:11.396393+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:12.396561+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 118464512 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5351292 data_alloc: 218103808 data_used: 10371072
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:13.396705+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:14.396876+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:15.397029+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:16.397217+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:17.398104+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5395772 data_alloc: 234881024 data_used: 16703488
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:18.398290+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:19.398552+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:20.398783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:21.398996+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:22.399195+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199edd000/0x0/0x1bfc00000, data 0x23a760b/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5395772 data_alloc: 234881024 data_used: 16703488
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:23.399409+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 525352960 unmapped: 118194176 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:24.399517+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.604757309s of 19.673570633s, submitted: 13
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528121856 unmapped: 115425280 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199a4a000/0x0/0x1bfc00000, data 0x283260b/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:25.440504+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528293888 unmapped: 115253248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:26.440644+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:27.440847+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469034 data_alloc: 234881024 data_used: 17444864
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:28.441003+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:29.441200+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:30.441539+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:31.443964+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:32.444446+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469050 data_alloc: 234881024 data_used: 17444864
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:33.444640+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:34.444798+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:35.445224+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:36.445423+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:37.445709+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470010 data_alloc: 234881024 data_used: 17469440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:38.445961+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:39.446198+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:40.446442+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:41.446676+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x199418000/0x0/0x1bfc00000, data 0x2a5360b/0x2c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528302080 unmapped: 115245056 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x5587732332c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:42.446904+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.278804779s of 18.051580429s, submitted: 51
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x5587735125a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:43.447068+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:44.447233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:45.447431+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:46.447628+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:47.447832+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:48.447978+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:49.448137+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:50.448363+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:51.448512+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:52.448673+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:53.448865+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:54.449003+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:55.449138+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:56.449295+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:57.449527+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:58.449764+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:58:59.449942+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:00.450107+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:01.450306+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:02.450511+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527630336 unmapped: 115916800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:03.450723+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:04.450906+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:05.451069+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:06.451199+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:07.451366+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:08.451545+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:09.451751+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:10.451914+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:11.452127+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:12.452270+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:13.452492+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:14.452702+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:15.452868+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:16.453008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:17.453451+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301098 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:18.453603+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:19.453856+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527638528 unmapped: 115908608 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6cb800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6cb800 session 0x5587735ac960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:20.454059+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558775db94a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x55877317e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775db81e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.154705048s of 38.187271118s, submitted: 12
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527671296 unmapped: 115875840 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:21.454265+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a20b000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8800 session 0x55877c2c65a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779337000 session 0x5587735ade00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527695872 unmapped: 115851264 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558773232960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558779c42000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:22.454486+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775d08960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5368479 data_alloc: 218103808 data_used: 10342400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:23.454684+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:24.454900+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:25.455066+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:26.455278+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e8000/0x0/0x1bfc00000, data 0x248b61b/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:27.455557+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776fd5000 session 0x5587779570e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5368479 data_alloc: 218103808 data_used: 10342400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:28.456010+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9e800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9e800 session 0x558775d5dc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x5587779565a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527720448 unmapped: 115826688 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x5587758df680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:29.456141+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776fd5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 527728640 unmapped: 115818496 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:30.456305+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 528351232 unmapped: 115195904 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:31.456462+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529309696 unmapped: 114237440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:32.456612+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431734 data_alloc: 234881024 data_used: 18862080
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529309696 unmapped: 114237440 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e6000/0x0/0x1bfc00000, data 0x248b64e/0x26c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:33.456780+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:34.456979+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:35.457135+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:36.457732+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:37.458014+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431734 data_alloc: 234881024 data_used: 18862080
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:38.459015+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1999e6000/0x0/0x1bfc00000, data 0x248b64e/0x26c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:39.459390+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 114229248 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:40.459944+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.894926071s of 20.206439972s, submitted: 27
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x55877450e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 529416192 unmapped: 114130944 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x5587740d43c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a6c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a6c00 session 0x558774471a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558773513680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558774470780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:41.460483+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 531234816 unmapped: 112312320 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:42.460656+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5637342 data_alloc: 234881024 data_used: 19996672
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 111411200 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:43.461000+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:44.461411+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198035000/0x0/0x1bfc00000, data 0x3e2d6b0/0x406b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x558773edf860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:45.461792+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x55877b486000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532176896 unmapped: 111370240 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:46.462140+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db80c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877db80c00 session 0x5587758df860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587758df4a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198035000/0x0/0x1bfc00000, data 0x3e2d6b0/0x406b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587731d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 111050752 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:47.462512+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650905 data_alloc: 234881024 data_used: 19963904
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 111050752 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:48.462729+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536240128 unmapped: 107307008 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:49.462954+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:50.463178+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:51.463400+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:52.463997+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801e000/0x0/0x1bfc00000, data 0x3e516c0/0x4090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741305 data_alloc: 234881024 data_used: 32706560
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:53.464246+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:54.464479+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:55.464660+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:56.465115+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801e000/0x0/0x1bfc00000, data 0x3e516c0/0x4090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:57.465511+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741305 data_alloc: 234881024 data_used: 32706560
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:58.465875+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:59:59.466167+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 107036672 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:00.466400+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.909603119s of 19.408382416s, submitted: 140
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 104980480 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:01.466740+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197895000/0x0/0x1bfc00000, data 0x45da6c0/0x4819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538615808 unmapped: 104931328 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:02.466932+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822965 data_alloc: 234881024 data_used: 32972800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:03.467235+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:04.467386+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:05.467629+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:06.467818+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197808000/0x0/0x1bfc00000, data 0x46676c0/0x48a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:07.468092+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x197808000/0x0/0x1bfc00000, data 0x46676c0/0x48a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822965 data_alloc: 234881024 data_used: 32972800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538877952 unmapped: 104669184 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:08.468202+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:09.468294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1977e7000/0x0/0x1bfc00000, data 0x46886c0/0x48c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:10.468457+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:11.468588+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:12.468760+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1977e7000/0x0/0x1bfc00000, data 0x46886c0/0x48c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23b4f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5818249 data_alloc: 234881024 data_used: 32985088
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:13.468945+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.360217094s of 13.718045235s, submitted: 81
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:14.469149+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538894336 unmapped: 104652800 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:15.469446+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587731d7c00 session 0x558774501c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe400 session 0x558775c632c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x558773f2cf00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:16.469592+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19993e000/0x0/0x1bfc00000, data 0x31d664e/0x3413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:17.469749+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569054 data_alloc: 234881024 data_used: 19968000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:18.469949+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534347776 unmapped: 109199360 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:19.470127+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19993e000/0x0/0x1bfc00000, data 0x31d664e/0x3413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:20.470269+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:21.470359+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534355968 unmapped: 109191168 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:22.470471+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776fd5000 session 0x55877317f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x558775d092c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775cdad20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:23.470572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:24.470724+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:25.470882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:26.471065+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:27.471195+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:28.471408+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:29.471607+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:30.471736+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:31.471915+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:32.472079+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:33.472233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:34.472392+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:35.472562+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:36.472662+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:37.472802+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:38.472970+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:39.473166+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:40.473410+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:41.473532+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:42.473765+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:43.473948+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:44.474129+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:45.474354+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:46.474499+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:47.474774+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:48.474900+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335035 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534388736 unmapped: 109158400 heap: 643547136 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558775f9f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334400 session 0x55877317ed20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75d800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a75d800 session 0x558775d5c5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587740d50e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.351760864s of 34.652317047s, submitted: 103
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770da400 session 0x558778aa2960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511c00 session 0x55877450e960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334400 session 0x55877450f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558773653800 session 0x55877353dc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775c63a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:49.475066+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:50.475279+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:51.475518+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:52.475719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x5587779570e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a971000/0x0/0x1bfc00000, data 0x254261b/0x277d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:53.475870+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5400387 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877a75d000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877a75d000 session 0x558773637c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 112828416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a7c00 session 0x558775f863c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:54.476086+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a5c00 session 0x558775f9f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534552576 unmapped: 112672768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:55.476267+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534552576 unmapped: 112672768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:56.476451+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:57.476719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:58.476917+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470660 data_alloc: 218103808 data_used: 19628032
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:00:59.477048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:00.477214+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:01.477392+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:02.477596+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:03.477849+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470660 data_alloc: 218103808 data_used: 19628032
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:04.478013+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:05.478198+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a94d000/0x0/0x1bfc00000, data 0x256661b/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:06.478353+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:07.478546+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.669998169s of 18.780950546s, submitted: 15
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 112648192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:08.478715+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a280000/0x0/0x1bfc00000, data 0x2c3361b/0x2e6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5530192 data_alloc: 218103808 data_used: 19800064
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535822336 unmapped: 111403008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:09.479015+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:10.479210+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a24f000/0x0/0x1bfc00000, data 0x2c6461b/0x2e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:11.479410+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:12.479620+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:13.479790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543540 data_alloc: 218103808 data_used: 19800064
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:14.479951+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:15.480282+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:16.480374+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a24c000/0x0/0x1bfc00000, data 0x2c6761b/0x2ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:17.480608+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:18.480754+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540404 data_alloc: 218103808 data_used: 19800064
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:19.480914+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535830528 unmapped: 111394816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x5587732332c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.474609375s of 12.674955368s, submitted: 53
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558773e42000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:20.481095+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535838720 unmapped: 111386624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d5c00 session 0x5587758412c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:21.481588+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:22.481758+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:23.481944+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:24.482096+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:25.482379+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:26.482590+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:27.482747+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:28.482885+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:29.483001+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:30.483138+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:31.483363+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:32.483632+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:33.483762+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 115097600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:34.483888+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:35.484094+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:36.484251+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:37.484484+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:38.484672+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:39.484820+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:40.485011+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:41.485218+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:42.485364+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:43.485461+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:44.485611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:45.485764+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:46.485938+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:47.486204+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:48.486380+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5348438 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 302K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 28K syncs, 2.68 writes per sync, written: 0.30 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3139 writes, 12K keys, 3139 commit groups, 1.0 writes per commit group, ingest: 12.34 MB, 0.02 MB/s
                                           Interval WAL: 3139 writes, 1258 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:49.486532+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:50.486690+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x558775cdb0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d174000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878d174000 session 0x5587740770e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775d5d860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d5c00 session 0x558773f2c3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 532135936 unmapped: 115089408 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.497575760s of 30.588603973s, submitted: 30
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775d5c780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x55877317f680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734da800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587734da800 session 0x558775d5c960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558775db8d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587734da800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587734da800 session 0x558775c3fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:51.486872+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:52.487006+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:53.487188+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462668 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:54.487398+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:55.487563+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:56.487757+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:57.487995+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533413888 unmapped: 113811456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441f400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441f400 session 0x55877450e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:58.488176+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462668 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775d5da40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:01:59.488433+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779336400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779336400 session 0x558779dfde00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772b9fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558772b9fc00 session 0x558773e430e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:00.488673+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533422080 unmapped: 113803264 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:01.488840+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533430272 unmapped: 113795072 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:02.488985+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:03.489160+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x558775f9ed20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:04.489348+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:05.489612+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:06.489811+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:07.490043+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:08.490265+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:09.490475+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:10.490653+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:11.490788+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:12.490952+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:13.491145+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:14.491301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750c5c00 session 0x558773e42000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:15.491478+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:16.491634+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:17.491831+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:18.492034+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5554668 data_alloc: 234881024 data_used: 23326720
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:19.492236+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558777511800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:20.492552+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a46a000/0x0/0x1bfc00000, data 0x2a4961b/0x2c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:21.492809+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535633920 unmapped: 111591424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558777511800 session 0x55877317fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:22.493000+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.803211212s of 31.928565979s, submitted: 23
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533561344 unmapped: 113664000 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:23.493162+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533561344 unmapped: 113664000 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770db800 session 0x558778aa3c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:24.493368+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533626880 unmapped: 113598464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:25.493540+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533692416 unmapped: 113532928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:26.493783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:27.494063+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:28.494345+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:29.494557+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:30.494804+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:31.495016+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:32.495234+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:33.495562+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:34.495708+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:35.495888+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:36.496062+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:37.496247+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:38.496398+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:39.496573+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:40.496743+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:41.496932+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:42.497092+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:43.497233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:44.497389+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 113524736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:45.497562+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:46.497734+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:47.497958+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:48.498158+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:49.498333+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:50.498494+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:51.498683+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:52.498874+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:53.499062+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533708800 unmapped: 113516544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:54.499230+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 113508352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:55.499394+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 113508352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:56.499530+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:57.499720+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:58.499903+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:02:59.500091+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:00.500234+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:01.500380+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:02.500542+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:03.500685+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 113491968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:04.500831+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:05.501000+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:06.501227+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:07.501499+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:08.501704+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358224 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:09.501909+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:10.502092+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.346660614s of 48.235324860s, submitted: 257
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 533741568 unmapped: 113483776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c9000 session 0x558775c62d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779337c00 session 0x5587736361e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8400 session 0x558778aa2960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:11.502359+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131800 session 0x5587744394a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775131400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775131400 session 0x558778aa34a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:12.502537+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:13.502762+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406356 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:14.502986+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:15.503150+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:16.503390+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d63d000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878d63d000 session 0x5587779572c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534126592 unmapped: 113098752 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:17.503630+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad46000/0x0/0x1bfc00000, data 0x216d66d/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x558773ede780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534134784 unmapped: 113090560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:18.503777+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406356 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534134784 unmapped: 113090560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587768f4000 session 0x55877b486780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:19.503988+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x558775db9860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:20.504192+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:21.504416+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779335000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.984196663s of 10.797243118s, submitted: 52
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 112975872 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:22.504856+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 112967680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:23.505056+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:24.505827+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:25.506951+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:26.507127+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:27.507782+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:28.508179+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:29.508413+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:30.508602+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19ad1b000/0x0/0x1bfc00000, data 0x219767d/0x23d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:31.509038+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:32.509595+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:33.509816+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447805 data_alloc: 218103808 data_used: 15228928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 112959488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:34.510006+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.596837044s of 12.842131615s, submitted: 2
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 535461888 unmapped: 111763456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a7e4000/0x0/0x1bfc00000, data 0x26ce67d/0x290a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:35.510236+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 536133632 unmapped: 111091712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:36.510448+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 539967488 unmapped: 107257856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:37.510738+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 109469696 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a1e5000/0x0/0x1bfc00000, data 0x2ccd67d/0x2f09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:38.510967+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543217 data_alloc: 218103808 data_used: 16642048
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 109469696 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:39.511108+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537821184 unmapped: 109404160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:40.511238+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537821184 unmapped: 109404160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:41.511403+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a13b000/0x0/0x1bfc00000, data 0x2d7767d/0x2fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:42.511578+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:43.511749+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5560905 data_alloc: 218103808 data_used: 16719872
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:44.512048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.149740696s of 10.578299522s, submitted: 130
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537829376 unmapped: 109395968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a13b000/0x0/0x1bfc00000, data 0x2d7767d/0x2fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:45.512239+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:46.512444+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:47.512720+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:48.512937+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554473 data_alloc: 218103808 data_used: 16723968
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:49.513176+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:50.513407+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 109281280 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:51.513610+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:52.513849+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a116000/0x0/0x1bfc00000, data 0x2d9c67d/0x2fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:53.514027+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554473 data_alloc: 218103808 data_used: 16723968
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:54.514157+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:55.514418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:56.514794+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.785726547s of 12.032221794s, submitted: 5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:57.515020+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:58.515219+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554341 data_alloc: 218103808 data_used: 16723968
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:03:59.515747+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:00.515882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:01.516041+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:02.516179+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779335000 session 0x558773204d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x558775cda780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:03.516346+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554265 data_alloc: 218103808 data_used: 16736256
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:04.516524+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:05.516732+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:06.516951+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537952256 unmapped: 109273088 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.316456318s of 10.089038849s, submitted: 18
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:07.517145+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:08.517901+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5554265 data_alloc: 218103808 data_used: 16736256
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537960448 unmapped: 109264896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:09.518111+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a113000/0x0/0x1bfc00000, data 0x2d9f67d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:10.518269+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:11.518441+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537968640 unmapped: 109256704 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:12.518551+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b220000/0x0/0x1bfc00000, data 0x1c9261b/0x1ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:13.518786+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373704 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:14.519019+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877450f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:15.519225+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:16.519449+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:17.519687+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:18.519861+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:19.520134+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:20.520359+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:21.520558+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537976832 unmapped: 109248512 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:22.520743+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:23.520964+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:24.521167+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:25.521383+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:26.521581+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:27.521767+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:28.521930+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537985024 unmapped: 109240320 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:29.522084+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:30.522261+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:31.522439+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:32.522600+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:33.522751+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:34.522879+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:35.523001+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:36.523147+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 537993216 unmapped: 109232128 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:37.523447+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:38.523690+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:39.523862+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538001408 unmapped: 109223936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:40.524073+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:41.524220+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:42.524417+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:43.524567+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:44.524751+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:45.524959+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:46.525285+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:47.525552+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538009600 unmapped: 109215744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:48.525730+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:49.525867+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:50.526037+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:51.526187+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:52.526434+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:53.526589+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:54.526741+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:55.526946+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538017792 unmapped: 109207552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:56.527149+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:57.527455+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:58.527736+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:04:59.529007+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:00.529205+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:01.529906+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:02.530091+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538025984 unmapped: 109199360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:03.530667+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:04.530897+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:05.531928+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:06.532476+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:07.532972+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:08.533143+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:09.533351+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373528 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:10.534445+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 109182976 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x558775d08f00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775cda3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614b800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877614b800 session 0x5587744383c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:11.534888+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x5587758de3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.188396454s of 64.272262573s, submitted: 40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538058752 unmapped: 109166592 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775f86d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x5587744714a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x55877b486960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754ffc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754ffc00 session 0x558778aa30e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877450f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:12.535184+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:13.535385+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:14.535507+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5448467 data_alloc: 218103808 data_used: 10338304
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538206208 unmapped: 109019136 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775db9860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558779334000 session 0x55877b486780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44400 session 0x558773ede780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7b800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775b7b800 session 0x558778aa34a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19a9a8000/0x0/0x1bfc00000, data 0x250b61b/0x2746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,1,2])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x5587744394a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558773e42000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x558773e430e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558779dfde00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e46400 session 0x55877450e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:15.538056+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558775db8d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538238976 unmapped: 108986368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x558775d5c960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:16.538285+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538238976 unmapped: 108986368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x55877317f680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d9c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d9c00 session 0x558775d5c780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775779400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:17.538584+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541425664 unmapped: 105799680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x5587736165a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775779400 session 0x558775f9f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x558775db8960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:18.538720+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x558777957a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 538312704 unmapped: 108912640 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775776c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775776c00 session 0x55877353cf00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776992400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776992400 session 0x558773513680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:19.538843+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5735766 data_alloc: 218103808 data_used: 16179200
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541073408 unmapped: 106151936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0f000/0x0/0x1bfc00000, data 0x44a06ff/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441ec00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0f000/0x0/0x1bfc00000, data 0x44a06ff/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:20.539065+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541237248 unmapped: 105988096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6ca800 session 0x55877b486b40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:21.539391+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 541237248 unmapped: 105988096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e44c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e44c00 session 0x558774470780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x558775d092c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.374307632s of 10.853147507s, submitted: 104
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:22.539713+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545087488 unmapped: 102137856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x55877317ed20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:23.539928+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550985728 unmapped: 96239616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:24.540220+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5876919 data_alloc: 234881024 data_used: 33325056
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 550985728 unmapped: 96239616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586b000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:25.540385+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556343296 unmapped: 90882048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:26.540597+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 557203456 unmapped: 90021888 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877586b000 session 0x558774471e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877e6c8c00 session 0x558775f9e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775890000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:27.540845+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558260224 unmapped: 88965120 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x198a0d000/0x0/0x1bfc00000, data 0x44a0731/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:28.541008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775890000 session 0x55877450f2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558268416 unmapped: 88956928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:29.541257+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5746145 data_alloc: 234881024 data_used: 29536256
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 558268416 unmapped: 88956928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:30.541560+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 561299456 unmapped: 85925888 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:31.541698+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 87154688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19922c000/0x0/0x1bfc00000, data 0x3c7c69d/0x3eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a400 session 0x5587758de960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877441ec00 session 0x558773ede1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877435a800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:32.541876+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.121753693s of 10.037456512s, submitted: 94
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55877435a800 session 0x558775f86f00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:33.542099+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1995f0000/0x0/0x1bfc00000, data 0x2b2861b/0x2d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:34.542352+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5595568 data_alloc: 218103808 data_used: 19271680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x1995f0000/0x0/0x1bfc00000, data 0x2b2861b/0x2d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7400 session 0x5587758412c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7000 session 0x558773204960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:35.542586+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a8000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555335680 unmapped: 91889664 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x55878c1a8000 session 0x5587758def00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:36.542793+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:37.543081+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:38.543291+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:39.543520+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:40.543713+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:41.543936+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:42.544107+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:43.544354+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:44.544569+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:45.544788+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:46.544991+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:47.545194+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:48.545362+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:49.545554+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:50.545767+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:51.545975+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:52.546151+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:53.546432+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:54.546631+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:55.546819+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:56.547010+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:57.547239+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:58.547422+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:05:59.547611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:00.547737+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:01.547944+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:02.548140+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:03.548350+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:04.548509+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:05.548708+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:06.548920+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:07.549170+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:08.549421+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:09.549622+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:10.549872+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:11.550029+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:12.550203+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:13.550419+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:14.550675+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:15.550934+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:16.551069+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:17.551304+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:18.551498+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:19.551659+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:20.551820+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:21.551990+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:22.552135+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:23.552345+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:24.552493+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:25.552747+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:26.552920+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:27.553098+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:28.553302+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:29.553554+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:30.553688+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:31.553811+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:32.554013+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:33.554147+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:34.554294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:35.554463+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:36.554628+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:37.554804+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:38.555011+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:39.555161+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:40.555400+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 99164160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:41.555580+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:42.555754+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:43.555927+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:44.556094+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:45.556399+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:46.556549+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:47.556756+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:48.556984+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:49.557171+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548069376 unmapped: 99155968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:50.557400+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:51.557549+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:52.557723+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:53.557908+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558775130000 session 0x558773f2cb40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:54.558163+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:55.558384+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:56.558598+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 99147776 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:57.559059+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:58.559292+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:06:59.559547+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:00.559680+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:01.559929+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:02.560196+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:03.560413+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:04.560561+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:05.560727+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:06.560913+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:07.561367+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 99139584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:08.561685+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:09.561832+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5407726 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:10.562511+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24c000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:11.563290+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:12.563650+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:13.564257+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 99131392 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:14.564452+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fc800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 101.868850708s of 102.070037842s, submitted: 66
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409627 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fc800 session 0x5587736cb680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:15.564790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:16.565056+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:17.565256+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:18.565575+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:19.565740+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409555 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:20.565918+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:21.566162+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6862e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:22.566443+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:23.566656+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7000 session 0x55877b4872c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775841a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:24.566928+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:25.567080+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:26.567295+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 99123200 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:27.567528+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:28.567759+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:29.567964+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:30.568128+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:31.568349+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587750d7c00 session 0x55877450e1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fe800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587754fe800 session 0x558775d5dc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:32.568486+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:33.568685+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 99115008 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:34.568895+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:35.569035+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:36.569176+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:37.569373+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:38.569567+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:39.569768+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:40.570040+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:41.570287+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 99106816 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:42.570450+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:43.570652+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:44.570848+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:45.571091+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:46.571380+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:47.571619+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:48.571957+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 99098624 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:49.572235+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:50.572563+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:51.572799+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 99090432 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:52.572937+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548143104 unmapped: 99082240 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:53.573140+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:54.573377+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408811 data_alloc: 218103808 data_used: 10076160
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:55.573595+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:56.573804+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:57.574015+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c4800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 99074048 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:58.574253+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587757c4800 session 0x558775f9ed20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 99065856 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:07:59.574418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x558775c3e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5412011 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:00.574572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:01.574750+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:02.574938+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:03.575262+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:04.575545+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5412011 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:05.575789+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 98869248 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:06.576008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:07.576303+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:08.576593+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24b000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d8400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.116096497s of 54.726608276s, submitted: 15
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:09.576800+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:10.576976+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587770d8400 session 0x558775f863c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:11.577167+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:12.578569+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548372480 unmapped: 98852864 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:13.578994+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-11-29T09:08:14.579601+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _finish_auth 0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:14.581048+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:15.580109+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548388864 unmapped: 98836480 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:16.581067+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548397056 unmapped: 98828288 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:17.581423+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:18.581777+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:19.582093+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:20.582999+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:21.583461+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 98820096 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:22.583700+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:23.583896+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:24.584126+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:25.584272+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:26.584492+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587768f4c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:27.584758+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:28.585037+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:29.585301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413839 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 98803712 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:30.585487+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6861b/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:31.585642+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:32.585956+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x5587768f4c00 session 0x558775f9f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776993400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:33.586118+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.762420654s of 24.256208420s, submitted: 1
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558776993400 session 0x5587745005a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:34.586279+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19b24a000/0x0/0x1bfc00000, data 0x1c6860b/0x1ea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5487063 data_alloc: 218103808 data_used: 11325440
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:35.586516+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 556703744 unmapped: 90521600 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:36.586719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 555196416 unmapped: 92028928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558785e47400 session 0x5587735ced20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 ms_handle_reset con 0x558774516400 session 0x5587740d43c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:37.586931+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 98320384 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:38.587209+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 98320384 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:39.587467+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5475495 data_alloc: 218103808 data_used: 11329536
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:40.587744+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:41.587973+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:42.588242+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:43.588418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:44.588607+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5475495 data_alloc: 218103808 data_used: 11329536
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:45.588794+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 heartbeat osd_stat(store_statfs(0x19aa58000/0x0/0x1bfc00000, data 0x245b61b/0x2696000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:46.588932+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 98312192 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.426338196s of 13.454877853s, submitted: 12
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d7f5400 session 0x558775f9e960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:47.589112+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:48.589383+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:49.589628+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:50.589783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:51.589958+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:52.590122+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:53.590302+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548929536 unmapped: 98295808 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:54.590532+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548937728 unmapped: 98287616 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:55.590686+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:56.590883+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:57.591073+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:58.591273+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:08:59.591459+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:00.591676+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:01.591871+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548945920 unmapped: 98279424 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:02.592077+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:03.592456+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:04.592646+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:05.592871+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:06.593076+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548954112 unmapped: 98271232 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:07.593998+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:08.594207+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:09.594403+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548962304 unmapped: 98263040 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:10.594607+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481485 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548970496 unmapped: 98254848 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.371900558s of 24.390153885s, submitted: 2
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a643000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:11.594768+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558773699c00 session 0x558775f9fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c8c00 session 0x558778aa32c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:12.594960+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:13.595181+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:14.595413+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:15.595575+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548978688 unmapped: 98246656 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:16.595776+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:17.596166+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:18.596378+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:19.597768+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548986880 unmapped: 98238464 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:20.598025+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548995072 unmapped: 98230272 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:21.598225+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:22.598604+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:23.598958+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:24.600102+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:25.600393+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5481903 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:26.600783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:27.601686+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:28.601847+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c9000 session 0x5587758405a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558773699c00 session 0x558775841680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:29.602008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549003264 unmapped: 98222080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558774516400 session 0x558775db8780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.156299591s of 19.043857574s, submitted: 5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d7f5400 session 0x55877b486000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:30.602176+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5488154 data_alloc: 218103808 data_used: 11337728
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549052416 unmapped: 98172928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778e68400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:31.602379+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549052416 unmapped: 98172928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:32.602565+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:33.603028+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:34.603499+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:35.603944+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547138 data_alloc: 234881024 data_used: 18997248
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877e6c8c00 session 0x558775d085a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558778e68400 session 0x558775d5d860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:36.604251+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549060608 unmapped: 98164736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774517400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:37.604462+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549068800 unmapped: 98156544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:38.604869+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549068800 unmapped: 98156544 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:39.605039+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a61f000/0x0/0x1bfc00000, data 0x24812e6/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:40.605337+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.602934361s of 10.160605431s, submitted: 26
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543817 data_alloc: 234881024 data_used: 18993152
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x558774517400 session 0x558779c42f00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:41.605652+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:42.605826+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:43.606081+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:44.606280+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:45.606418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543225 data_alloc: 234881024 data_used: 18993152
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:46.606579+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:47.606742+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:48.606981+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:49.607205+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x19a644000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:50.607513+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543357 data_alloc: 234881024 data_used: 18993152
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.982504845s of 10.244502068s, submitted: 4
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549076992 unmapped: 98148352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55877d217800 session 0x558773205860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:51.607764+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:52.607928+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x5587770db400 session 0x558775d5c3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:53.608164+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 heartbeat osd_stat(store_statfs(0x1994a4000/0x0/0x1bfc00000, data 0x245d2d6/0x269a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:54.608431+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549085184 unmapped: 98140160 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 ms_handle_reset con 0x55878c1aa000 session 0x558775d08d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614a800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:55.608589+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5541782 data_alloc: 234881024 data_used: 18989056
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 98131968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55877614a800 session 0x558775d09c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:56.608765+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 98131968 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d217800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55877d217800 session 0x558777957c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1aa000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:57.608983+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 heartbeat osd_stat(store_statfs(0x1994a1000/0x0/0x1bfc00000, data 0x245ef21/0x269c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 ms_handle_reset con 0x55878c1aa000 session 0x558778aa3680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:58.609291+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 heartbeat osd_stat(store_statfs(0x1994c1000/0x0/0x1bfc00000, data 0x1c6bf11/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:09:59.609669+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:00.609856+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428284 data_alloc: 218103808 data_used: 10838016
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:01.610078+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:02.610253+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.973116875s of 11.827588081s, submitted: 32
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775b7a800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558775b7a800 session 0x55877450f860
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c96000/0x0/0x1bfc00000, data 0x1c6bf11/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750d7000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587750d7000 session 0x5587732330e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:03.610393+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:04.610560+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558774516c00 session 0x558775cdb0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:05.610721+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5432458 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:06.610925+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:07.611286+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x1c6da50/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x1c6da50/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558772a0fc00 session 0x5587740d54a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fc000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:08.611448+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587754fc000 session 0x558778aa3c20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540262400 unmapped: 106962944 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558775130000 session 0x558775d5c3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877586ac00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877586ac00 session 0x558775db8780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:09.611593+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:10.611719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:11.611896+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:12.612191+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:13.612399+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:14.612551+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:15.612716+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:16.612847+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:17.613005+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:18.613174+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:19.613305+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:20.613489+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:21.613607+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:22.613720+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:23.613878+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:24.614172+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:25.614486+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:26.614611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:27.614762+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:28.614911+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:29.615044+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:30.615223+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:31.615386+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:32.615598+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540278784 unmapped: 106946560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:33.615742+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:34.615899+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:35.616056+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877441e400 session 0x558775f9fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5486495 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:36.616227+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:37.616457+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587750c5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x5587750c5000 session 0x5587740d43c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:38.616611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558774516400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x558774516400 session 0x5587735ced20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d216800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.154342651s of 36.324752808s, submitted: 40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199639000/0x0/0x1bfc00000, data 0x22c6ab2/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540286976 unmapped: 106938368 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 ms_handle_reset con 0x55877d216800 session 0x5587745005a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:39.616796+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540295168 unmapped: 106930176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:40.616940+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773651400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5488634 data_alloc: 218103808 data_used: 10846208
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540295168 unmapped: 106930176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:41.617064+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:42.617219+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:43.617415+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:44.617560+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:45.617686+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5535834 data_alloc: 234881024 data_used: 17489920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:46.617828+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:47.618155+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:48.618385+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:49.618630+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x199638000/0x0/0x1bfc00000, data 0x22c6ac2/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:50.618845+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5535834 data_alloc: 234881024 data_used: 17489920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:51.619077+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 540311552 unmapped: 106913792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:52.619305+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211997032s of 13.701082230s, submitted: 8
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545931264 unmapped: 101294080 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:53.619491+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545980416 unmapped: 101244928 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,7])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:54.619614+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c2c000/0x0/0x1bfc00000, data 0x2cd2ac2/0x2f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:55.619763+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5626294 data_alloc: 234881024 data_used: 18419712
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0d000/0x0/0x1bfc00000, data 0x2cf1ac2/0x2f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:56.619901+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:57.620105+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:58.620279+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:10:59.620406+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545734656 unmapped: 101490688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:00.620521+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5625966 data_alloc: 234881024 data_used: 18423808
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545734656 unmapped: 101490688 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:01.620654+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0a000/0x0/0x1bfc00000, data 0x2cf4ac2/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:02.620783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:03.620926+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545742848 unmapped: 101482496 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:04.621151+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:05.621354+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 heartbeat osd_stat(store_statfs(0x198c0a000/0x0/0x1bfc00000, data 0x2cf4ac2/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5626286 data_alloc: 234881024 data_used: 18432000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:06.621544+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545751040 unmapped: 101474304 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.477091789s of 14.712428093s, submitted: 85
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:07.621751+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877d7f5400 session 0x558779dfde00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877e6c8c00 session 0x5587758def00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:08.621952+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:09.622134+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:10.622296+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5633786 data_alloc: 234881024 data_used: 18436096
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:11.622503+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:12.622697+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:13.622853+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 101457920 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:14.623008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:15.623187+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5633786 data_alloc: 234881024 data_used: 18436096
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:16.623359+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:17.623575+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 101449728 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:18.623727+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f5000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773653c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:19.623928+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.506603241s of 12.525759697s, submitted: 5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:20.624108+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634112 data_alloc: 234881024 data_used: 18440192
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 101441536 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:21.624301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545792000 unmapped: 101433344 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558773653c00 session 0x558775f86f00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877d7f5000 session 0x558773e430e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:22.625430+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545792000 unmapped: 101433344 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:23.625590+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c05000/0x0/0x1bfc00000, data 0x2cf68df/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545800192 unmapped: 101425152 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:24.625790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:25.625949+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652598 data_alloc: 234881024 data_used: 18436096
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:26.626174+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545808384 unmapped: 101416960 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:27.626374+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:28.626544+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:29.626719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:30.626874+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652758 data_alloc: 234881024 data_used: 18440192
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:31.627480+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:32.627693+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:33.627863+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:34.628020+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545816576 unmapped: 101408768 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558772a0e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.607929230s of 15.318544388s, submitted: 9
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:35.628237+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545824768 unmapped: 101400576 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558772a0e000 session 0x558778aa34a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652758 data_alloc: 234881024 data_used: 18440192
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:36.628451+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779337800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545824768 unmapped: 101400576 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:37.629032+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:38.629461+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545759232 unmapped: 101466112 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:39.629680+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545701888 unmapped: 101523456 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:40.629906+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:41.630126+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:42.630347+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:43.630464+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:44.630646+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 101556224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:45.630803+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:46.631036+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:47.631301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:48.631616+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 78K writes, 308K keys, 78K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s
                                           Cumulative WAL: 78K writes, 29K syncs, 2.67 writes per sync, written: 0.31 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1807 writes, 6006 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 5.18 MB, 0.01 MB/s
                                           Interval WAL: 1807 writes, 769 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:49.631767+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:50.631922+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5652970 data_alloc: 234881024 data_used: 18472960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.178545952s of 16.156717300s, submitted: 3
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:51.632157+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 101548032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:52.632432+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:53.632629+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:54.632836+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:55.633045+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5694251 data_alloc: 234881024 data_used: 19750912
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:56.633245+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:57.633534+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:58.633693+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:11:59.633894+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:00.634052+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:01.634180+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546521088 unmapped: 100704256 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:02.634427+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:03.634600+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:04.634809+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546529280 unmapped: 100696064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: mgrc ms_handle_reset ms_handle_reset con 0x558773651c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 09:23:33 compute-2 ceph-osd[79833]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: get_auth_request con 0x55877441e400 auth_method 0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: mgrc handle_mgr_configure stats_period=5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:05.635126+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:06.635352+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:07.635522+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:08.635742+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:09.635976+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:10.636114+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:11.636289+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:12.636491+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:13.636781+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:14.636984+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:15.637192+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5695371 data_alloc: 234881024 data_used: 20152320
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:16.637378+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:17.637572+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:18.639795+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.341226578s of 27.352630615s, submitted: 3
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546611200 unmapped: 100614144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x558779337800 session 0x55877b486780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:19.639957+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441e000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:20.640119+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877441e000 session 0x55877353cf00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5686995 data_alloc: 234881024 data_used: 20164608
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:21.640355+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:22.640565+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543367168 unmapped: 103858176 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:23.640818+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 103841792 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:24.640999+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 103784448 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:25.641408+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 103784448 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5687083 data_alloc: 234881024 data_used: 20176896
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:26.641618+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x1988f4000/0x0/0x1bfc00000, data 0x32318df/0x324a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d6800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543457280 unmapped: 103768064 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x5587770d6800 session 0x558773204960
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877610b000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877610b000 session 0x5587735125a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:27.641913+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6ca800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 ms_handle_reset con 0x55877e6ca800 session 0x558775d09a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543547392 unmapped: 103677952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 heartbeat osd_stat(store_statfs(0x198c04000/0x0/0x1bfc00000, data 0x2f218df/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:28.642134+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1.947998643s of 10.021273613s, submitted: 337
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773650400 session 0x55877353dc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198c05000/0x0/0x1bfc00000, data 0x2f2187d/0x2f39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:29.642307+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773651400 session 0x558775d5dc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x5587770d4800 session 0x558775c3e780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773650400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:30.642462+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773650400 session 0x5587735cfa40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5655846 data_alloc: 234881024 data_used: 20176896
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:31.642592+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:32.642800+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773651400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x558773651400 session 0x558775cdbe00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877610b000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:33.642956+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198c03000/0x0/0x1bfc00000, data 0x2cf851a/0x2f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:34.643133+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x198e84000/0x0/0x1bfc00000, data 0x1c7151a/0x1eb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:35.643389+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 ms_handle_reset con 0x55877610b000 session 0x55877b487680
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468058 data_alloc: 218103808 data_used: 11431936
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 heartbeat osd_stat(store_statfs(0x199a60000/0x0/0x1bfc00000, data 0x1c7151a/0x1eb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:36.643642+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543580160 unmapped: 103645184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:37.643943+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 434 heartbeat osd_stat(store_statfs(0x199a60000/0x0/0x1bfc00000, data 0x1c714b8/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:38.644099+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877db82800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.245321274s of 10.457572937s, submitted: 58
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: get_auth_request con 0x558772983c00 auth_method 0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 435 ms_handle_reset con 0x55877db82800 session 0x558773229e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:39.644287+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:40.644514+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 435 heartbeat osd_stat(store_statfs(0x199c85000/0x0/0x1bfc00000, data 0x1c74c94/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462573 data_alloc: 218103808 data_used: 10899456
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:41.644777+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:42.645009+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:43.645190+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:44.645399+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:45.645545+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 435 heartbeat osd_stat(store_statfs(0x199c85000/0x0/0x1bfc00000, data 0x1c74c94/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5462573 data_alloc: 218103808 data_used: 10899456
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:46.645789+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 103686144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:47.645960+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:48.646201+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:49.646445+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:50.646658+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464875 data_alloc: 218103808 data_used: 10899456
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:51.646822+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:52.647063+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877614a800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877614a800 session 0x55877317eb40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587757c5400 session 0x558775d08000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:53.647479+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877d7f4400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:54.647660+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.175668716s of 15.365993500s, submitted: 22
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877d7f4400 session 0x55877c2c65a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 103743488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:55.647882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5466349 data_alloc: 218103808 data_used: 10899456
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:56.648108+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:57.648432+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d4800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 103735296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587770d4800 session 0x5587736cba40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558778312c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:58.648671+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c84000/0x0/0x1bfc00000, data 0x1c767d3/0x1eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 103727104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558778312c00 session 0x55877c2c7a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587757c5400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:12:59.648842+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x199c83000/0x0/0x1bfc00000, data 0x1c767fc/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [0,0,0,1,3])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548593664 unmapped: 98631680 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:00.649021+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587757c5400 session 0x558775c63e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544653312 unmapped: 102572032 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877441fc00 session 0x558777956780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:01.649520+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:02.649660+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:03.649836+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:04.650016+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:05.650191+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:06.650397+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:07.650644+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:08.650815+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:09.651028+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544546816 unmapped: 102678528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fe000/0x0/0x1bfc00000, data 0x25fb835/0x2840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:10.651223+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543109 data_alloc: 218103808 data_used: 10907648
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:11.651413+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:12.651581+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:13.651790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 102670336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775777c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.897251129s of 19.785919189s, submitted: 52
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558775777c00 session 0x558777957e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:14.652259+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544563200 unmapped: 102662144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770db800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587754fd400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:15.652438+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 544604160 unmapped: 102621184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573111 data_alloc: 218103808 data_used: 14704640
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:16.652589+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:17.652797+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:18.652996+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:19.653233+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:20.653410+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608631 data_alloc: 234881024 data_used: 18300928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:21.653569+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:22.653729+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:23.653903+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:24.654054+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:25.654250+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608631 data_alloc: 234881024 data_used: 18300928
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:26.702116+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x1992fd000/0x0/0x1bfc00000, data 0x25fb858/0x2841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 545570816 unmapped: 101654528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.996846199s of 13.037012100s, submitted: 11
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:27.702419+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 100532224 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:28.702579+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:29.702800+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:30.702935+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653735 data_alloc: 234881024 data_used: 18366464
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:31.703122+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:32.703260+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:33.703401+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 546037760 unmapped: 101187584 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:34.703563+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x198e26000/0x0/0x1bfc00000, data 0x2ac9858/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x240bf9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548036608 unmapped: 99188736 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587770db800 session 0x5587758df2c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x5587754fd400 session 0x558775c3e780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:35.703722+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55877441fc00 session 0x558775f9fe00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:36.703891+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:37.704143+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:38.704379+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:39.704583+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:40.704732+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:41.704865+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:42.705027+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:43.705222+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:44.705341+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:45.705537+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650035 data_alloc: 234881024 data_used: 18350080
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:46.705705+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:47.705924+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:48.706116+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558775130c00 session 0x558779dfc1e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c8f000/0x0/0x1bfc00000, data 0x2ac9835/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779335000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558779335000 session 0x558779dfcb40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:49.706410+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 99172352 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558776994000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x558776994000 session 0x558779dfdc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878d174000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.644924164s of 22.960281372s, submitted: 104
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 ms_handle_reset con 0x55878d174000 session 0x558779dfc5a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:50.706580+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877441fc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558775130c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 98861056 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:51.706708+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656519 data_alloc: 234881024 data_used: 18468864
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:52.706883+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:53.707034+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:54.707262+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:55.707484+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:56.707705+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651079 data_alloc: 234881024 data_used: 18993152
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:57.707924+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:58.708074+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:13:59.708288+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:00.708475+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:01.708666+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651079 data_alloc: 234881024 data_used: 18993152
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:02.708803+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.647368431s of 12.696481705s, submitted: 13
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:03.708960+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:04.709095+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:05.709219+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:06.709340+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5663855 data_alloc: 234881024 data_used: 19542016
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:07.709558+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:08.709700+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:09.709947+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:10.710138+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:11.710341+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661727 data_alloc: 234881024 data_used: 19537920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:12.710499+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:13.710711+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:14.710909+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548200448 unmapped: 99024896 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.475411415s of 12.517531395s, submitted: 22
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:15.711066+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:16.711225+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661375 data_alloc: 234881024 data_used: 19537920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:17.711392+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:18.711551+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:19.711726+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:20.711900+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:21.712063+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661199 data_alloc: 234881024 data_used: 19537920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:22.712225+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:23.712384+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:24.712533+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:25.712744+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:26.712922+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5661199 data_alloc: 234881024 data_used: 19537920
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:27.713158+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:28.713297+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:29.713495+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770da000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 heartbeat osd_stat(store_statfs(0x197c6a000/0x0/0x1bfc00000, data 0x2aed868/0x2d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:30.713652+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 98983936 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.725869179s of 15.739192009s, submitted: 4
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770da000 session 0x5587744b7e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:31.713820+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5671973 data_alloc: 234881024 data_used: 20385792
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548249600 unmapped: 98975744 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558779334c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558779334c00 session 0x55877450f4a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773017800 session 0x55877317fc20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:32.713975+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c65000/0x0/0x1bfc00000, data 0x2aef523/0x2d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:33.714134+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:34.714281+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:35.714504+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:36.714662+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:37.714947+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:38.715097+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:39.715254+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 98967552 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:40.715389+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:41.715573+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:42.715759+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:43.716008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:44.716171+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:45.716401+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2ba5523/0x2d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699c00 session 0x558779c42d20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:46.716535+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5680025 data_alloc: 234881024 data_used: 20385792
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699400 session 0x558773233a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 98959360 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:47.716749+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773017800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773017800 session 0x558773e42780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770d8400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.519697189s of 17.087022781s, submitted: 10
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770d8400 session 0x558775f9ef00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:48.716935+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699400
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558773699c00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:49.717122+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:50.717275+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:51.717454+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681799 data_alloc: 234881024 data_used: 20459520
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:52.717659+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:53.717806+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:54.717966+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:55.718120+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:56.718263+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681799 data_alloc: 234881024 data_used: 20459520
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:57.718464+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:58.718614+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:14:59.718729+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c61000/0x0/0x1bfc00000, data 0x2ba5533/0x2d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:00.718875+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:01.719032+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.313117027s of 13.386991501s, submitted: 2
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5692281 data_alloc: 234881024 data_used: 21245952
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:02.719159+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:03.719413+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:04.719560+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197c57000/0x0/0x1bfc00000, data 0x2baf533/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:05.719759+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:06.719962+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5703035 data_alloc: 234881024 data_used: 21241856
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 98951168 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:07.720130+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:08.720302+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:09.720484+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b78000/0x0/0x1bfc00000, data 0x2cc8533/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:10.720619+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:11.720763+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713751 data_alloc: 234881024 data_used: 21389312
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:12.720911+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:13.721054+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:14.721265+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:15.721451+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b78000/0x0/0x1bfc00000, data 0x2cc8533/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:16.721587+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713751 data_alloc: 234881024 data_used: 21389312
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:17.721771+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:18.721894+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.206661224s of 17.654935837s, submitted: 16
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:19.722029+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:20.722188+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 549494784 unmapped: 97730560 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b77000/0x0/0x1bfc00000, data 0x2cc9533/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:21.722330+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5713979 data_alloc: 234881024 data_used: 21389312
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:22.722483+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:23.722639+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:24.722806+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:25.723007+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:26.723148+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699400 session 0x558773f2d4a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558773699c00 session 0x558777956000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5711339 data_alloc: 234881024 data_used: 21495808
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x5587770dac00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 heartbeat osd_stat(store_statfs(0x197b6f000/0x0/0x1bfc00000, data 0x2cc9533/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:27.723301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x5587770dac00 session 0x5587744383c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:28.723447+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:29.723601+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558785e46800 session 0x55877450f0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1a9000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.847147942s of 10.864791870s, submitted: 5
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x55878c1a9000 session 0x558775f874a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:30.723735+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e47000
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 ms_handle_reset con 0x558785e47000 session 0x558775d5d0e0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1ab800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x55878c1ab800 session 0x558773681e00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:31.723838+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5696562 data_alloc: 234881024 data_used: 21389312
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:32.723977+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 heartbeat osd_stat(store_statfs(0x197c62000/0x0/0x1bfc00000, data 0x2af116e/0x2d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x55877441fc00 session 0x558779dfcd20
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:33.724141+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x558775130c00 session 0x5587740d54a0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x558785e46800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 ms_handle_reset con 0x558785e46800 session 0x558773ede3c0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:34.724286+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:35.724419+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:36.724581+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5688029 data_alloc: 234881024 data_used: 21274624
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548732928 unmapped: 98492416 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:37.724792+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548765696 unmapped: 98459648 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55877e6c8800
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 ms_handle_reset con 0x55877e6c8800 session 0x558778aa3a40
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: handle_auth_request added challenge on 0x55878c1abc00
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:38.724941+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x197c86000/0x0/0x1bfc00000, data 0x2acec7a/0x2d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 ms_handle_reset con 0x55878c1abc00 session 0x55877b486780
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:39.725073+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:40.725211+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:41.725388+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:42.725510+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548601856 unmapped: 98623488 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:43.725702+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:44.725906+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:45.726109+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:46.726259+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:47.726526+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:48.726669+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:49.726876+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548610048 unmapped: 98615296 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:50.727060+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:51.727221+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:52.727355+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:53.727530+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:54.727872+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:55.728059+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:56.728198+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:57.728420+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:58.728584+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548618240 unmapped: 98607104 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:15:59.728759+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:00.728873+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:01.729075+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:02.729178+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:03.729480+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:04.729689+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:05.729857+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548626432 unmapped: 98598912 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:06.730076+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:07.730431+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:08.730608+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:09.730759+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:10.730915+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:11.731052+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:12.731250+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:13.731396+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548634624 unmapped: 98590720 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:14.731584+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:15.731766+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:16.731989+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:17.732163+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:18.732336+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:19.732474+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:20.732692+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:21.732841+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548642816 unmapped: 98582528 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:22.733007+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:23.733294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:24.733525+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:25.733719+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:26.733925+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:27.734230+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:28.734435+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548651008 unmapped: 98574336 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:29.734714+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548659200 unmapped: 98566144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:30.734895+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548659200 unmapped: 98566144 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:31.735055+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:32.735228+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:33.735394+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:34.735524+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:35.735866+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:36.736067+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548667392 unmapped: 98557952 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:37.736305+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:38.736643+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:39.736829+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:40.737014+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:41.737213+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:42.737386+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:43.737559+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:44.737709+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548675584 unmapped: 98549760 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:45.737854+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 98541568 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:46.738026+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548683776 unmapped: 98541568 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:47.738192+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548691968 unmapped: 98533376 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:48.738410+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548691968 unmapped: 98533376 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:49.738599+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548700160 unmapped: 98525184 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:50.738748+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:51.738905+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:52.739077+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:53.739299+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548708352 unmapped: 98516992 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:54.739512+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548716544 unmapped: 98508800 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:55.739710+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548716544 unmapped: 98508800 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:56.739853+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:57.740385+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:58.740514+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:16:59.740656+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:00.740813+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 98500608 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:01.740934+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548798464 unmapped: 98426880 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:02.741108+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548438016 unmapped: 98787328 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:03.741274+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548429824 unmapped: 98795520 heap: 647225344 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'log dump' '{prefix=log dump}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:04.741396+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548446208 unmapped: 109821952 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:05.741568+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:06.741698+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:07.741861+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:08.742008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:09.742136+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:10.742270+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547667968 unmapped: 110600192 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:11.742382+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:12.742547+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:13.742699+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:14.742851+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:15.743039+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:16.743213+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:17.743491+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547684352 unmapped: 110583808 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:18.743731+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:19.743933+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:20.748785+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:21.749011+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:22.749382+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:23.749516+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:24.749702+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547692544 unmapped: 110575616 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:25.749944+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547700736 unmapped: 110567424 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:26.750489+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547700736 unmapped: 110567424 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:27.750755+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547700736 unmapped: 110567424 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:28.751012+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:29.751165+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:30.751294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:31.751455+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:32.751666+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:33.752050+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:34.752294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 110559232 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:35.752479+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:36.752960+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:37.753518+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:38.753917+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:39.754104+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:40.754560+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:41.754952+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 110551040 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:42.755226+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 110542848 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:43.755546+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 110542848 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:44.755757+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 110542848 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:45.756409+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 110542848 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:46.756805+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 110534656 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:47.757111+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 110534656 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:48.757518+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 110534656 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:49.757862+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 110526464 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:50.758053+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547749888 unmapped: 110518272 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:51.758215+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547749888 unmapped: 110518272 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:52.758406+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:53.758638+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:54.758985+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:55.759200+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:56.759611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:57.759826+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:58.760136+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 110501888 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:17:59.760404+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:00.760556+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:01.760710+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:02.761066+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:03.761287+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:04.761672+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:05.761827+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:06.761979+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 110493696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:07.762157+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547782656 unmapped: 110485504 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:08.762301+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547782656 unmapped: 110485504 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:09.762621+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547782656 unmapped: 110485504 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:10.762871+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547799040 unmapped: 110469120 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:11.763032+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547799040 unmapped: 110469120 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:12.763186+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547799040 unmapped: 110469120 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:13.763525+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547799040 unmapped: 110469120 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:14.763761+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:15.763935+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:16.764208+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:17.764542+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:18.764680+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:19.764832+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:20.765017+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547807232 unmapped: 110460928 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:21.765240+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547815424 unmapped: 110452736 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:22.765434+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 110444544 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:23.765716+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:24.765860+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:25.766113+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:26.766345+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:27.766646+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:28.766841+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 110436352 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:29.767063+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:30.767264+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:31.767480+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:32.767651+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:33.767990+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:34.768418+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:35.768626+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:36.768936+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 110428160 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:37.769133+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547848192 unmapped: 110419968 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:38.769428+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547848192 unmapped: 110419968 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:39.769633+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547848192 unmapped: 110419968 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:40.769846+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547856384 unmapped: 110411776 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:41.770003+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547856384 unmapped: 110411776 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:42.770220+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547856384 unmapped: 110411776 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:43.770471+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547856384 unmapped: 110411776 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:44.770760+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547856384 unmapped: 110411776 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:45.770998+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547864576 unmapped: 110403584 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:46.771132+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547864576 unmapped: 110403584 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:47.771380+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:48.771530+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:49.771728+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:50.771889+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:51.772178+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:52.772373+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:53.772618+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547872768 unmapped: 110395392 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:54.772766+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:55.773008+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:56.773210+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:57.773540+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:58.773689+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:18:59.773865+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:00.774029+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547889152 unmapped: 110379008 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:01.774374+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547897344 unmapped: 110370816 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:02.774561+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:03.774752+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:04.774921+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:05.775136+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:06.775282+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:07.775487+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:08.775646+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547905536 unmapped: 110362624 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:09.775841+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:10.776007+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:11.776212+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:12.776377+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:13.776577+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:14.776744+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:15.776909+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:16.777086+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:17.777343+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547913728 unmapped: 110354432 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:18.777551+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547921920 unmapped: 110346240 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:19.777698+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:20.777879+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:21.778054+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:22.778223+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:23.778365+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:24.778517+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547930112 unmapped: 110338048 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:25.778664+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:26.778836+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:27.779067+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:28.779193+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:29.779423+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:30.779618+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:31.779827+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:32.780000+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547938304 unmapped: 110329856 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:33.780248+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547946496 unmapped: 110321664 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:34.780405+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547946496 unmapped: 110321664 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:35.780545+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:36.780659+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:37.780827+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:38.780950+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:39.781093+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:40.781180+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:41.781287+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:42.781476+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:43.781611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547962880 unmapped: 110305280 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:44.781743+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 110297088 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:45.781882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 110297088 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:46.782039+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 110297088 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:47.782245+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 110297088 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:48.782416+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547979264 unmapped: 110288896 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:49.782611+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547979264 unmapped: 110288896 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:50.782769+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547979264 unmapped: 110288896 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:51.782917+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:52.783070+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:53.783211+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:54.783391+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:55.783539+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:56.783745+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:57.783999+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 547987456 unmapped: 110280704 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:58.784166+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:19:59.784347+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:00.784468+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:01.784601+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:02.784783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:03.784937+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:04.785095+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548003840 unmapped: 110264320 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:05.785243+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548012032 unmapped: 110256128 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:06.785421+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548012032 unmapped: 110256128 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:07.785622+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548012032 unmapped: 110256128 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:08.785767+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548020224 unmapped: 110247936 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:09.785917+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548020224 unmapped: 110247936 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:10.786079+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548028416 unmapped: 110239744 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:11.786282+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548028416 unmapped: 110239744 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:12.786464+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548028416 unmapped: 110239744 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:13.786616+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548036608 unmapped: 110231552 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:14.786783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548044800 unmapped: 110223360 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:15.786968+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:16.787159+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:17.787419+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:18.787626+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:19.787783+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:20.787961+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:21.788175+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:22.788375+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:23.788526+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548052992 unmapped: 110215168 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:24.788712+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:25.788891+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:26.789028+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:27.789189+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:28.789333+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:29.789509+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:30.789701+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:31.789869+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548061184 unmapped: 110206976 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:32.790038+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548077568 unmapped: 110190592 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:33.790180+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:34.790367+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:35.790543+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:36.790692+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:37.790876+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:38.791027+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548085760 unmapped: 110182400 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:39.791175+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:40.791360+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:41.791517+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:42.791671+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:43.791844+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:44.791970+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:45.792120+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:46.792277+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548093952 unmapped: 110174208 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:47.792513+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 110166016 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:48.792691+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:49.792859+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:50.793043+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:51.793196+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:52.793377+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:53.793532+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 110157824 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:54.793689+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 110149632 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:55.793841+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 110149632 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:56.793978+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:57.794158+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:58.794396+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:20:59.794584+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:00.794790+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:01.794943+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:02.795094+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548126720 unmapped: 110141440 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:03.795227+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548134912 unmapped: 110133248 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:04.795360+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:05.795509+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:06.795649+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:07.795826+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:08.796041+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:09.796196+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:10.796392+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548151296 unmapped: 110116864 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:11.796581+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:12.796720+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:13.796931+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:14.797078+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:15.797262+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:16.797441+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:17.797672+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548159488 unmapped: 110108672 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:18.797836+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:19.797970+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:20.798255+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:21.798450+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:22.798626+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:23.798823+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:24.799086+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:25.799260+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548175872 unmapped: 110092288 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:26.799388+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:27.799604+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:28.799703+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:29.799855+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:30.799949+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:31.800066+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:32.800183+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548192256 unmapped: 110075904 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:33.800296+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548208640 unmapped: 110059520 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:34.800508+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548208640 unmapped: 110059520 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:35.800690+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:36.801559+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:37.801732+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:38.801882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:39.802431+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:40.802562+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:41.802707+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:42.802839+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548216832 unmapped: 110051328 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:43.803196+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548225024 unmapped: 110043136 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:44.803523+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548233216 unmapped: 110034944 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:45.803700+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548233216 unmapped: 110034944 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:46.803948+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548233216 unmapped: 110034944 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:47.804258+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 110026752 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 79K writes, 311K keys, 79K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s
                                           Cumulative WAL: 79K writes, 29K syncs, 2.66 writes per sync, written: 0.31 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 3524 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 2.73 MB, 0.00 MB/s
                                           Interval WAL: 1285 writes, 551 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:48.804406+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 110026752 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:49.804678+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548241408 unmapped: 110026752 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:50.804830+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548249600 unmapped: 110018560 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:51.805030+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548249600 unmapped: 110018560 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:52.805294+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548249600 unmapped: 110018560 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:53.805511+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 110010368 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:54.805712+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 110010368 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:55.806014+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 110010368 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:56.806195+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 110010368 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:57.806387+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 110010368 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:58.806754+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:21:59.806914+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:00.807058+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:01.807237+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:02.807407+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:03.807544+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:04.807689+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 110002176 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:05.807856+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548274176 unmapped: 109993984 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:06.808006+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548290560 unmapped: 109977600 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:07.808172+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548290560 unmapped: 109977600 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:08.808344+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:09.808510+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:10.808849+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:11.809032+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:12.809176+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:13.809354+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:14.809602+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548298752 unmapped: 109969408 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:15.809725+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:16.809917+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:17.810110+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5495751 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:18.810258+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:19.810403+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:20.810576+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:21.811135+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198ad5000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548306944 unmapped: 109961216 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:22.811407+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548315136 unmapped: 109953024 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 412.686676025s of 412.895324707s, submitted: 75
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494782 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:23.811568+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548331520 unmapped: 109936640 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:24.811715+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548331520 unmapped: 109936640 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,5])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:25.811873+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548331520 unmapped: 109936640 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:26.812142+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548331520 unmapped: 109936640 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:27.812407+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548331520 unmapped: 109936640 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494767 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:28.812595+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548339712 unmapped: 109928448 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:29.812744+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548339712 unmapped: 109928448 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:30.812878+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548339712 unmapped: 109928448 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:31.813049+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548347904 unmapped: 109920256 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:32.813268+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 109912064 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1.824405551s of 10.338525772s, submitted: 69
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494767 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:33.813411+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 109912064 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:34.813605+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,2])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 109912064 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:35.813773+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548356096 unmapped: 109912064 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:36.813947+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 109903872 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:37.814126+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548397056 unmapped: 109871104 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494767 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:38.814364+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548421632 unmapped: 109846528 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:39.814529+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548462592 unmapped: 109805568 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:40.814697+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:41.814882+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:42.815028+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494695 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:43.815173+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:44.815429+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:45.815582+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:46.815769+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:47.815986+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494695 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:48.816267+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:49.816492+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:50.816640+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:51.816861+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:52.817014+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494695 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:53.817214+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548470784 unmapped: 109797376 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:54.817380+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:55.817526+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:56.817665+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:57.817829+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 09:23:33 compute-2 ceph-osd[79833]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 09:23:33 compute-2 ceph-osd[79833]: bluestore.MempoolThread(0x558771cc9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494695 data_alloc: 218103808 data_used: 10915840
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:58.817970+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:22:59.818110+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548478976 unmapped: 109789184 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:23:00.818242+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548724736 unmapped: 109543424 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:23:01.818388+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: osd.2 439 heartbeat osd_stat(store_statfs(0x198adb000/0x0/0x1bfc00000, data 0x1c7bc18/0x1ec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548798464 unmapped: 109469696 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: tick
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_tickets
Nov 29 09:23:33 compute-2 ceph-osd[79833]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T09:23:02.818540+0000)
Nov 29 09:23:33 compute-2 ceph-osd[79833]: prioritycache tune_memory target: 4294967296 mapped: 548814848 unmapped: 109453312 heap: 658268160 old mem: 2845415832 new mem: 2845415832
Nov 29 09:23:33 compute-2 ceph-osd[79833]: do_command 'log dump' '{prefix=log dump}'
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.45402 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.45414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.52181 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/177295995' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3332098037' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2760050205' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.45435 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.52187 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3332299166' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3801949607' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.45450 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3530326166' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/957040257' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:23:33 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 09:23:33 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3723149447' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:33 compute-2 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:23:34 compute-2 nova_compute[232428]: 2025-11-29 09:23:34.053 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 09:23:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2252901709' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:34 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:34 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:34 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:34.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:34 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 09:23:34 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3126345618' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.48844 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.52211 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1681149973' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3723149447' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.48865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.45480 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2080612485' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/329831851' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3788781579' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.48886 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2252901709' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1537453819' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3985060618' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1271726264' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3126345618' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 09:23:35 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:35 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:35 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:35 compute-2 crontab[360584]: (root) LIST (root)
Nov 29 09:23:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 09:23:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3875292586' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:35 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 09:23:35 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2266876325' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.48901 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3729422123' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2496754110' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3269100547' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.48919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1874298122' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/467088641' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3875292586' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4205457577' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/4006659768' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2336183748' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/675610105' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/847719017' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2266876325' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3125663046' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3058393943' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:23:36 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 09:23:36 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3674731290' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:23:36 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:36 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:36 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:37 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:37 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:37 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.48931 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.48946 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3573856101' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2157463288' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3674731290' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.48964 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3933232208' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3006769114' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3552108203' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3886851741' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3196551018' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/182765302' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3972491086' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 09:23:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1756569330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 09:23:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/491029718' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:23:37 compute-2 nova_compute[232428]: 2025-11-29 09:23:37.638 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 09:23:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2390509284' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:23:37 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 09:23:37 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2241015991' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.48982 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.45621 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3165795423' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1756569330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1850719826' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.52367 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/491029718' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2390509284' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2241015991' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 09:23:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1870929738' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 09:23:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2742967440' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:23:38 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:38 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:38 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 09:23:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1731845847' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:23:38 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 09:23:38 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1697834902' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 09:23:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1967478840' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:23:39 compute-2 nova_compute[232428]: 2025-11-29 09:23:39.055 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:39 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:39 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:39 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:39 compute-2 systemd[1]: Starting Hostname Service...
Nov 29 09:23:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 09:23:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2403007096' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 podman[361097]: 2025-11-29 09:23:39.248186616 +0000 UTC m=+0.057183294 container health_status d3c694058c0f133376ba660d48990b69b30559eaf9ff2eaaba3ee8343e36ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 09:23:39 compute-2 systemd[1]: Started Hostname Service.
Nov 29 09:23:39 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 09:23:39 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3432612923' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.45639 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.52379 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.49006 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.45651 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.52388 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.52394 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.45675 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.52403 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1870929738' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2742967440' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1110412571' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1731845847' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1697834902' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1582979067' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/2104169900' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:23:39 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1967478840' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1111903357' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3263509954' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/875558750' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:40 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:40 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:40.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:40 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 09:23:40 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3299449446' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.45705 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.52424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.45723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.52439 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.45729 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2403007096' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.52457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/559655583' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.52472 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3432612923' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1375575306' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1859689210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.45753 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1150461807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1111903357' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/395779013' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3459416611' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.45768 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3263509954' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/875558750' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:40 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3299449446' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 09:23:41 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:41 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:41 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:41.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.49132 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2674347029' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1156689358' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.49153 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.49159 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.52556 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.45822 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/378849710' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:23:41 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1165798035' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 09:23:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3513211539' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:23:42 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:42 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:42 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:42.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:42 compute-2 nova_compute[232428]: 2025-11-29 09:23:42.640 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:42 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 09:23:42 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2864833885' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.49174 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.49168 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.49180 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2399889830' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/778411828' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.49192 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3513211539' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3567744018' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:23:42 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/64192793' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:23:43 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:43 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000031s ======
Nov 29 09:23:43 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:43.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Nov 29 09:23:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 09:23:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3466932149' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:23:43 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 09:23:43 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1832272552' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:23:44 compute-2 nova_compute[232428]: 2025-11-29 09:23:44.060 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:44 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:44 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:44 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:44.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:44 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.49204 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2864833885' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1773989843' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/3171430122' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.49219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/3466932149' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1832272552' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/3897280071' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:23:45 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1365637237' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:23:45 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:45 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:45 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:45.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:45 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 09:23:45 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/700449798' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.52592 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.45876 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/2010024013' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.52616 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/4128764641' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/700672331' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/700449798' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1858769165' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 09:23:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 09:23:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2405539361' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:23:46 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:46 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:46 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:46.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:46 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 09:23:46 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1216671340' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:23:47 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:47 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.001000032s ======
Nov 29 09:23:47 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:47.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.45912 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.52649 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.49276 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.52658 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.45936 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/2405539361' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1026858214' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.101:0/1974847851' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.102:0/1216671340' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: from='client.? 192.168.122.100:0/1982774867' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 09:23:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2707939131' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 09:23:47 compute-2 nova_compute[232428]: 2025-11-29 09:23:47.641 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 09:23:47 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3268744493' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 09:23:47 compute-2 ceph-mon[77138]: mon.compute-2@1(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 09:23:48 compute-2 nova_compute[232428]: 2025-11-29 09:23:48.205 232432 DEBUG oslo_service.periodic_task [None req-71762240-0e0b-4295-b693-3ba5ac69418a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 09:23:48 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 09:23:48 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2683671410' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 09:23:48 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:48 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:48 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:48.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 09:23:49 compute-2 ceph-mon[77138]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 09:23:49 compute-2 ceph-mon[77138]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4173176140' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 09:23:49 compute-2 nova_compute[232428]: 2025-11-29 09:23:49.061 232432 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 09:23:49 compute-2 radosgw[83394]: ====== starting new request req=0x7f55978876f0 =====
Nov 29 09:23:49 compute-2 radosgw[83394]: ====== req done req=0x7f55978876f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 09:23:49 compute-2 radosgw[83394]: beast: 0x7f55978876f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
